65240477
pes2o/s2orc
v3-fos-license
Sensor Transmission Power Schedule for Smart Grids

Smart grids have attracted much attention owing to the requirements of new-generation renewable energy. Nowadays, real-time state estimation, with the help of phasor measurement units, plays an important role in keeping the smart grid stable and efficient. However, the limitations of the communication channel are not considered in related work. Considering the limited on-board batteries of the wireless sensors commonly used in smart grids, a transmission power schedule is designed in this paper, which minimizes energy consumption under a proper EKF filtering performance constraint. Based on event-triggered estimation theory, a filtering algorithm is also provided that utilizes the information contained in the power schedule. Finally, its feasibility and performance are demonstrated using the standard IEEE 39-bus system with phasor measurement units (PMUs).

Introduction

Aiming at sustaining long-term energy supply, renewable energy has attracted widespread attention. Renewable energy sources such as wind and solar need a new-generation power grid infrastructure, called the smart grid, to accommodate them in the power network. The smart grid utilizes automated control, intelligence and information technology to realize a reliable and sustainable energy management system (EMS) [1][2][3]. Within the EMS, accurate estimation of the dynamic state of the smart grid plays the key role, which is hard to realize with conventional supervisory control and data acquisition (SCADA) systems. To satisfy the need for accurate dynamic estimation, the real-time phasor measurement unit (PMU) was invented, utilizing GPS time-stamping technology to provide high-frequency, accurate, synchronous phasor data. Based on PMUs, wide-area measurement systems (WAMS) have been established to accomplish synchrophasor real-time state estimation (RTSE) [4,5]. The structure of a WAMS is shown in Fig. 1 [6]. A large amount of research on RTSE has been developed based on PMU observation data [7][8][9][10][11][12]. Due to the nonlinearity of power systems, nonlinear filters are widely applied in RTSE. The extended Kalman filter (EKF), which linearizes the system functions to realize the covariance iteration in the nonlinear filtering process, was first introduced to estimate the dynamic states of the smart grid [7]. However, the higher-order terms of the Taylor series translate into linearization error during the EKF process. Thus, when the system's nonlinearity is strong, the EKF leads to low filtering accuracy and even filtering divergence.

A usual assumption in RTSE research is that the communication channel in Fig. 1 is ideal. However, with the widespread deployment of wireless sensors, the communication channel between the smart grid and the estimation center is susceptible to environmental influence. Among all such influences, transmission failure, which means that an observation transmitted by the smart grid is not received by the estimation center, has the most serious impact on the estimation. The probability of transmission failure typically decreases as the transmission power increases. Because the transmission power is usually provided by limited on-board batteries in the smart grid, a sensor transmission power schedule should be designed to balance the transmission failure probability against the transmission power consumption [13][14][15]. This paper focuses on the sensor transmission power schedule problem in the RTSE of power systems. Considering the nonlinearity of power systems, the transmission power schedule is analyzed based on the EKF.
The rest of this paper is organized as follows. Section 2 introduces the dynamic state estimation and the transmission power schedule problem of power systems. Section 3 designs the transmission power schedule, which minimizes energy consumption and guarantees proper estimation performance by determining the transmission power according to the innovation of the observation; the filtering algorithm is also provided, which utilizes the information in the transmission power schedule to improve estimation performance. Section 4 uses the IEEE 39-bus system to demonstrate the feasibility of the proposed method.

The following standard notations are adopted throughout this paper. The norm of a vector x stands for the maximum norm ‖x‖_∞ = max(|x_1|, ..., |x_n|). The expectation of a random variable x is written E(x), and the conditional expectation of x given y is written E(x|y). The probability of an event is written Pr(·).

System description and problem statement

As most smart grids are nonlinear systems, a general nonlinear model is set up as

x_{k+1} = f(x_k) + ω_k,  y_k = g(x_k) + ν_k,

where x_k ∈ R^n and y_k ∈ R^p are the system state and the measurement output, respectively, and f(x) and g(x) are continuously differentiable at every x. The process noise ω_k ∈ R^n and the measurement noise ν_k ∈ R^p are white sequences with zero means, with covariance matrices E[ω_k ω_j^T] = Q_k δ_{kj} > 0 and E[ν_k ν_j^T] = R_k δ_{kj} > 0. The initial state x_0 is Gaussian with zero mean and covariance matrix P_0. Moreover, x_0, ω_k and ν_k are assumed to be mutually independent.

The measurement y_k is transmitted over a wireless fading channel by the wireless emission infrastructure in the smart grid, as shown in Figure 1. A binary variable γ_k is introduced to indicate whether y_k is successfully received by the estimation center: γ_k = 1 if y_k is received and γ_k = 0 otherwise. At time instant k, the receiving state γ_k and the measurement y_k together make up the observation information for estimation. It should be noticed that y_k is meaningless when γ_k = 0, because the estimator only receives pure noise at that instant. Moreover, the trigger-containing information set is defined as F_k = {γ_0, ..., γ_k, γ_0 y_0, ..., γ_k y_k}.

The status of γ_k depends on both the communication channel and the transmission power. By the law of large numbers, the noise in the communication channel can be described as additive white Gaussian noise (AWGN). At the same time, the observation signal is modulated by quadrature amplitude modulation (QAM) [16]. Therefore, by communication theory, the symbol error rate (SER) can be approximated as a decreasing function of the transmission power, where N_0 is the AWGN noise power spectral density, W is the channel bandwidth, α is a constant determined by the number of bytes in the observation signal, and M_k is the transmission power of the QAM signal. Among these parameters, N_0, W and α are positive constants determined offline in practice, while M_k is the only parameter that needs to be scheduled. According to (5) and [16], the probability of transmission failure λ_k is determined by a channel constant θ and the power M_k. It should be noticed that θ ∈ (0, 1); thus, the transmission failure probability satisfies λ_k ∈ (0, 1) (see the sketch below for one concrete parametrization). The task of the sensor transmission power schedule is to calculate the minimum transmission power M_k that guarantees proper estimation performance.

EKF-based transmission power schedule and filtering algorithm

This section establishes the transmission power schedule based on the EKF; the filtering algorithm under the designed schedule is also proposed to improve the estimation accuracy, inspired by event-triggered filter theory.
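As a concrete reading of the channel model of Section 2, the following minimal Python sketch maps a transmission power M_k to a failure probability and samples the reception indicator γ_k. Since the text above only fixes that λ_k is built from a constant θ ∈ (0, 1) and decreases in M_k, the specific form λ_k = θ^{M_k} with θ = exp(−α/(N_0 W)) used below is our assumption, and all parameter values are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def failure_prob(M_k, alpha=1.0, N0=1e-3, W=1e4):
    """Transmission failure probability lambda_k for power M_k.

    Assumed model: lambda_k = theta**M_k with theta = exp(-alpha/(N0*W)),
    so theta lies in (0, 1), lambda_k lies in (0, 1), and lambda_k
    decreases as the transmission power M_k grows -- matching the
    qualitative behavior stated in Section 2.
    """
    theta = np.exp(-alpha / (N0 * W))
    return theta ** M_k

def sample_gamma(M_k):
    """Draw gamma_k: 1 if y_k reaches the estimation center, else 0."""
    return int(rng.random() >= failure_prob(M_k))
```

With this parametrization, the schedule of Section 3.2 amounts to choosing the smallest M_k for which the expected failure probability stays below the gate λ_c.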
EKF theory

In this part, the two-step EKF is introduced, which is executed when the observation y_k is received. For convenience, the one-step state prediction and the corresponding prior error covariance matrix are denoted by x̂_{k|k−1} and P_{k|k−1}. Given that x̂_{k−1|k−1} and P_{k−1|k−1} are already known, the iteration of the EKF consists of a prediction step and a measurement update step. The prediction step is

x̂_{k|k−1} = f(x̂_{k−1|k−1}),  P_{k|k−1} = A_k P_{k−1|k−1} A_k^T + Q_{k−1},

where A_k is obtained by first-order Taylor expansion of the continuous function f(x) at x̂_{k−1|k−1}. Moreover, if the observation y_k is received by the estimation center, the measurement update step satisfies

x̂_{k|k} = x̂_{k|k−1} + K_k (y_k − ŷ_{k|k−1}),  P_{k|k} = P_{k|k−1} − K_k C_k P_{k|k−1},

where C_k is the first-order Taylor polynomial of the continuous function g(x) at x̂_{k|k−1}, and the gain satisfies K_k = P_{k|k−1} C_k^T (C_k P_{k|k−1} C_k^T + R_k)^{−1}.

Transmission power schedule

It can be noticed from (10) that different values of y_k have different influences on the measurement update step of the EKF. In filtering theory, (y_k − ŷ_{k|k−1}) is known as the "innovation" of y_k, which evaluates the importance of the observation y_k. To obtain appropriate estimation accuracy, the transmission power should be determined by the innovation of y_k: when the innovation is large, the information contained in y_k is important for the filter, and spending plenty of transmission power is worthwhile; on the contrary, a y_k with small innovation is less important for estimation and does not need much transmission power. Before utilizing the innovation of y_k, a weighted average can be carried out with a weighting matrix Z_k to be designed, where ŷ_{k|k−1} is the prior conditional mean of y_k. The expectation of the transmission failure probability can then be calculated. According to related work on the EKF, the performance of the EKF depends on the expectation of the transmission failure probability. To obtain proper EKF performance, it must be satisfied that Pr(γ_k = 0 | F_{k−1}) ≤ λ_c, where λ_c is a gate depending on the particular system [17]. Once λ_c is determined, and since the aim of the transmission power schedule is to minimize energy consumption as much as possible, the schedule parameter ρ can be calculated accordingly.

Filtering algorithm in the estimation center

Section 3.2 established the transmission power schedule in the smart grid; this section provides the corresponding filtering algorithm in the estimation center. From Section 3.2 it can be noticed that the observation y_k may either be successfully received by the estimation center or lost. When γ_k = 1, the filtering algorithm is realized by the EKF of Section 3.1. Otherwise, when γ_k = 0, the filtering algorithm should be carefully designed by taking advantage of event-trigger theory, to make full use of the information contained in the transmission power schedule. The prediction step under γ_k = 0 is the same as in the EKF. However, the measurement update step is quite different from the EKF, because the observation y_k is not available to the estimation center. Even so, the information contained in the transmission power schedule can be utilized by the filtering algorithm. Noticing that (16) coincides with the trigger strategy in [18], the stochastic event-triggered filter of [18] can be utilized to realize the filtering algorithm when γ_k = 0, with the estimate and covariance updated using the schedule information through a gain K_k analogous to the EKF gain. Comparing the posterior conditional covariance in (25), which utilizes the transmission power schedule information, with the prior conditional covariance, it can be found that the posterior covariance is smaller than the prior covariance due to the term K_k C_k P_{k|k−1}, which reflects the influence of the information contained in the transmission power schedule.
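The two branches of the filtering algorithm can be summarized in one step function. This is a minimal sketch, assuming the standard two-step EKF for γ_k = 1 and the stochastic event-triggered update of [18] for γ_k = 0; the model functions f, g, their Jacobians jac_f, jac_g, and the weighting matrix Zk are placeholders, and the exact form of the γ_k = 0 gain is our reading of (24)-(25) rather than the authors' verbatim formula.

```python
import numpy as np

def et_ekf_step(x, P, y, gamma, f, g, jac_f, jac_g, Q, R, Zk):
    """One iteration of the ET-EKF of Section 3 (sketch)."""
    # Prediction step: identical whether or not y_k was received.
    A = jac_f(x)
    x_pred = f(x)
    P_pred = A @ P @ A.T + Q

    C = jac_g(x_pred)
    if gamma == 1:
        # Standard EKF measurement update with the received y_k.
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R)
        x_new = x_pred + K @ (y - g(x_pred))
    else:
        # y_k unavailable: the very fact that it was dropped under the
        # innovation-based schedule carries information (the weighted
        # innovation was likely small), entering via the inv(Zk) term.
        K = P_pred @ C.T @ np.linalg.inv(C @ P_pred @ C.T + R
                                         + np.linalg.inv(Zk))
        x_new = x_pred  # no innovation correction without y_k

    # In both branches the posterior covariance is reduced by K C P_pred,
    # reflecting the information actually used.
    P_new = P_pred - K @ C @ P_pred
    return x_new, P_new
```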
Taking advantage of the binary variable γ_k, the filtering algorithms for the two cases of receiving or losing the observation y_k can be unified.

Case study and simulation

The IEEE 39-bus system, also known as the 10-generator New England system, is widely applied to evaluate the estimation performance of smart grids [8,10]. Fig. 2 depicts the layout of the IEEE 39-bus system, where each node shown is a generator. The generator parameters are quoted from [10]. The simulation runs for 15 seconds after a fault breaks out on the line connecting bus 14 to bus 15. With the sampling interval selected as 0.02 s, estimation is performed over 750 discrete time steps. As in Fig. 1, each generator constitutes a smart grid, with its own dynamic model. The system noise is set as Gaussian noise with covariance matrix diag[10^{-18}, 10^{-18}, 10^{-9}, 10^{-9}, 10^{-9}, 10^{-9}, 10^{-9}], and the observation noise is set as Gaussian noise with covariance matrix diag[10^{-10}, 10^{-8}, 10^{-10}, 10^{-8}] (these settings are collected in the configuration sketch after this section). The estimator is placed at the estimation center, and the observations are transmitted according to the power schedule of Section 3.2. For convenience, the filtering algorithm of Section 3.3 is denoted ET-EKF. The gate for the expectation of the transmission failure probability is selected as λ_c = 90%, which means that only 10% of the PMU measurements are received by the estimation center. The simulation results are shown in Fig. 3. It can be seen that the ET-EKF remains stable in the statistical sense and provides accurate filtering results, even though only one-tenth of the observations are received by the estimation center.

Conclusion

In this paper, an event-trigger-based transmission power schedule is designed. Moreover, the corresponding filter is designed based on the EKF, which utilizes the information contained in the power schedule to improve the filtering accuracy. The simulation verifies that the power schedule and filter designed in this paper can reduce the transmission power and achieve highly accurate estimation results at the same time.
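For reference, the case-study configuration translates directly into code. The numbers below (sampling interval, horizon, noise covariances, gate) are those stated in Section 4; the generator model functions and the schedule routine are hypothetical names standing in for the omitted dynamics.

```python
import numpy as np

dt, n_steps = 0.02, 750          # 0.02 s sampling over a 15 s horizon
Q = np.diag([1e-18, 1e-18, 1e-9, 1e-9, 1e-9, 1e-9, 1e-9])  # process noise
R = np.diag([1e-10, 1e-8, 1e-10, 1e-8])                    # PMU noise
lambda_c = 0.9   # failure-probability gate: ~10% of PMU data received

# Sketch of the estimation loop (f, g, jac_f, jac_g, Zk and
# schedule_power are placeholders for the generator model and the
# Section 3.2 schedule):
# for k in range(n_steps):
#     M_k = schedule_power(x, P, Zk, lambda_c)
#     gamma = sample_gamma(M_k)
#     x, P = et_ekf_step(x, P, y[k], gamma, f, g, jac_f, jac_g, Q, R, Zk)
```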
2019-02-17T14:17:13.390Z
2017-11-01T00:00:00.000
{ "year": 2017, "sha1": "b1ef8a5a520bc3be93540092af24dfd74965b281", "oa_license": null, "oa_url": "https://doi.org/10.1088/1755-1315/93/1/012071", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "800f0c4c9eb77bc3f3decf7d4eb6993ac1d171b2", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Physics", "Computer Science" ] }
15210966
pes2o/s2orc
v3-fos-license
Genus theta series, Hecke operators and the basis problem for Eisenstein series

Dedicated to the memory of Tsuneo Arakawa.

We derive explicit formulas for the action of the Hecke operator $T(p)$ on the genus theta series of a positive definite integral quadratic form and prove a theorem on the generation of spaces of Eisenstein series by genus theta series. We also discuss connections of our results with Kudla's matching principle for theta integrals.

Introduction

In the theory of theta series of positive definite quadratic forms, the problem of giving explicit formulas for the action of Hecke operators on theta series has received some attention [1,19]. If p is prime to the level N of the quadratic form q of rank m in question, the action of the usual generators T(p), T_i(p^2) of the p-part of the Hecke algebra for the group Γ_0^{(n)}(N) ⊆ Sp_n(Z) is known [1,19], except for the case that n < m/2 and χ(p) = −1, where χ is the nebentype character of the degree n theta series of q. In this last case it is unknown whether T(p) leaves invariant the space of cusp forms generated by the theta series of positive definite quadratic forms of the same level and rational square class of the discriminant. Some deep results concerning this question have been obtained by Waldspurger [18]. To our surprise, there seem to be no results available even for the question of how to describe the action of T(p) on the genus theta series of q, i.e., Siegel's weighted average over the theta series of the quadratic forms q′ in the genus of q. The present note intends to fill this gap.

It turns out that we have different methods available to express the image of the genus theta series under the operator T(p) in terms of theta series. Using results of Freitag [5], Salvati Manni [14] and Chiera [4], one obtains an expression as a linear combination of theta series of positive definite quadratic forms of level lcm(N, 4). We show in Section 5 that this result can be improved to an (explicit) expression as a linear combination of genus theta series of positive definite quadratic forms of level N if N is an odd prime. In fact we prove in that case that any n + 1 of the genera of quadratic forms that are rationally equivalent to the given genus and have level dividing N yield a basis of the relevant space of holomorphic Eisenstein series. This can be generalized to arbitrary square free level under a slightly technical condition on the degree n, depending on the Q_p-equivalence class, for p dividing N, of the given genus of quadratic forms; generalizations to arbitrary level will be the subject of future work.

On the other hand, using the explicit expression for the action of Hecke operators on Fourier coefficients of modular forms given in [1], Siegel's mass formula and relations between the local densities of quadratic forms, we find a much simpler expression: the genus theta series is transformed into a multiple of the genus theta series of a different genus of quadratic forms. If χ(p) = −1, the genus involved turns out to be indefinite, and the theta series is the one defined by Siegel (n = 1) and Maaß [17,13]. This phenomenon is an instance (with quite explicit data) of the matching principle for Siegel-Weil integrals attached to different quadratic spaces that has been observed by Kudla in [11]; we discuss this in Section 6. As a consequence of our work we are able to give a positive solution to the basis problem for modular forms in a number of new cases; this will be done in joint work with S. Böcherer.
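Throughout the paper, the Fourier coefficients r(gen L, A) of the genus theta series are computed through local densities α_ℓ(L, A), introduced in the Preliminaries below. For concreteness, the following is one standard normalization of these densities, stated here as an assumption on our part since conventions differ between authors; it includes the extra factor 1/2 for m = n mentioned in the Preliminaries:

\[
\alpha_\ell(L, A) \;=\; \ell^{\,j\left(\frac{n(n+1)}{2}-mn\right)}\;
\#\Bigl\{\,X \in M_{m,n}(\mathbb{Z}/\ell^{j}\mathbb{Z}) \;:\; q(X) \equiv A \bmod \ell^{j} H_n(\mathbb{Z}_\ell)\Bigr\}
\]

for sufficiently large j, where q(X) = (\tfrac12 B(x_i, x_j)) is formed from a Gram matrix S of L, and with an additional factor 1/2 if m = n.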
Preliminaries

Let L be a lattice of full rank on the m-dimensional vector space V over Q, q : V → Q a positive definite quadratic form with q(L) ⊆ Z, B(x, y) = q(x + y) − q(x) − q(y) the associated symmetric bilinear form, and N = N(L) the level of q (i.e., N^{−1}Z = q(L^#)Z, where L^# is the dual lattice of L with respect to B); we assume m = 2k to be even. Let R be Z or Z_p for some prime p and let H_n(R) denote the set of half-integral matrices of degree n over R; that is, H_n(R) is the set of symmetric matrices (a_{ij}) of degree n with entries in (1/2)R such that a_{ii} (i = 1, ..., n) and 2a_{ij} (1 ≤ i ≠ j ≤ n) belong to R. We note that for x = (x_1, ..., x_n) ∈ L^n the matrix q(x) := ((1/2)B(x_i, x_j)) is in the set H_n(Z); we also note that H_n(Z_p) is equal to the set M^{sym}_n(Z_p) of symmetric n × n matrices over Z_p for p ≠ 2. For two square matrices T_1 and T_2 we write T_1 ⊥ T_2 for the block diagonal matrix with diagonal blocks T_1 and T_2. We often write a ⊥ T instead of (a) ⊥ T if (a) is a matrix of degree 1. If K = (K, q′) is a quadratic Z_p-lattice with Gram matrix T with respect to some basis, we will freely switch notation between T and K; so, for example, if K is a one-dimensional lattice with basis vector of squared length a and M is a quadratic lattice with Gram matrix T, we write as above a ⊥ T = (a) ⊥ T = K ⊥ T = K ⊥ M.

The theta series of degree n of (L, q) is well known to be in the space M_k(Γ_0^{(n)}(N), χ) of Siegel modular forms of weight k = m/2 with character χ. For definitions and notations concerning modular forms we refer again to [1]; we recall that the Hecke operator associated to the usual double coset is denoted by T(p). We let {L_1, ..., L_h} be a set of representatives of the classes of lattices in the genus of L and write ϑ^{(n)}(gen L) for Siegel's weighted average over the genus, each class weighted by 1/|O(L_i)|, where O(L_i) is the group of isometries of L_i onto itself with respect to q. By Siegel's theorem (see [10]), the Fourier coefficient r(gen L, A) at a positive semidefinite half-integral symmetric matrix A can be expressed as a product of local densities α_ℓ(L, A), with some constant c. Here the local density α_ℓ(L, A) is given by a suitably normalized number of solutions of q(X) ≡ A modulo ℓ^j for sufficiently large j, with an additional factor 1/2 if m = n, where S denotes a Gram matrix of L.

Eisenstein series and theta series

Proposition 3.1. Let L be a lattice of rank m = 2k with positive definite quadratic form q of square free level N, let n < k − 1, and let F = ϑ^{(n)}(gen(L)) denote the genus theta series of L of degree n. Then for any prime p ∤ N, the modular form F|_k T(p) is a linear combination of genus theta series of genera of lattices with positive definite quadratic form of level N′ = lcm(N, 4).

Proof. By [2], G := F|_k T(p) is an eigenfunction of infinitely many Hecke operators T(ℓ) for the primes ℓ ∤ pN with χ(ℓ) = 1 (where χ is the nebentypus character of ϑ^{(n)}(L)). Proposition 4.3 of [5] then implies that G is in the space generated by Eisenstein series for the principal congruence subgroup of level N; this can also be obtained from Siegel's main theorem if one uses that this space is Hecke invariant. Theorem 6.9 of [5] (see also [14]) then shows that our modular form G is indeed a linear combination of theta series with characteristic for the principal congruence subgroup of level N′ = lcm(N, 4).
Since G is in fact a modular form for Γ_0(N′), Chiera's Theorem 1 [4] implies that G is a linear combination of theta series ϑ^{(n)}(K_j) attached to full lattices K_j with quadratic form of level dividing N′. It is well known that the values of the theta series of lattices in the same genus at zero-dimensional cusps are the same. From Proposition 3.3 of [5] we can then conclude that G is in fact a linear combination of the ϑ^{(n)}(gen(K_j)), as asserted.

Action of T(p) and local densities

The action of the Hecke operator T(p) on the Fourier coefficients of a Siegel modular form at nondegenerate matrices A has been described explicitly by Maaß [12] and by Andrianov (Ex. 4.2.10 of [1]). Let K be a Z-lattice with quadratic form of rank n that has Gram matrix p·A with respect to some basis, and write M_i for the set of lattices M ⊃ K for which K has elementary divisors (1, ..., 1, p, ..., p) with (n − i) entries p. Writing F(Z) = Σ_{A≥0} f(A) exp(2πi tr(AZ)) and G(Z) = (F|_k T(p))(Z) = Σ_{A≥0} g_p(A) exp(2πi tr(AZ)), one has for non-degenerate A an explicit formula (4.1) expressing g_p(A) in terms of the f(M) for M ∈ M_i. The theorem then expresses the image of the genus theta series in terms of a lattice L̃, whose localization L̃_ℓ is given explicitly in terms of L_ℓ; here ᵖL_ℓ denotes the lattice L_ℓ with quadratic form scaled by p.

Proof. It is (by induction) enough to consider nondegenerate A. We write the total factor in front of f(M) for M ∈ M_i in (4.1) as γ_i and rewrite (4.1) in the present situation. Since det M = p^{2i−n} det A for M ∈ M_i, this can be simplified further, so it remains to prove the identity (4.4); dividing both sides of (4.4) by a common factor yields the assertion that we have to prove. For χ(p) = 1 this is proved in [19] (see also [2]), where it is also proved for χ(p) = −1 and n ≥ k (in which case the factor λ_p(L) is zero). To prove it for χ(p) = −1, notice that L_p is unimodular even by assumption. By Lemma 3.5 of [15], the corresponding assertion is true for all (even) unimodular Z_p-lattices L̃_p of even rank 2k̃ with k̃ ∈ N and with χ_{L̃_p}(p) = 1. Hence both sides of our assertion (4.5) are polynomials in X = χ(p)p^{−k̃} as L̃_p varies over (even) unimodular Z_p-lattices of (varying) rank 2k̃. The truth of the assertion for L̃_p with χ_{L̃_p}(p) = 1 and k̃ arbitrary shows that these polynomials take the same value at infinitely many points, hence must be identical. The assertion is therefore true for all even unimodular L_p of even rank.

Lemma 4.2. There is a unique isometry class of rational quadratic spaces Ṽ = (Ṽ, q̃) of dimension m such that Ṽ carries a lattice L̃ whose discriminant at each finite prime ℓ is that of V_ℓ, and such that the product of the Hasse symbols s_ℓ(V′_ℓ) over the finite primes ℓ is the Hilbert symbol of p and (−1)^k times the discriminant.

Proof. If one takes V′_∞ positive definite for χ(p) = 1 and of signature (m − 2, 2) for χ(p) = −1, one sees therefore that disc V′_ℓ = disc V_ℓ for all ℓ (including ∞) and that the product of the Hasse symbols s_ℓ(V′_ℓ) over all places equals 1; hence there is a rational quadratic space Ṽ such that Ṽ_ℓ ≅ V′_ℓ for all ℓ, including ∞. The uniqueness of Ṽ is clear from the Hasse-Minkowski theorem, and that a lattice L̃ as in (4.7) exists on Ṽ is obvious.

We recall that for an integral lattice of positive determinant and even rank, Siegel [17] for degree one and Maaß [13] for arbitrary degree defined a holomorphic theta series in the indefinite case whose Fourier coefficients at positive definite A are proportional to the product of the local densities of that lattice, subject to the restriction that the signature (m_+, m_−) satisfies the condition min((m_+ + m_− − 3)/2, m_+, m_−) ≥ n. We denote this theta series (if it is defined) for L̃, normalized such that its Fourier coefficient at A is equal to r(gen L̃, A), by ϑ(L̃, Z) or also by ϑ(gen L̃, Z) (notice that this theta series does indeed depend only on the genus of the lattice).
The signature condition is always satisfied in our situation if n = 1; for bigger n it can be satisfied by choosing j in (4.2) appropriately if n ≤ k − 2 (with k = m/2). If the signature condition is not satisfied, we use the same notation r(gen L̃, A) (without knowing a priori whether these numbers are the Fourier coefficients of a modular form). Then we arrive at the following final result (Theorem 4.3): the image ϑ^{(n)}(gen L)|_k T(p) is an explicit multiple of ϑ^{(n)}(gen L̃, Z), where ϑ^{(n)}(gen L̃, Z) is a holomorphic modular form of the same level as L whose Fourier coefficient at a positive definite matrix A is equal to r(gen L̃, A). The modular form ϑ^{(n)}(gen L̃, Z) is the usual genus theta series if L̃ is positive definite, and is equal to the theta series of Siegel and Maaß from above if L̃ is indefinite and this series is defined. In particular, for all n < k there exists a holomorphic modular form of the same level as L with Fourier coefficients r(gen L̃, A) at positive definite matrices A.

Remark. a) λ_p(L) = 0 holds if n ≥ k with χ(p) = −1, which agrees with Andrianov's result [2] for this case. b) In the introduction we mentioned the question whether the space of cusp forms generated by the theta series of positive definite lattices of fixed level and rational square class of the discriminant is invariant under the action of the Hecke operators. In view of our theorem we might reformulate this question by substituting "modular forms" for "cusp forms" and omitting the restriction to positive definite lattices. Since the indefinite theta series of Siegel and Maaß don't contribute to the space of cusp forms, this doesn't change the problem with regard to the subspace of cusp forms. c) Of course the same result holds true when we take an indefinite lattice L̃ of signature (m − 2, 2) as above as our starting point.

Spaces of genus theta series for odd prime level

We will need some additional notation in this section. Let p be an odd prime. For a non-zero element a ∈ Q_p we put χ_p(a) = 1, −1, or 0 according to the square class of a. Further, for non-negative integers l, e and appropriate matrices we define a modified local density and call it (as usual) the primitive density; further, for 0 ≤ i ≤ m we put certain auxiliary quantities. Our goal in this section is to prove the following theorem:

Theorem 5.1. Let p be an odd prime, k, n ∈ N with n ≤ k − 1 and p ≡ (−1)^k mod 4. Then the space of modular forms for Γ_0^{(n)}(p) spanned by the genus theta series of degree n attached to the genera of positive definite integral quadratic lattices of rank 2k, level p and discriminant p^{2r+1} for some 0 ≤ r < k, and the space spanned by the genus theta series of degree n (in the sense of Theorem 4.3) attached to the genera of integral quadratic lattices of the fixed indefinite signature, level p and discriminant p^{2r+1} for some 0 ≤ r < k, coincide. This space has dimension n + 1 and is equal to the space of holomorphic Eisenstein series for the group Γ_0^{(n)}(p) of weight k and nontrivial quadratic character. For each of these signatures the theta series of any n + 1 of the k genera of level dividing p and having this signature form a basis of this space of modular forms.

The proof of this theorem will require a few intermediate results which may be of independent interest. A half-integral matrix S_0 over Z_p is called Z_p-maximal if it is the empty matrix or a matrix corresponding to a Z_p-maximal lattice. The main result we need is the following theorem, whose proof again is broken up into several steps:

Theorem 5.2. Let p be an odd prime and let T ∈ H_n(Z_p). Let k be a positive integer, and let S_0 be a Z_p-maximal half-integral matrix of degree not greater than 2.
Then there exist rational numbers a_i = a_i(k, S_0, T) (i = 0, 1, 2, ..., n) such that

α_p(H_{k−l−1} ⊥ pH_l ⊥ S_0, T) = a_0 + a_1 p^l + ... + a_n p^{nl}

for any l = 0, 1, ..., k − 1.

To prove the theorem, we first remark that for p ≠ 2 a Z_p-maximal matrix S_0 of degree not greater than 2 is equivalent over Z_p to one of a short list of standard types, labeled (M-1) through (M-5).

(1) Let S_0 be of type (M-3) or (M-5). Then for any k ≥ n we have an explicit evaluation of the density, one form valid for an odd positive integer m and an integer i with i ≤ (m − 1)/2, and another valid for an even positive integer m and an integer i with i ≤ m/2 and ǫ = ±1. Furthermore, by Proposition 2.8 of [7] we obtain the required identity. This proves assertion (1). Similarly, assertion (2) can be proved. Now, again by Proposition 2.2 of [8], we have a corresponding identity; on the other hand, we have a complementary one (e.g., Lemma 9 of [9]). Thus assertion (3) holds.

Now for a non-degenerate half-integral matrix B of degree n over Z_p, define a polynomial γ_p(B; X) in X. (1) Let B be a half-integral matrix of degree n over Z_p and put l = l_p(B). Then γ_p(B; X) admits an explicit evaluation, with different forms according to whether l is even or odd. Proof. Assertion (1) follows from Lemma 9 of [9]. Assertion (2) is well known (cf. [9]).

Let ( , )_p be the Hilbert symbol over Q_p and h_p the Hasse invariant (for the definition of the Hasse invariant, see [10]). Let B be a non-degenerate symmetric matrix of degree n with entries in Q_p; for even n we define an invariant ξ(B). (1) Let n be even. Then we have an evaluation in terms of ξ = ξ(B) and a rational number K(B) depending only on B. (2) Let n be odd. Then we have an analogous evaluation in terms of ξ̃ = ξ(B_2) and a rational number K(B) depending only on B. Here we understand that B_2 is the empty matrix and that ξ = 1 if n = 1.

Proposition 5.6. Let S_0 and the others be as in Lemma 5.3. By (2) of Proposition 5.5 and (2) of Lemma 5.4, we obtain the claimed expression; noting the relevant identity, assertion (2) holds.

Remark 5.7. In the above theorem, K(S_0, T) can be expressed explicitly in terms of the invariants of T.

Proposition 5.8. Let S_0, T, T̃ and the others be as in Proposition 5.6. (1) Assume that S_0 is of type (M-3) or (M-5). Then for any non-negative integer l ≤ k − 1 we have an explicit evaluation, where K(S_0, T) is the rational number in Proposition 5.5. In particular, if n = 1, for a non-zero element T of Z_p we have a closed formula, where c = c(S_0, T) is the rational number determined by T and S_0. (2) In the remaining cases we have an analogous evaluation, where K(S_0, T) is the rational number in Proposition 5.5; in particular, if n = 1, for a non-zero element T of Z_p, we have the corresponding formula, where c = c(S_0, T) is a rational number determined by T and S_0. Throughout (1) and (2), we use the usual conventions in the degenerate cases.

Proof. (1) First let n + deg S_0 be even. Then by (1) of Proposition 5.6 and (1) of Lemma 5.3, we obtain the stated evaluation. Furthermore, again by (1) and (3), this can be brought into the stated form. This proves assertion (1) in the case that n + deg S_0 is even. Next, again by (2) of Proposition 5.6 and (1) of Lemma 5.3, assertion (1) can be proved in the case that n + deg S_0 is odd. (2) First let n + deg S_0 be even. Then by (1) of Proposition 5.6 and (2) of Lemma 5.3, we obtain a first evaluation, which we evaluate further; by (1) and (3), this can be transformed into the stated form. This proves assertion (2) in the case that n + deg S_0 is even. Next, again by (2) of Proposition 5.6 and Lemma 5.3, assertion (2) can be proved in the case that n + deg S_0 is odd.

Proof of Theorem 5.2. We prove the assertion by induction on n. The assertion for n = 1 follows from (2) of Proposition 5.8. Let n ≥ 2 and assume that the assertion holds for n − 1. Then by the induction assumption we have the corresponding expansion, where a_j = a_j(s, S_0, T̃) and a′_j = a′_j(s − 1, S_0, T̃) are as in Theorem 5.2. First assume that S_0 is of type (M-3) or (M-5).
Thus by Proposition 5.8 we obtain an expression which can be rewritten as a sum of terms M(j), defined for 0 ≤ j ≤ n − 1. Then for j ≤ n − 2, M(j) is a polynomial in p^l of degree at most n − 1; on the other hand, M(n − 1) is a polynomial in p^l of degree at most n. Thus α_p(H_{k−l−1} ⊥ B_l, T) is a polynomial in p^l of degree at most n. This proves the assertion in the cases (M-3) and (M-5). Similarly, the assertion can be proved in the remaining case.

Remark 5.9. A more careful analysis shows that a_0(k, S_0, T) = 1 in the above theorem.

The above relation holds for T = p^{2r} ⊥ T̃ with any integer r and T̃ ∈ H_{n−1}(Z_p) ∩ GL_{n−1}(Q_p). Then by Proposition 5.8 we obtain an identity in which w(T̃) is a certain rational number depending only on T̃; thus, by taking the limit r → ∞, we obtain the statement of Theorem 5.10.

Proof. This is clear from Theorem 5.10 and Theorem 4.3.

a) The result of Theorem 5.1 is more generally true in the case of square free level N, in which case the dimension of the space spanned by the genus theta series becomes (n + 1)^{ω(N)}, where ω(N) is the number of primes dividing N; one then has a basis of genus theta series if one considers (n + 1)^{ω(N)} genera of lattices on the same quadratic space V such that for each p dividing N one has n + 1 local integral equivalence classes. In that case our proof given above requires the restriction that the anisotropic kernel of the quadratic space under consideration has dimension at most 2. Moreover, we cannot guarantee the holomorphy of the indefinite genus theta series if the character is trivial (i.e., if the underlying quadratic space has square discriminant). One proceeds in the proof as above, adding an induction on the number ω(N) of primes dividing N. b) A different (and much shorter) proof of Theorems 5.2 and 5.10 has been communicated to us by Y. Hironaka and F. Sato [6]. The proof given here gives a little more information (e.g., explicit recursion relations) than theirs. The proof of Hironaka and Sato removes the restriction on the anisotropic kernel mentioned above (if one strengthens the condition on n to n + 1 < k in the new cases) and also provides a version for levels that are not square free. The application of that version to the study of the space of Eisenstein series generated by the genus theta series in the case of arbitrary level will be the subject of future work.

Connection with Kudla's matching principle

In Section 4 we have seen that the Hecke operator T(p) can provide a connection between theta series for lattices in a positive definite quadratic space (V_1, q_1) and in a related indefinite quadratic space (V_2, q_2). Such a connection has recently been observed in a different setup by Kudla [11]. We sketch his approach briefly in order to study the relation to our construction; for details we refer to [11], Section 4.1. For j = 1, 2 we then have, for ϕ ∈ S((V_j(A))^n), the theta kernel θ(g, h_j; ϕ) (g ∈ Sp_n(A), h_j ∈ O_{(V_j, q_j)}(A)) and the theta integral I(g; ϕ), which (under our conditions) is absolutely convergent for j = 1, and for j = 2 if V_2 is anisotropic or m > n + 2. Let now L_j be a lattice on V_j and assume ϕ_j to be factored as ϕ_j = ⊗_v ϕ_{j,v} over all places v of Q, where ϕ_{j,p} = 1_{L_{j,p}} is the characteristic function of the lattice L_{j,p} in the Q_p-space V_{j,p} for all finite primes p. Then for ϕ_{1,∞}(x) = exp(−2π tr(q(x))) for x ∈ (V_1 ⊗ R)^n (the Gaussian vector), the integral I(g; ϕ_1) is the adelic function corresponding to the Siegel modular form in the usual way.
For the space V_2 we consider two different test functions at infinity. If we choose a fixed majorant ξ of q and define ϕ′_{2,∞,ξ} accordingly, the value of the theta kernel at h_2 = 1_{V_2} corresponds to the theta function

ϑ^{(n)}(L_2, ξ, Z) = Σ_{x ∈ L_2^n} exp(2πi tr(q(x)X)) exp(−2π tr(ξ(x)Y))

(with Z = X + iY ∈ H_n) considered by Siegel, and its integral over O_{(V_2,q)}(Q) \ O_{(V_2,q)}(A) corresponds to the integral of this theta function over the space of majorants ξ; this is a nonholomorphic modular form in the space of Eisenstein series by Siegel's theorem (or its extension, the Siegel-Weil theorem). To simplify the discussion, we restrict now (following [11]) to n = 1. We denote by χ the quadratic character of Q^×_A / Q^× defined via the Hilbert symbols ( , )_v at all places v. Then associated to ϕ_j there is a unique standard section Φ_j : G̃(A) × C → C with Φ_j(·, s) ∈ I(s, χ) (where I(s, χ) is the principal series representation of G̃(A) with parameter s and character χ) such that for s_0 = m/2 − 1 one has Φ_j(g, s_0) = (ω_j(g)ϕ_j)(0) =: λ_j(ϕ_j). Following [11], the test functions ϕ_1 and ϕ_2 are said to match if the associated standard sections Φ_1 and Φ_2 coincide; in that case I(g, ϕ_1) = I(g, ϕ_2). Although this identity is a trivial corollary of the Siegel-Weil theorem, the matching principle gives highly nontrivial arithmetical identities, since the integrals I(g, ϕ_1) and I(g, ϕ_2) carry completely different arithmetic information; in [11] the principle is exploited to give identities between degrees of certain special cycles on modular varieties and linear combinations of representation numbers of positive definite quadratic forms. Kudla gives in [11] explicit local matching functions at the infinite place and asserts the existence of local matching functions at the finite places for m > 4 and for m = 4 if χ_p = 1. We can now state the contribution of our computations from the previous sections to this matching principle:

Proposition 6.2. Let L, V, q be as in the previous sections, let n = 1, and let ϕ_1 = ⊗_v ϕ_{1,v} ∈ S(V(A)) be the test function for the positive definite lattice L as described above. Assume that L is of square free odd level N and that every p | N divides the discriminant of L to an odd power. Let χ be the (primitive) quadratic character mod N with ϑ(L, q) ∈ M_k(Γ_0(N), χ), and let p be a prime with χ(p) = −1. Let ϑ(gen(L))|T(p) = Σ_i c_i ϑ(gen(L_i)) be the explicit linear combination of theta series of all the positive definite genera of lattices of level N and discriminant in d·(Q^×)^2 given by the results of Section 5, and let ψ_i be the test function attached to the positive definite lattice L_i as above. Let (V_2, q_2) be the quadratic space Ṽ of signature (m − 2, 2) from Lemma 4.2 in Section 4, let L_2 = L̃ in the notation of Lemma 4.2, and let ϕ′_2 = ϕ′_{2,∞,ξ} ⊗ (⊗_{p<∞} ϕ_{2,p}) be the test function attached to L_2 as described above. Then the test functions ψ := Σ_i c_i ψ_i ∈ S(V_1(A)) and ϕ′_2 ∈ S(V_2(A)) match, and we have I(g, ψ) = I(g, ϕ′_2).

Proof. This is clear from the discussion above and Theorem 4.3.

Remark 6.3. As already stated in [11], the matching principle can easily be generalized to arbitrary Sp_n. In the range of our results in Sections 4 and 5, we then have examples for the matching principle for general n in the same way as described above.
2014-10-01T00:00:00.000Z
2005-08-30T00:00:00.000
{ "year": 2005, "sha1": "1135ac7ac6e87ea2c15e6d127d98141284255199", "oa_license": null, "oa_url": "http://arxiv.org/pdf/math/0508613", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "5ec041e53905a97b412bd5b9762e87d2a4d1980c", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
244602903
pes2o/s2orc
v3-fos-license
UNDERSTANDING HIGHER EDUCATION: ALTERNATIVE PERSPECTIVES

Boughey, C. & McKenna, S. 2021. Understanding higher education: Alternative perspectives. Cape Town: African Minds.

The authors of this book, Chrissie Boughey and Sioux McKenna, enjoy a hard-earned reputation for their contribution to higher education and for the lucidity with which they make that contribution. This book, however, is a frustrating read. Perhaps that is because their reputation raises expectations, as do the dust jacket reviews of the book. The book argues for respect of context, and thus I chose a partner for this review who is at a different point in their own academic career from mine, to test the value of the book from at least two contexts. My fellow reviewer has a perspective on higher education recently informed by his own postgraduate studies and his position as a new lecturer, while mine, it must be declared, is probably more accurately described as managerial. Our debate about the book was rich, and that is precisely where the book will contribute to understanding higher education in South Africa.

One expects to develop greater understanding of higher education (in South Africa at least) from reading the book. For someone not familiar with the extensive literature on what the problems are in higher education and where they came from, the book does not disappoint, as it deals with all the standard concerns from massification to managerialism. The book has reflexive value for postgraduate students who want a sense of the field, the lay of the land, and how to use theory in research, and it has the same value for academics in fields other than higher education who wish to enter the debate on where we are and where we need to go without drawing only on their untested assumptions. For these reasons alone it is an important contribution. What it does not necessarily achieve is furthering an understanding of why the problems, articulated so well for so long now in this book and in other spaces, are not being solved, and of what social forces might act if higher education cannot heal itself.

In Chapter 7 the book offers a short insight into why we have made so little progress. It posits that changes in the management of teaching and learning, and attempts to drive new behaviours that would foreground the contextual needs of students, did not alter the domain of culture in institutions and, we assume, the sector. Those placed in positions that were intended to make the changes we needed to create an inclusive system were drawn from the very personal and theoretical positions that precluded it. Perhaps the most powerful contribution of the book is in this last chapter, where the authors are explicit that a deeply respectful acknowledgement of the experience of students is needed so that, as they say, "the call for epistemological access must be held alongside the call of epistemological justice" (p. 139). This, however, is where the frustration lies, as little is offered in terms of how this is to be achieved, and there is only room for inference that what is in place is not working, as evidenced by graduation rate patterns. An understanding of higher education would offer solutions for consideration, or at least some calls to action. On the other hand, perhaps that is exactly the problem we face, and the frustration in reading it comes from a sense of powerlessness about access to any of the levers that would result in real change: at the level of each lecturer, each committee and each management, let alone the state.
Here the authors' use of the empirical, the actual and the real is useful to describe the resistance to change and the inertia. Higher education in South Africa is still bedevilled by too many of the problems that were baked into it by the Apartheid government. That makes it infinitely more complex and multi-layered than many other systems in the world. The book does call out these structural inequities but does not suggest ways of dismantling them other than reference to culture. It suggests, but does not say so explicitly, that attempts to date have failed. As a description of most of the themes of concern, the book does the expected work, but as a roadmap to understanding what needs to happen next, or how the parts of the system are holding each other up, back or down, the reader is left with too many questions for the book to claim to have resulted in understanding. The authors hint towards, but do not address, what they take the ultimate purpose of higher education to be, and as one reads the book one keeps wondering what they think a "better" higher education would be. It does a good job of looking at some of the issues, such as decontextualising students, and it speaks about identity and the disjuncture between education systems in our country. None of this is new, and while we have understood this to be true for so long now, we do not seem able to do much about it. The book ascribes some of this failure to the rise of managerialism and the disconnection of the professoriate from power in institutions, but does not account for the lack of agency of academics in this scenario. It touches on the inability of some academics, and even some disciplines and fields, to really honour and respond to the context of students, but it goes no further.

What does the book achieve? As a reminder of the core teaching and learning challenges in institutions it provides, as these authors always do, a clear and coherent synopsis of why we are not seeing sufficiently improved graduation rates and positive student outcomes as we widen access to achieve some form of justice. We accept that the book touches on what contributes to the challenges in teaching and learning (and thus student success) and that it attempts to place these in a context beyond each institution. Unfortunately, the book does not address what can and must change beyond its calls for institutional shifts. Its focus on teaching and learning is to be expected, given the authors, but it does feel like a missed opportunity, given the title of the book. There are references to the tension between private and public good, the private sector, and the social and economic pressures on students, but these are tangential to the overall contribution of the book (and, in the case of the private higher education sector, generalised, superficial and barely accurate). By such a light touch on these topics, the authors miss an opportunity to leverage ideas that exist for deepening understanding. For instance, if it is true that the private sector has grown to about 15% of student enrolments because of a pursuit of private over public good, is there an understanding of how successful this sector is in delivering on this expressed need? In positioning private good as "lesser", does the public higher education sector not continue to decontextualise students and their needs and aspirations? Why did managerialism emerge?
What is meant by this term that is used so loosely? Is there a possibility that it emerged as an attempt to protect the university as much as, or even more than, to control it? If that is even possible, how has it become so separated from the academy? These are questions that the book does not cover, and perhaps should not be expected to, but if the objective is to understand, then they cannot be glossed over and simplified the way they are.

Higher education is an interesting microcosm of the challenges we face as a country. The Apartheid regime deliberately manipulated the notion of higher education as an intellectual agent capable of driving social change and development, to try instead to create training spaces for reinforcing their lack of ethics and legitimacy. Restructuring of the landscape, as the authors point out, did not remove this inherently corrupt history, and it is thus not that surprising that the outputs continue to mirror too closely the history of each institution and perhaps even institutional type. Placed within a context of international social inequity, such as outlined in Chapter 3 of the book, the problems in SA higher education make sense, but are no less depressing because other systems in the world also do not graduate the rich and the not-rich at the same rate.

The book spends time considering how the international higher education system has been moulded by globalisation, the new economy, reduction in meaningful state support, new public management models and high-skill production, alongside the pressure for access. It sees all these as moving the university (or higher education) away from its public good mandate towards a focus on the private good. It nods in the direction of student agency in the pursuit of private benefit, but does so in a manner that could be read as slightly critical of the pursuit of economic success. While there is probably general agreement that higher education should do more than enable the achievement of private ambitions, it is also true that for most South Africans, achieving certification that enables employment is the primary reason for studying. Whether or not one considers that problematic does not make it less real, and thus it is disappointing that the book, which uses a social realism framework, does not explore the ways in which higher education enables that ambition to be achieved. It does not matter whether higher education sees that as a core purpose if it is what students, particularly in South Africa, are pursuing, and that must be understood as part of their context.

The "correct" purpose of higher education is not directly addressed by the authors, although there is frequent reference to what has changed and a suggestion that some of what has been lost or replaced is regrettable or damaging. If they are correct that higher education has been instrumentalised and commodified, that this has been at the expense of the "lost" elements, and that this in turn is something that should be reversed or reclaimed, then this is the reality from which we need to work to regain that which we take to be missing.
It would have assisted the readers had the authors spent more time explicating their understanding of what has been lost and needs restoring, and what the ultimate purpose of higher education in our world now is, and, once clear on this, how we can work with the social reality (which is aptly described by them) so that higher education can be understood and move toward its purpose.

That being said, there are many important contributions that one does not always find foregrounded in the literature. For instance, the reminder to academics and other higher education leaders alike of the need to recognise the contextual realities of students is important. The chapter that begins to touch on how academics are caught between their disciplines or fields (Chapter 6) and the imperative to be able to teach in a way that grants genuine epistemological access to that discipline or field serves its purpose of highlighting the complexity of being an academic in a modern university. Read together with the failure to achieve genuine change in the way that academics teach, and the way that decontextualisation can be addressed but is not, there is insufficient responsibility placed at the door of academics to change their practice. The critique of traditional development and training is well made, but no alternative is provided to address the disconnect between academics and students. There is some attention (again) to managerialism and to institutional decision-making being dislocated from the professoriate, and it can be inferred that this is seen as part of the problem. What is not addressed, though, is whether, under the pressure of numbers and research targets, academics generally can, or want to, be involved in the pragmatism of decision-making about institutional functioning. If one views the agency of academics, one must ask why there has not been an assertion of this agency with good effect in any institution in SA.

It is therefore a book that should be read and then spoken about. It is not a book that provides answers, perhaps because there simply are not any. Thus, if understanding is to be defined as knowing more about what questions to ask, perhaps the book succeeds; as learning, it unfortunately does not.
2022-07-02T08:35:59.291Z
2022-01-01T00:00:00.000
{ "year": 2022, "sha1": "3fd31d46626af7cc2c2b40c0351180171e50e396", "oa_license": "CCBY", "oa_url": "https://doi.org/10.47622/9781928502210", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "3fd31d46626af7cc2c2b40c0351180171e50e396", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [] }
257489606
pes2o/s2orc
v3-fos-license
Epigenetic modifications: Allusive clues of lncRNA functions in plants

Long non-coding RNAs (lncRNAs) have been verified as flexible and important factors in various biological processes of multicellular eukaryotes, including plants. The intricate crosstalk among multiple epigenetic modifications has been examined to some extent. However, only a small proportion of lncRNAs has been functionally well characterized. Moreover, the relationship between lncRNAs and other epigenetic modifications has not been systematically studied. In this mini-review, we briefly summarize the representative biological functions of lncRNAs in developmental programs and environmental responses in plants. In addition, we particularly discuss the intimate relationship between lncRNAs and other epigenetic modifications, and we outline the underlying avenues and challenges for future research on plant lncRNAs.

Introduction

To efficiently adapt to their habitat environment, plants have evolved intricate strategies to orchestrate temporal and spatial gene expression patterns in response to exogenous environmental signals and endogenous developmental cues. Among these precise and complex strategies, epigenetic regulation mechanisms, mainly including DNA methylation, histone modifications, histone variants, chromatin remodeling and non-coding RNAs [1], play an indispensable role. Owing to their vital function in addressing global challenges such as crop yield and food security, epigenetic studies in plants have become a research forefront and hotspot during the past decades [2,3].

Long non-coding RNAs (lncRNAs) are classified as a type of ncRNA that is longer than 200 nt and does not encode proteins, or has extremely low coding capacity [4]. At the early stage of their discovery, lncRNAs were considered "noise" of genome transcription, without discernible biological function. However, lncRNAs are abundant and widely distributed in eukaryotes, and they exhibit important roles in the biological processes of animals and plants. They are involved in X chromosome silencing, genomic imprinting, chromatin modification, transcriptional activation, transcriptional interference, and other regulatory processes [5][6][7][8]. During the past decade, an increasing number of studies demonstrated the vital roles of plant lncRNAs in multiple biological processes [9][10][11][12]. However, the internal relationships between lncRNAs and other epigenetic modifiers in plants remain elusive, including how lncRNA transcript levels are regulated by other epigenetic factors, and how lncRNAs cooperate with other epigenetic factors to function in gene transcriptional regulation. In this mini-review, we summarize the participation of lncRNAs in the orchestration of developmental programs and in environmental responses, and we mainly discuss the interactions between multiple epigenetic factors and lncRNAs in plants, aiming at obtaining clues with regard to lncRNA functions and regulatory mechanisms.

Landscapes of epigenomic data sources in plants

Epigenetics participates in nearly all developmental processes of plants, from seed germination to flowering, followed by pollination and seed ripening, and also responds to various environmental cues [10].
In Arabidopsis, epigenetic modification at the FLOWERING LOCUS C (FLC) locus through H3K27me3, H2Bub and lncRNAs plays a vital role in flowering time regulation [13][14][15][16], and key chromatin modifications, including DNA, RNA, and histone methylation or acetylation, are indispensable in the light signaling pathway [17]. In rice, studies of various genome-wide epigenetic signals identified epigenomic variations that are significantly associated with plant growth, fitness, yield and other important agronomic traits [18]. Until now, high-throughput sequencing technologies have depicted the multidimensional epigenome landscape in various plants, thus providing rich data sources for further epigenetic studies. As shown in Table 1, six databases have collected and organized more than ten thousand public datasets, including histone modifications (ChIP-seq), DNA methylation on 5-methylcytosine and N6-methyladenine (BS-seq, meDIP-seq, SMRT-seq), and chromatin states (ATAC-seq, DNase-seq, MNase-seq and FAIRE-seq) [19][20][21][22][23][24]. In these databases, model plants such as Arabidopsis thaliana (A. thaliana), Oryza sativa (O. sativa) and Zea mays (Z. mays) account for the highest proportion of datasets; the data source for O. sativa is especially prevalent. Access to such data is vital for biological researchers to gather detailed information on specific target genes. For example, the peak enrichment distribution of histone modifications can be easily searched through a web-based genome browser tool. In epigenetic databases, gene annotation information with regard to epigenetic modifications in plants is nearly comprehensive for protein-coding genes. Although epigenetic datasets can be obtained from PlantDB V2.0, which is a plant lncRNA database [25], the number of datasets pertaining to epigenetics in plant lncRNA databases (454 datasets across seven species) is much smaller than the numbers summarized in the databases of Table 1. Therefore, with the support of public data sources, especially for epigenetic modification analysis, further exploring the biological roles of lncRNAs related to epigenetic modifications seems both practical and meaningful.

Among those complex biological processes, a small proportion of lncRNAs have been well characterized; these flexibly participate in regulating target genes as cis- or trans-acting elements by interacting with DNA, RNA, or proteins [10,45]. Commonly, lncRNAs participate in regulatory mechanisms involving neighboring and distant gene transcription and RNA splicing and stability, and they act as miRNA sponges [45]. Apart from these regulatory mechanisms, their intricate and precise cooperation with epigenetic modifiers to orchestrate gene transcription or chromatin structure is worth future research in plants.

Survey of functional lncRNAs associated with epigenetic modifications

In the past ten years, the functions of several lncRNAs in plants have been well characterized. Here, we review, in particular, the lncRNAs associated with epigenetic marks that alter target gene transcription (Table 2). According to their regulatory relationship with other epigenetic modifiers, lncRNAs are classified into two categories (Table 2), i.e., lncRNAs that actively cooperate with epigenetic modifiers to trigger downstream targets (Fig. 1A), and lncRNAs whose transcription is controlled by epigenetic modifiers (Fig. 1B). During vernalization in A.
thaliana, activation of three lncRNAs (COOLAIR, COLDAIR, and COLDWRAP) is conducive to the suppression of FLC through enrichment of H3K27me3 at the FLC locus (Fig. 1A, lower part) [14,16,32,33]. COLDAIR (transcribed from the first intron of FLC) and COLDWRAP (transcribed from the promoter region of FLC) cooperate to form a chromatin loop by directly binding to the PRC2 complex, establishing a repressive state at the FLC locus [32,33]. Another flowering-related lncRNA in A. thaliana, termed MAS, which is produced from the antisense strand of the MADS AFFECTING FLOWERING4 (MAF4) locus, activates MAF4 transcription by recruiting WDR5a to enhance the H3K4me3 level at the MAF4 locus [46]. Meanwhile, LAIR, also a NAT-lncRNA, originating from LRK (encoding a leucine-rich repeat receptor kinase), positively regulates grain yield in rice by recruiting OsMOF and OsWDR5 to the LRK1 gene region, up-regulating its expression through enrichment of H3K4me3 and H4K16ac [47]. Although MAS and LAIR (Fig. 1A, upper part) are involved in different biological processes, both are transcribed from the antisense strand of their target gene loci and recruit homologs of histone-modifying enzymes to regulate target genes. In leaf development, TWISTED LEAF (TL) constrains the expression of its sense gene by mediating chromatin modifications [48]; however, the cooperators facilitating the diverse changes in histone modification levels remain to be identified. Besides, with regard to hormone responses, the functions of two intergenic members, MARneral Silencing (MARS) and APOLO (Fig. 1A, upper part), are worth noting. By acting as a decoy for the LIKE HETEROCHROMATIN PROTEIN 1 (LHP1) protein, MARS orchestrates the H3K27me3 distribution and promotes chromatin loop formation [49]. Coincidentally, APOLO also shows a close relationship with LHP1: APOLO interacts with LHP1 to promote formation of an APOLO-LHP1 chromatin loop in the auxin response [28]. The formation of DNA-RNA duplexes (termed R-loops) is modulated by APOLO to target associated or distant loci [50]. Beyond this, APOLO also coordinates with VARIANT IN METHYLATION 1 (VIM1) to form an APOLO-LHP1-VIM1 complex, directly regulating the transcription of the auxin biosynthesis gene YUCCA2. Interestingly, the human lncRNA UHRF1 shows poor sequence similarity to APOLO but displays a similar performance in the transcriptional regulation of YUCCA2 [51], suggesting that analysis of conserved regulatory mechanisms may be a feasible approach to study the conservation of lncRNAs. The coordinated regulation of target loci by lncRNAs and histone modifiers reflects the ability of lncRNAs to affect gene regulation through recruitment, decoy mechanisms, and interaction with histone modifiers.

Reduced DNA methylation levels at the APOLO locus are also conducive to auxin-induced APOLO expression [50]. It is unclear how DNA methylation is removed during APOLO activation; nevertheless, one lncRNA has been verified to act downstream of epigenetic modifications (Table 2). In photoperiod-sensitive male sterile rice (Nongken 58S), Psi-LDMAR (a siRNA) mediates methylation of the promoter region of the lncRNA LDMAR through the RdDM pathway, thereby inhibiting the transcription of LDMAR; reduced LDMAR transcription leads to male sterility under long-day conditions [34,52]. DNA methylation markedly affects lncRNA transcription; however, changes in histone methylation and acetylation levels also affect lncRNA activity (Fig. 1B, upper part).
In phosphate starvation responses, the lncRNA At4 is directly targeted by histone acetyltransferase GCN5-mediated H3K9/14 acetylation [53]. The expression of a specific fraction of lincRNAs is possibly negatively regulated by HISTONE DEACETYLASE 6 (HDA6) and LSD1-LIKE 1/2 (LDL1/2): the enhanced levels of H3Ac and H3K4me2 at lncRNA sites with increased expression in hda6 or hda6/ldl1/2 mutants indicated that HDA6, LDL1, and LDL2 are potential regulators [54], suggesting that different histone modifications may exhibit crosstalk and jointly target the same lncRNAs (Fig. 1B, lower part). In rice, numerous lncRNAs are more likely to be targets of repression by PRC2 rather than participants in PRC2-mediated regulation, as they display high expression levels in PRC2 mutants [55]. Later, the lncRNA MISSEN was cloned because of a low-fertility phenotype after T-DNA insertion in rice, and a further study showed that its transcription is inhibited by H3K27me3 modification after pollination. MISSEN was shown to be up-regulated in the emf2a mutant [56], implying that EMF2a is an upstream repressor in endosperm development. MISSEN is a good example showing that it is practical to investigate lncRNA functions by using transcriptome analysis of epigenetic modifier mutants to predict possibly regulated lncRNAs, and by using phenotypes induced by lncRNA mutation to infer potential upstream regulators.

Although lncRNAs function in various biological pathways, their low expression, poor sequence conservation and flexible roles make their functional characterization elusive. Thus, bioinformatic analyses of publicly available data are vital to speculate on and examine how lncRNAs may work. Sustained attention to such big-data analyses across various plant species has revealed that the hallmarks of histone modifications or DNA methylation mainly reflect effects at protein-coding gene sites, while the considerable proportion located in non-coding regions cannot be ignored. In A. thaliana, analysis of large-scale ChIP-seq datasets produced typical enrichment profiles of various histone marks at lncRNA loci (excluding some short regions near transcription start and termination sites), distributed similarly to those at protein-coding gene regions [57,58]. Among those histone marks, the expression of lncRNAs was preferentially correlated with H3K4me3, H3Ac, H3K4me2/3 and H3K36me3 rather than with H3K9me2 and H3K27me3 [54]. Moreover, the expression level of a group of lincRNAs (such as Lnc2-1, Lnc3-3, Lnc4-1, Lnc6-1, Lnc8-1, and Lnc12-1) is sharply increased in the ddm1a/1b mutant [59]. These lncRNAs are negatively regulated by the chromatin-remodeling factor DECREASE IN DNA METHYLATION1 (DDM1) (Fig. 1B, lower part). In addition, the absence of mCG in DNA methylation mutants has a greater impact on lincRNA transcription than the absence of non-CG methylation in A. thaliana, O. sativa, and S. lycopersicum [60]. Hence, while epigenetic modifiers universally determine the transcription of protein-coding genes, their potential to affect transcription of lncRNA regions also deserves attention.

Table 2. Regulation of lncRNAs associated with epigenetic modifiers in plants. * NAT-lncRNA: natural antisense transcript; incRNA: intronic RNA; lincRNA: long intergenic noncoding RNA.

In summary, the strong relationships between lncRNA transcription and epigenetic changes, whether through coordinated or passive regulation, mainly via writers or erasers of DNA methylation and histone modification (Fig.
1), imply that lncRNAs may act in epigenetic modifier-mediated biological pathways.

Challenges in mining lncRNA functions from an epigenetic perspective

LncRNAs are widely involved in developmental processes and in hormone and stress responses in plants via co-transcriptional processes with multiple epigenetic factors; in turn, these epigenetic modifiers can also control the activity of lncRNAs (Fig. 1, Table 2). LncRNAs are closely associated with epigenetic modifier-mediated biological processes, and they can also be involved in small RNA (sRNA) regulation. For example, the activity of LDMAR depends on siRNA-mediated DNA methylation in rice [34,52], suggesting that lncRNAs can also be targets of sRNAs. In addition, lncRNAs can be precursors of sRNAs (siRNAs and miRNAs) and modulate the transcription of downstream genes by controlling the production of sRNAs [61,62]. Therefore, lncRNAs, sRNAs and epigenetic modifiers probably modulate common target loci in highly organized and precise manners, which remain to be investigated. These speculations highlight the complexity, diversity, and challenges of lncRNA-mediated regulatory mechanisms. Meanwhile, similar to protein-coding transcripts, lncRNAs can be modified with N6-methyladenosine and thereby exhibit close crosstalk with different epigenetic modification processes, including the writing or erasing of DNA methylation and histone modifications; this has been well documented in animals and humans but remains to be explored in plants [63,64].

Considering the lack of lncRNA sequence similarity, the identification of homologous lncRNAs across species, as is done for protein-coding genes, is currently unfeasible. Plant biologists therefore need to keep in mind the hidden characteristics of lncRNAs and the biases inherited from lncRNA research in humans and animals. Additionally, many integrated databases containing lncRNA annotation and function prediction data have been well developed for animals and humans [65,66]. For example, the database Lnc2Meth specializes in providing services on regulatory relationships between lncRNAs and DNA methylation in various human diseases [67]. However, such comprehensive and detailed databases for plant lncRNA research are rather lacking. Considerable detailed work on data integration needs to be done for lncRNA annotation, especially concerning lncRNA-related agronomic traits, as has been comprehensively conducted in rice [68]. Compared to animals and humans, some new high-throughput sequencing technologies for lncRNA annotation and function prediction have not yet been widely applied to plants. For instance, capture long-read sequencing, CAGE-seq, long-read RNA-seq, and RACE-seq are useful for full-length lncRNA annotation [69][70][71], and RIP-seq and ChIRP-seq are helpful for predicting lncRNA-protein or lncRNA-chromatin interactions [72,73]. Nevertheless, some of these technologies have only begun to be used in research on lncRNAs of cotton, rice and Arabidopsis [46,58,74]. Notably, single-molecule RNA structure sequencing was designed to capture the structure of single RNA molecules in vivo [75]. Using this approach, the hyper-variable region of COOLAIR was shown to have a stronger ability to interact with chromatin, mediating silencing of FLC in response to cold and warm conditions in A. thaliana [75], confirming that the various structures of lncRNA isoforms can precisely modulate gene transcription.
Moreover, algorithms and tools for studying lncRNA characteristics, such as lncRNA coding potential and structure prediction, have generally been developed based on animal or human models [76]; whether they perform well in plants remains to be confirmed. Consequently, deeper regulatory networks linking lncRNAs and epigenetic modifications will be uncovered in plants as the corresponding methods and technologies emerge.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Femtosecond laser micromachining of diamond: current research status, applications and challenges

Ultra-fast femtosecond (fs) lasers provide a unique technological opportunity to precisely and efficiently micromachine materials with minimal thermal damage, owing to the reduced heat transfer into the bulk of the work material afforded by short pulse duration, high laser intensity and focused optical energy delivered on a timescale shorter than that of thermal diffusion into the area surrounding the beam focus. There is an increasing demand to further develop fs machining technology to improve machining quality, minimize total machining time and increase the flexibility of machining complex patterns on diamond. This article offers an overview of recent research findings on the application of fs laser technology to micromachining diamond. The technology for precisely micromachining diamond is discussed in detail, with a focus on fs laser irradiation systems and their characteristics, laser interaction with various types of diamond, processing and the subsequent post-processing of irradiated samples, and appropriate sample characterisation methods. Finally, the current and emerging application areas are discussed, and the challenges and future research prospects in the fs laser micromachining field are identified.

Photoinduced changes in diamond properties

In diamond, carbon (C) atoms form sp3-hybridized directional covalent bonds with the neighbouring atoms in a tetrahedral orientation, with an s orbital overlapping three p orbitals, providing strength to the σ bonds at an enclosed C-C-C angle of 109.47°, a C-C bond length of 1.54 Å and a bandgap of 5.5 eV [39]. Laser irradiation cleaves one (or, at higher fluences, a few) of the covalent C-C σ bonds, allowing three covalently bonded electrons to rearrange themselves into a planar orientation by slightly opening the enclosed C-C-C bond angle to 120° and reducing the C-C bond length to 1.42 Å. The fourth electron becomes delocalised over the entire π sheet, overall forming an sp2-hybridized bonding arrangement as in graphite. This sp3-to-sp2 phase transition drastically reduces the bandgap to −0.04 eV. Notably, the sp3-to-sp2 phase transition, when performed in a controlled manner, opens a unique opportunity for the development of 'all-carbon' electronic, optical and even quantum devices [40,41], owing to the extreme differences in atomic structure, electron arrangement, chemical configuration and interaction mechanism between the two most common carbon allotropes.

In the fs regime, when diamond is irradiated with an applied fluence above its ablation threshold of approximately 3 J/cm², ablation with minimal graphitization is observed (at depths below 50 nm) [36], whereas at fluences below the ablation threshold (i.e., < 3 J/cm²) no ablation occurs and mostly graphitization is observed, marked by the formation of an sp2 phase. This phenomenon is graphically shown in Fig. 3a, where the optical transmission of chemical vapor deposited (CVD) diamond rapidly falls at 0.3 J/cm² (well below diamond's ablation threshold), indicating the start of the graphitization process [12]. It has been suggested that the bulk transformation from sp3 to sp2 bonding proceeds through direct photodamage of the diamond's sp3 lattice creating sp2 nuclei, followed by heating of the newly formed sp2 graphitic centres embedded in the irradiated sp3 volume [42].
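A minimal sketch of the fluence-regime logic just described, assuming the approximate figures quoted above (a ~3 J/cm² fs ablation threshold and a graphitization onset near 0.3 J/cm²); real thresholds vary with pulse duration, wavelength and sample quality.

```python
# Sketch of the fs-irradiation regimes for diamond described above.
# Threshold values are the approximate figures quoted in the text and
# are assumptions for illustration only.

ABLATION_THRESHOLD = 3.0     # J/cm^2, approximate fs ablation threshold
GRAPHITIZATION_ONSET = 0.3   # J/cm^2, where transmission starts to fall

def fs_response(fluence):
    """Classify the expected response of diamond at a given fluence."""
    if fluence >= ABLATION_THRESHOLD:
        return "ablation with minimal (<50 nm) graphitization"
    if fluence >= GRAPHITIZATION_ONSET:
        return "no ablation; progressive sp3 -> sp2 graphitization"
    return "little or no modification"

for f in (0.1, 0.5, 1.9, 3.5):   # J/cm^2
    print(f"{f:4.1f} J/cm^2: {fs_response(f)}")
```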
Fig. 3b shows the reduction in total optical transmittance, attributed to increasing local graphitization, relative to the projected optical power (the number of laser shots at different incident fluences) for ps-laser light [43]. Notably, there is an initial photonic accumulation period, during which the energy density remains low for relatively low laser fluences.

Fig. 3. a) Optical transmission of CVD diamond specimens vs. incident laser fluence (120 fs, 800 nm) [12]; b) changes in the optical transmittance of CVD diamond vs. number of laser shots at different fluences (220 ps, 539 nm) [43]; c) Raman spectra of original CVD diamond (bottom) and laser-modified region (top) [12].

The structure of the downconverted sp2 phase formed following laser irradiation of diamond samples in air is never purely crystalline but is highly amorphous. For example, a 532 nm Raman spectrum of original nitrogen (N)-doped CVD diamond commonly displays the characteristic sp3 vibrational mode at ~1331 cm⁻¹ and a low-intensity broad mode at ~1420-1425 cm⁻¹, commonly attributed to N-doped CVD diamond, as shown in Fig. 3c [12]. Laser irradiation gives rise to additional vibrational modes attributed to microcrystalline graphite at ~1417 cm⁻¹ (peak 1, Fig. 3c), amorphous sp2 carbon at ~1450 cm⁻¹ (peak 2, Fig. 3c) and distorted microcrystalline graphite at ~1560 cm⁻¹ (peak 3, Fig. 3c) and 1580 cm⁻¹ (peak 4, Fig. 3c) [44,45]. The sp2-rich phase formed in diamond following laser irradiation is mostly composed of a mixture of sp2 aromatic clusters and sp2 olefinic chains [46,47] and often contains poly(p-phenylene vinylene) and poly(p-phenylene) sp2 fractions, analogous to those found in amorphous carbon materials synthesised in hydrocarbon plasmas [48][49][50].

Following the preliminary sp3-to-sp2 phase conversion and the formation of graphitised nuclei, the photoablation of diamond proceeds as a combined process of vaporization and oxidation. Vaporization occurs both in air and in vacuum, whereas oxidation occurs only in the presence of oxygen. Komlenok et al. [51] reported that at high laser fluences of 1.2-3.8 J/cm², spanning from somewhat below to slightly above its threshold fluence (TF), diamond can be sublimed at a rate of 30-200 nm per pulse in both air and vacuum, owing to vaporization-driven photoablation. At low fluences of 0.34-0.55 J/cm², an order of magnitude below the diamond's TF, sublimation is minimal at 0.01 nm per pulse, as the process is mainly driven by oxidation [51], and no sublimation occurred at low fluences in the absence of oxygen (in vacuum).

The photoelectric properties of diamond, including optical absorption and photon-to-electron quantum efficiency, depend strongly on its intrinsic crystalline structure and the presence of impurities. Owing to its high refractive index, n, of 2.4 at 590 nm [52], photocurrent in a pure diamond crystal can be induced by photon absorption in the UV range (below 225 nm) [53]; however, when impurities and defects are present, it can also be induced in the visible range (415-478 nm) [54]. This is an important experimental observation, indicating that the presence of defects in diamond increases the photoconductivity by enabling light of lower energy to be absorbed in the material. Brecher et al. [15] recently reported that UV-induced photoconductivity over a prolonged period reduced the sensitivity of diamond-based UV detectors.
The sensitivity could be fully restored by thermal annealing of the diamond crystals in white light [15].

Photoinduced changes in diamond surface topography

Descriptions of surface topography as applied to laser-processed materials normally refer to the shape of the surface profile (i.e., waviness, type of surface finish) and the average arithmetic surface roughness, Ra, both of which significantly impact the material's bulk properties and its suitability for practical applications. Irradiation of a material's surface changes its topography depending on the shape, intensity and polarization of the incident laser beam. At lower fluence, laser irradiation results in the formation and growth of an sp2 graphitized region immediately within the HAZ in diamond, which can be rapidly and easily sublimated at higher fluence [55], depending on the pulse width of the incident laser pulse. The use of fs-pulsed light is associated with reduced heat transfer into the bulk of the work material [56], sharp thermal profiles and high-precision machining [33,55] compared to conventional ns- or ps-laser processing methods.

Using an fs beam with circular polarization enables effective micromachining of various spherical, rounded, globular and conical features on the surface of the ablated solid while preserving the sp3 phase composition, including curved profiles when accelerating beams are employed [57]. The use of a laser beam with linear polarization generates graphitic periodic surface structures over the irradiated surface [58,59], including ripple-like features with a characteristic periodicity close to the wavelength of the incident light. The latter phenomenon arises from the interference of the incident laser beam with waves scattered from the surface of the ablated solid. The interference between the incident light wave and the surface plasmon excitation is responsible for generating surface structures with periodicity patterns close to or, sometimes, less than the incident laser wavelength, depending on the intrinsic Ra values and surface features of the irradiated workpiece [59]. It is important to point out that the periodic ripples on the surface of an ablated solid display features much smaller than the incident laser wavelength, and these features are smaller for fs-laser-irradiated solids than for ps- or ns-irradiated materials [44,45,60,61].

Fig. 4 shows the surface morphology of CVD diamond samples irradiated with a linearly polarized fs-laser beam at fluences near the ablation threshold (i.e., 1.9 J/cm²) [62]. The scanning electron microscopy (SEM) images show nano-sized gratings with regular ~170 nm periodicities, displaying abrupt verges between the grooves and ridges [62]. However, the use of a slightly higher fluence (2.8 J/cm²) results in an increased periodicity of ~190 nm and a smoother grating profile due to a stronger thermal effect [62].

Fig. 4. SEM images of periodic surface gratings on CVD diamond irradiated with a linearly polarized fs-laser beam [62].

Various features can also be laser written (i.e., scribed) on diamond with a suitable periodicity and spacing for applied waveguide applications. Recently, Bharadwaj et al. [63] reported the successful fabrication of mid-IR waveguides by scribing two 40 µm waveguide lines on single-crystal diamond (SCD), designed to operate at 2.4 µm and 8.6 µm wavelengths; the waveguiding was possible owing to an increased polarizability and a greatly reduced refractive index in the waveguide's sp2-rich cladding sheath [64].
Laser irradiation can be tuned to reduce the Ra value of the machined workpiece, which is generally considered an indicator of improved machining surface quality. Moreover, for practical applications it is important to maximise MRRs in laser ablation processes as a measure of machining efficiency. Yao et al. [65] have shown that when nano-crystalline CVD diamond is subjected to fs laser ablation, the Ra and MRR are almost linearly co-dependent (Fig. 5; notably, the Ra of the as-grown CVD diamond was 0.342 μm).

Fig. 5. Relationship between Ra and MRR for fs-laser-ablated nano-crystalline CVD diamond [65].

Fs laser irradiation enables precise modification of the diamond surface profile depending on the laser beam characteristics. However, increasing the MRR in laser machining adversely affects the surface roughness of processed samples; this effect can be minimized through careful selection of process parameters and the use of parameters appropriate to the sample's ablation threshold.

LASER MICROMACHINING OF DIAMOND

3.1 Conventional (nano-, pico-second) laser micromachining of diamond

Laser irradiation, a versatile tool for material processing, enables precise control over the amount of energy focused at a specific location on the sample by selecting process parameters appropriate to the sample [66]. Laser ablation processes normally include laser grooving, scribing, drilling and marking [10,67], all of which are used to remove the bulk of the material. The ns-laser ablation process is governed by electrons in the ablated medium absorbing laser photon energy and reaching thermal equilibrium with the lattice during the laser pulse duration [68]. Owing to the relatively short electron-lattice relaxation time of 10⁻¹⁰-10⁻¹² s, up to three orders of magnitude shorter than the ns-laser pulse duration [38], ns-laser ablation processes are thermally destructive to the sample medium, as extended HAZs are generated in the process. Takayama et al. [69] classified four different forms of damage, and their causes, occurring during ns-laser processing of diamond (employing a 15.6 ns, 532 nm, Pavg > 1 W system as a study model), namely: cracking, groove shape deformation, ripple formation, and debris deposition. Cracking resulted from rapid temperature change, groove shape deformation resulted from enhanced absorption by the plasma generated through ns-laser irradiation, surface ripples were formed by the ns-laser beam interfering with the groove walls, and laser-induced debris deposition was attributed to the use of sample-specific ablation regimes [69].

Cadot et al. [10] (see Fig. 6) have shown that graphitisation and sp2 amorphization of diamond occur at relatively low fluences for ns-pulsed lasers, as evidenced by the appearance of the characteristic disordered-graphite D and G Raman modes at ~1350 cm⁻¹ and 1580 cm⁻¹, respectively [48,49,70], which become broader and more intense at higher fluences. Kononenko et al. [71] reported that reducing the ns-pulse width resulted in less graphitization, and the disordered graphitized layer thickness was found to be independent of the laser energy for fluences above the ablation threshold, namely at or above 0.2 J/cm². Ohfuji et al. [21] found that the ambient conditions during ns-pulsed laser machining of PCD samples had a negligible effect on both the thickness of the final graphitized layer and the recorded MRR values.
These works additionally confirm that the predominant mechanism of material removal in diamond is graphitization followed by sublimation of the graphitized layer from the ablated volume/region, as noted earlier. It has been reported that excimer lasers, commonly operating at 5-20 ns pulse lengths, are capable of producing deep holes in diamond during laser drilling operations [72]; the achievable hole depth is, however, limited by the generated hydrocarbon plasma plume, which obstructs the laser energy from reaching the bottom of the hole and thus reduces the pressure available to expel the melted and graphitised material. Typical ablation plasma attributes include ion density, ion flow velocity, free electron velocity and electron temperature [73]. Naturally, longer pulses generate plasma plumes of higher ion density and electron temperature; therefore, short pulse widths have been recommended for laser drilling processes where an increase in hole depth is required. Earlier work by Kononenko et al. [74] showed that ps-pulsed laser drilling of CVD diamond generated less plasma plume while creating a deeper hole (i.e., 500 μm) compared to an ns-pulsed laser used under similar conditions, which produced a much shallower hole (i.e., 50 μm). Additionally, a significantly higher threshold fluence (~17 J/cm²) was required for the through-hole laser drilling process compared to the blind-hole shallow drilling process (2 J/cm²) for the same workpiece material; this was attributed to increased photon absorption by the dense plasma confined by the walls of a deep hole. To curtail the photon-scavenging effects of the plasma plume during laser drilling of CVD diamond, Migulin et al. [75] employed the additional oxidation potential of an O2 jet to facilitate the removal of sublimed graphitised fractions from the ablated volume through CO and CO2 formation, and found the solution reasonably effective.

It is often noted that the thickness of the graphitized layer formed on a diamond sample is inversely proportional to the projected ns-laser fluence, since complete down-conversion from sp3 to an aromatic sp2 fraction is only possible when diamond sublimes above its ablation threshold. Mouhamadali et al. [76] showed that CVD diamond transformed into a 'thicker' amorphous graphite-like sp2 carbon (a-C) layer when irradiated with green (532 nm) 40 ns light at a relatively low fluence of 4.9 J/cm², whereas only a 'thin' sp2 aromatic phase was observed at the much higher fluence of 15 J/cm². Additionally, multiple micro-cracks were observed on the surface of aromatic sp2 sites ablated at 15 J/cm², while only minor cracks were detected on a-C sites ablated at 4.9 J/cm², and the a-C layer was found to be ~1.5× thicker than the sp2 aromatic phase.

Ps-laser machining was found to be more effective than ns-pulsed laser processes, given the significantly shorter pulse durations and, consequently, the reduced likelihood of HAZ formation in the ablated volume. For example, Takayama et al. [77] performed ps-pulsed laser irradiation (1030 nm, 800 ps) at variable repetition rates on an SCD sample to make a tool with a micro-grooved edge, and found that a fluence of 15.3 J/cm² at 100 kHz afforded rapid machining with limited graphite deposition and no edge cracking. Guo et al.
[78] investigated the effect of ps-laser-induced micro-grooves on the grinding performance of coarse-grained diamond wheels for optical glass surface grinding, and reported that laser micro-structuring effectively reduces the subsurface damage depth, d', from 5 to 1.5 μm. Both the d' and Ra values were found to decrease with reduced micro-groove spacing. Zhang et al. [79] produced macro-patterns on the diamond grinding wheel surface to improve the grinding performance, and reported that the grinding temperature and sub-surface damage decreased significantly for macro-structured grinding conditions. The normal and tangential grinding forces were reduced by approximately 15% for ps-laser macro-structured grinding wheels compared to conventional grinding wheels.

Various methods have been proposed to improve laser machining performance. For instance, Park et al. [80] performed micromachining of diamond films with an ns-pulsed excimer laser assisted by compressed auxiliary gas, and observed that the laser beam breaks the atomic bonds in the work material while the resultant plasma plume is expelled by the assisting gas jet, improving the machining quality of the diamond surface. Additionally, both ns- and ps-laser processing have been successfully combined with precision grinding to improve machining performance: for example, Brecher et al. [15,81] employed an ns-laser combined with grinding for the fabrication of PCD turning tools. In this process, the surface was micromachined using ns-laser irradiation to remove the bulk of the material, assisted by a conventional grinding process to remove the residue. The cutting forces, the magnitude of grinding wheel vibration and the overall processing time were reduced by approximately 50% compared to traditional grinding. Likewise, Yang et al. [82] performed laser-induced graphitization of CVD diamond into predominantly aromatic graphite, which was subsequently removed via a precision grinding process.

A fundamental problem with ns-laser ablation lies in its inability to generate suitably high ablation rates and acceptable machined surface quality; the problem also extends to ps-lasers, albeit to a lesser extent. The cracking tendency, the generation of pervasive HAZs and the poorly controlled graphitization of machined diamond surfaces are mostly attributed to the relatively long (i.e., ns and ps) pulse durations compared to electron-lattice relaxation times of 10⁻¹⁰-10⁻¹² s [83]. In particular, the ns process generates more recast layers, debris and HAZ than the ps-laser process, owing to the extended photon-lattice interaction time [84], and is associated with significant and stable plasma formation at the surface of the ablated medium [85]. It is hardly the materials-processing method of choice where the integrity, composition and properties of the tetrahedral sp3 organisation in diamond are to be preserved after laser machining.

3.2 Fs-laser micromachining of diamond

The first reported attempts to develop an ultrashort-pulsed laser with a pulse duration of less than 0.1 ps were made in the early 1980s by Fork et al. [86], and it took over two decades to recognise the opportunity offered by ultrashort laser technology for precise photoablation of diamond.
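The pulse-duration argument above can be made semi-quantitative via the thermal diffusion length l ≈ √(Dτ). The sketch below assumes a round-number room-temperature thermal diffusivity for diamond of ~10 cm²/s (an assumption for illustration, not a value from the reviewed works) and compares representative ns, ps and fs pulse widths.

```python
# Rough heat-affected-zone scale via the thermal diffusion length
# l = sqrt(D * tau). D ~ 10 cm^2/s is an assumed round-number value
# for diamond at room temperature.
from math import sqrt

D = 10e-4  # m^2/s (= 10 cm^2/s, assumed)

for label, tau in (("ns pulse (15.6 ns)", 15.6e-9),
                   ("ps pulse (800 ps)", 800e-12),
                   ("fs pulse (120 fs)", 120e-15)):
    l_nm = sqrt(D * tau) * 1e9
    print(f"{label:19s} -> thermal diffusion length ~ {l_nm:7.1f} nm")
```

Under this assumption, heat spreads over microns during an ns pulse but only ~10 nm during an fs pulse, consistent with the micron-scale versus sub-50 nm graphitized layers discussed in this review.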
In fs-laser machining, optical energy is transferred to the target medium through photoinduced optical breakdown, in which a majority of electrons are ionized, resulting in a structural and, in the instance of diamond ablation, a phase (i.e., sp3 to sp2) transformation and subsequent ablation of the irradiated volume. In a highly transparent material, such as dopant-free pure diamond, the energy absorption must be non-linear, due to the insufficient incident photon energy available for ionisation [87,88]; for non-linear absorption to occur, the strength of the electric field in the laser pulse needs to approach the electric-field strength binding the valence electrons, on the order of 10⁹ V m⁻¹ (approximately corresponding to a laser intensity of 5 × 10²⁰ W m⁻²) [89]. High intensity and tight focusing are required to achieve electric-field strengths of such magnitude. The non-linear absorption and tight focusing confine the absorption inside the bulk of the material to the focal volume, preventing absorption at the surface and thus yielding micromachined volumes as small as 0.008 μm³ [90]. Ultrashort laser pulses deposit the laser energy in thin layers with a thickness of 1/α, where α is the optical absorption coefficient, and sublime the material through direct vaporisation from the surface [91], which for diamond occurs through conversion of sp3 into a full or partial sp2 phase configuration and subsequent sublimation of the sp2 phase from the irradiated volume. Sublimation occurs only at or above the sample's ablation threshold; it is this critical threshold fluence that causes material removal within the irradiated spot area and the focal volume of the sample. The fluence, however, should not exceed a certain value, specific to a given diamond sample, to avoid thermal damage and an uncontrolled increase of the HAZ in the focal volume.

Fs-pulsed lasers are able to generate extremely high peak powers (up to a few GW) at relatively low average laser power (as low as 100 mW) [92], high enough to dissociate C-C covalent bonds in the diamond lattice. Graphitization occurs through electrons absorbing energy via an inverse bremsstrahlung process, thus facilitating a transition from an sp3 tetrahedral to an sp2 aromatic and/or sp2 olefinic bonding state [93]. The sp3-to-sp2 transition increases C-C interatomic distances, lowers the density of states (DOS) and changes the physico-chemical properties of the irradiated solid [47][48][49][50]. Notably, fs-lasers operating at short pulse durations (e.g., in the 10⁻¹⁵ s range) enable precise manipulation of the DOS, owing to the relatively long electron-lattice relaxation times of 10⁻¹⁰-10⁻¹² s, reduce the probability of HAZ formation, sublime diamond with minimal thermal damage, and enable the manufacture of surface structures with clear, well-defined edges, as shown in Fig. 7(a) and 7(b), displaying only nanoscale surface ripples (Fig. 7(c)).

Fig. 7. (a), (b) Surface structures machined on diamond with fs pulses; (c) magnified image of (b) (adapted from [94]).

Fs-laser micromachining involves several physical processes with fairly well-defined timescales, as shown in Fig. 8(a) [95] and 8(b) [89].

Fig. 8. a) Ablation mechanism of a short-pulse (i.e., ns) laser, adapted from [95]; b) timescale of physical phenomena associated with ultra-short fs-pulse laser ablation, adapted from [89].
Namely, it takes a few ps for electrons to transfer the absorbed optical energy to the lattice, a few ns for a dynamic shock/pressure wave to carry energy away from the hot and dense focal volume [96,97], and an additional few µs for the thermal energy to diffuse out of the focal volume. These processes result in non-thermal ionic motion at sufficiently high energy to leave behind permanent, lasting structural changes [98]. There is a fundamental difference between fs-photon-induced damage and that induced by ps- or ns-pulses [89]. For sub-ps pulsed lasers, the timescale over which an fs-pulse excites an electron is much smaller than the electron-phonon scattering time (about 1 ps); as a result, the fs pulse ends before the electrons transfer their thermal energy to the ions. Heat diffusion is confined to the focal area and, as a result, the precision of the method is increased [84]. In addition, fs-laser processing does not require defect electrons to seed the absorption process; the fs-pulse generates enough seed electrons through non-linear ionization [99], making the fs-pulsed laser suitable for precision micromachining applications. Additionally, Zalloum et al. [14] showed that high-quality micromachining with precise geometry can be achieved using an fs-pulsed laser at pulse energies significantly above diamond's ablation threshold, owing to the minimal thermal effects and hydrodynamic expansion of the ablated sample material. Ogawa et al. [36] reported that with an increase in energy level there is an increase in MRR, with minimal change in the sp2 graphitised layer thickness.

Common fs laser micromachining systems

Ultrashort-pulsed laser ablation systems are technologically advanced tools capable of processing various materials at extremely high peak powers (up to a few GW), while delivering highly localized energy that enables non-thermal ablation of even transparent and low-absorption samples with high precision and accuracy [92]. Active and passive mode-locked oscillators generate ultrashort pulses that require amplification to reach the energy levels needed to process diamond above its ablation threshold (i.e., at or above 3 J/cm² [31], see Section 2.1). The high peak powers generated in an fs-pulsed laser system can also inadvertently damage the amplifier, making fs-pulse amplification a challenging task. In passive mode-locking, energy is transferred from an external source into a gain medium using a semiconductor diode [100]; the gain medium can be of different chemical composition and structure, commonly a synthetic crystal of the garnet group, which enables the generation of discrete wavelengths, pulse durations, output power ranges and pulse repetition rates in fs-pulsed lasers. A typical fs-laser machining system also includes instrumentation for monitoring the sample machining process, as schematically shown in Fig. 9.

Fig. 9. Schematic layout of a typical fs-laser system (a Ti:sapphire oscillator is shown).

Additionally, the fs-laser machining system may include a separate sample handling and cleaning stage and, occasionally, a vacuum chamber equipped with an auxiliary low-pressure pump. Since the preliminary sp3-to-sp2 phase conversion in diamond and the formation of graphitised nuclei occur via the mutually linked processes of vaporization and oxidation, laser machining is normally performed in air under ambient conditions; vacuum chambers, however, can be used where controlled formation of periodic structures on the diamond surface is required.
The effects of fs-laser process parameters on micromachining quality

The laser processing parameters include the following: laser wavelength, λ (nm); pulse energy, Ep (mJ); pulse duration, τ (fs); pulse repetition rate, Rp (kHz); scanning speed, Vs (mm/s); beam quality, M²; focal length, f (mm); beam diameter, db (mm); and numerical aperture of the focusing objective, NA. Along with these parameters, some common derived terms are also used, such as laser peak power, Pp (GW); average power, Pavg (mW); beam intensity, Ip (W/cm²); focal spot area, Af (cm²); and focal spot diameter, Df (mm).

The peak power Pp is related to Ep and τ as

Pp = Ep / τ. (2)

The pulse energy Ep is related to Pavg and Rp as

Ep = Pavg / Rp. (3)

The beam intensity Ip, which is the energy delivered per unit focal spot area per unit time, is related to Pp and Af as

Ip = Pp / Af. (4)

The fluence F, which is the energy delivered per unit focal spot area, is related to Ep and Af as

F = Ep / Af. (5)

The focal spot area Af is related to Df as

Af = π Df² / 4. (6)

The focal spot diameter Df is related to M², λ, f, and db as

Df = 4 M² λ f / (π db). (7)

Substituting Df from Eq. 7 into Eq. 6 gives a relationship between Af and M², λ, f, and db:

Af = 4 M⁴ λ² f² / (π db²). (8)

Substituting Af from Eq. 8 into Eq. 5 gives a governing relationship between F and Ep, M², λ, f and db:

F = π Ep db² / (4 M⁴ λ² f²). (9)

The size as well as the intensity distribution profile of the projected beam within the irradiated area affect the material response to the applied laser energy [116]. Also, the laser beam propagation characteristics are known to vary during processing in high-power laser delivery systems [117]. A small change in the focal length has a measurable effect on the spot size and the intensity distribution of the projected beam, which in turn influence the stability of the machining process, the local material response and, overall, the MRR [116]. The laser beam diameter is also strongly related to the induced damage probability, overall affecting the damage threshold of the optical system. The smallest laser beam diameter permitted for laser-induced damage threshold testing, as noted in the ISO 21254-1:2011 standard, is 200 μm [118]. Since higher fluences can be obtained using smaller laser beam diameters, many commercial suppliers are aiming to provide laser machining centres capable of generating smaller beam diameters. A small, micrometre-sized focal spot is achieved by focusing fs laser pulses using external lenses, which normally results in non-linear absorption. For a collimated Gaussian beam focused on a dielectric material, the diffraction-limited minimum waist radius w0, which is ½ of Df, is given by

w0 = M² λ / (π NA). (10)

The Rayleigh range z0, which is ½ of the depth of focus (DOF) in a transparent material with refractive index n and free-space wavelength λ, is given by

z0 = π n w0² / λ. (11)

Eqs. 10-11 show that a laser beam with a shorter wavelength and a larger NA produces a smaller beam waist diameter and a smaller depth of focus, which in turn result in a pulse of higher fluence. The number of pulses, N, delivered to a specific spot is related to Df, Rp and Vs as

N = Df Rp / Vs. (12)

Eq. 4 shows that the laser beam intensity is directly related to the pulse energy and inversely related to the pulse duration and focal spot area. Fluence is linearly related to the laser pulse energy and has an inverse-square relationship with the beam wavelength (see Eq. 9). The number of pulses delivered at a specific spot on a sample is linearly related to the focal spot diameter and pulse repetition rate, and inversely related to the scanning speed (see Eq. 12).
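Eqs. (2)-(12) can be collected into a short calculator. The sketch below simply implements the relations above; all numerical inputs are illustrative assumptions (not parameters from any cited study), chosen so that the resulting fluence lands near the ~3 J/cm² diamond ablation threshold.

```python
# Minimal calculator for the fs-laser beam relations in Eqs. (2)-(12).
# All input values are illustrative assumptions.
from math import pi

E_p   = 100e-9      # pulse energy, J (100 nJ, assumed)
tau   = 120e-15     # pulse duration, s
P_avg = 0.1         # average power, W
R_p   = P_avg / E_p # repetition rate, Hz (Eq. 3 rearranged)
M2    = 1.2         # beam quality factor, assumed
lam   = 800e-9      # wavelength, m
f     = 8e-3        # focal length, m
d_b   = 5e-3        # input beam diameter, m
V_s   = 0.1e-3      # scanning speed, m/s

P_p = E_p / tau                      # Eq. (2): peak power, W
D_f = 4 * M2 * lam * f / (pi * d_b)  # Eq. (7): focal spot diameter, m
A_f = pi * D_f**2 / 4                # Eq. (6): focal spot area, m^2
I_p = P_p / A_f                      # Eq. (4): peak intensity, W/m^2
F   = E_p / A_f                      # Eq. (5): fluence, J/m^2
N   = D_f * R_p / V_s                # Eq. (12): pulses per spot

print(f"peak power      : {P_p/1e6:.2f} MW")
print(f"spot diameter   : {D_f*1e6:.2f} um")
print(f"fluence         : {F/1e4:.2f} J/cm^2")  # 1 J/m^2 = 1e-4 J/cm^2
print(f"pulses per spot : {N:.0f}")
```

With these assumed inputs the fluence comes out at roughly 3.3 J/cm², just above the threshold quoted earlier, and the ~2 µm spot receives on the order of 2 × 10⁴ pulses at this scanning speed.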
All these parameters influence the material-ablation characteristics, and the effect of these laser processing parameters on the machining of diamond has been reported in the following experimental studies.

The laser beam intensity is inversely proportional to the pulse duration (see Eq. 4). It was reported that fs-pulsed laser machining of diamond with circular polarization produces an Ra of ~0.02 μm and an MRR of 0.004 mm³/s with no detectable surface-layer graphitization, as shown in Fig. 11(a), and Ra remains below the 0.3 μm range irrespective of the laser power projected on the sample [36]. Notably, the desired combination of high MRR and relatively low Ra during laser machining was only achieved by employing an fs-pulsed laser, as shown in Fig. 11(c).

Fig. 11. Surface quality and MRR obtained in laser machining operations on diamond [36].

Fs-laser machining with linearly polarized light is normally associated with the formation of so-called laser-induced periodic surface structures (LIPSS), which appear on the uppermost surface of the ablated workpiece [119]. LIPSS features vary significantly depending on the selected wavelength and polarization [120,121]; as shown in Fig. 12(a-c), LIPSS topological features can be spatially reduced from ~0.2 µm (Fig. 12(a)) to 90 nm (Fig. 12(b)) when the fs laser wavelength is reduced from 800 nm to 400 nm during laser machining of a diamond sample. Additionally, the application of fs-pulsed laser light with circular polarization results in an Ra reduction from 0.16 µm to 0.096 µm and the almost complete disappearance of LIPSS at the same wavelength [36]. The thickness of the graphitised sp2-rich layer at the uppermost surface of diamond specimens irradiated using an ns-pulsed laser is commonly reported to be a few µm (e.g., 2-5 µm); in contrast, the fs-pulsed laser efficiently ablates the material and the thickness of the graphitized layer is normally much lower, often only a few tens of nm (e.g., less than 50 nm), as reported by Ogawa et al. [36]. Therefore, the fs-pulsed laser produces a quality ablated surface in diamond with minimal graphitization.

A higher pulse energy results in a higher fluence, which in turn generates higher MRRs (refer to Eq. (9)). Zalloum et al. [14] investigated the effect of varying the pulse energy on the micromachining of HPHT SCD (τ 200 fs, λ 800 nm, Rp 250 kHz) and reported no LIPSS-like formation below the 75 nJ ablation threshold, which corresponds to a laser fluence of 9.6 J/cm² for a Df of 1 μm. However, using pulse energies significantly above the ablation threshold (i.e., 0.42 µJ), it was shown that fs pulses produce clear LIPSS-like saw-tooth structures in diamond at a scale below the diffraction limit (Fig. 12(c)). Increasing the pulse energy to 2.52 µJ increased the depth of the LIPSS-like saw-tooth structures (Fig. 12(d)) [14].

Fig. 12. a) SEM images of LIPSS on a diamond film treated by near-normal-incidence P-polarized irradiation.

By using a higher laser beam power and a lower scanning speed, improved material ablation is achieved (refer to Eq. (12)). Dou et al. [122] showed that the width and depth of fs-laser-induced (λ 800 nm, τ 120 fs, Pavg 5 W, Rp 1 kHz) microgrooves in CVD SCD increased with increasing laser power and decreased with increasing scanning speed. The suitable scanning speed for maximizing MRR in fs-laser machining was suggested to be about 0.1 mm/s [122]. The ablation rate in the depth direction was found to decrease when the scanning speed exceeded 0.3 mm/s [122].
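As a quick plausibility check (an illustration, not a re-analysis of [14]), applying Eqs. (5)-(6) to the 75 nJ threshold pulse and 1 µm spot quoted above reproduces the stated ~9.6 J/cm²:

```python
# Fluence of a 75 nJ pulse over a 1 um focal spot, via Eqs. (5)-(6).
from math import pi

E_p = 75e-9            # pulse energy, J
D_f = 1e-6             # focal spot diameter, m
A_f = pi * D_f**2 / 4  # focal spot area, m^2
print(f"{E_p / A_f / 1e4:.2f} J/cm^2")  # -> 9.55, matching ~9.6
```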
Similarly, the machined structure geometry in the fs single-shot ablation regime is affected by the fluence as well as the NA, which determines the focal spot radius, as given in Eq. (10). For applications that require deep, high-aspect-ratio features, it is useful to irradiate the sample sufficiently above its ablation threshold fluence. The micromachined structures were reported to be spherically symmetric for NAs larger than 0.6, whereas for NAs lower than 0.6 the resulting structures become asymmetric (τ 200 fs, λ 800 nm, Rp 250 kHz) [14]. Ionin et al. [103] investigated microscale linear damage tracks inside natural SCD ablated at different NAs and focal depths using an fs-pulsed laser source (λ 744 nm, τ 120 fs, Ep 6.5 mJ). Overall, the MRR increases with increasing laser power and incident fluence, consistent with the reported relationship between the dimple diameter and the projected fs-laser power [102]. Likewise, an increase in pulse repetition rate leads to improved material ablation, due to a greater number of pulses being projected over the irradiated area for the same pulse energy and pulse duration, as given in Eq. (12) [112].

The presence of intrinsic and extrinsic defects and impurities in synthetic diamond provides absorption sites that reduce its ablation threshold, and this must be considered when comparing fs-pulsed laser ablation results on seemingly similar diamond samples. Kononenko et al. [12] showed that the impurities and defects in a CVD diamond sample contributed to the ionization process and increased the seed-electron production for impact ionization, overall reducing the damage threshold of the CVD diamond sample by ~30% compared to a pure, defect-free natural diamond. The reported damage thresholds for the CVD diamond and the natural diamond samples were 10-80 J/cm² and 2-4 J/cm², respectively, for fs-laser irradiation (800 nm, 120 fs, 8 mm objective; beam waist diameter 3.0 µm).

The laser process parameters, including the wavelength, pulse energy, power, pulse duration, repetition rate, scanning speed, numerical aperture and beam polarization, are to be selected carefully to maximise MRRs, attain low Ra values and produce a minimal sp2-graphitised layer in processed diamond samples. One of the most important laser process parameters is the pulse duration, τ, which influences the intensity of the incident beam (refer to Eq. (4)) and thus mostly affects the MRR, Ra and HAZ. For a specific pulse duration regime, the next most important factor is the laser fluence, F, which depends on Ep, M², λ, f and db (refer to Eq. (9)). In the ultrashort fs-pulsed regime, the ablation threshold for diamond is over 3 J/cm² [31]. The third most important factor is the number of pulses delivered at a specific spot, N, which is determined by Df, Rp and Vs (refer to Eq. (12)). Hence, the pulse duration, fluence and number of pulses at a specific spot are the three most important factors dictating material ablation.

4.3 Post-processing of the fs laser irradiated diamond samples

Fs-laser machining is a relatively clean process; however, a small amount of graphitic debris may still form on the machined surface of diamond, which requires removal. The most economical and simplest method is a chemo-mechanical scrub, in which Q-tips, wipes or soft brushes soaked in organic solvents such as ethanol, methanol or acetone,
and/or demineralised water with anionic cleaning surfactants, are employed to manually dislodge and remove the debris [14,107]. The acid scrub involves cleaning the samples in a solution of sulphuric (H2SO4), perchloric and nitric (HNO3) acids (mixed at 5:3:1 wt.%) heated to 200 °C to remove the graphitized layer formed on the machined surface [35]. Likewise, aqua regia, a mixture of HNO3 and hydrochloric acid (1:3 wt.%), can be employed to boil the samples at 120 °C for 2 h [106]. Following the acid cleaning process, the samples are rinsed in deionised water and dried with compressed nitrogen gas [106]. Ultrasonic cleaning [114,125] removes debris from machined diamond samples by using cavitation bubbles, produced by high-frequency (15-400 kHz) sound waves, to agitate the liquid medium in which the samples are placed for the duration of the ultrasonic cleaning cycle. Higher frequencies produce smaller nodes between the cavitation points, resulting in more precise cleaning of finer features. Often, diamond samples are ultrasonically cleaned in H2SO4 or HNO3 acid for 0.5-1 h, followed by a deionised water rinse [114,125]. One of the advantages of fs-laser machining is the easy, economical and uncompromising way in which the samples can be cleaned compared to ns- or ps-laser machined samples; the latter require more rigorous post-processing to remove the fused sp2 debris, including multiple repeated steps of chemo-mechanical scrubbing, acid scrubbing and ultrasonic cleaning, and even mechanical grinding [82] and high-pressure gas jet ablation [80].

Characterisation of micromachined diamond samples

In industrial settings, laser-irradiated diamond samples are typically analysed using a limited range of optical and electron microscopy tools to characterise the 3D surface topography and Ra, together with spectroscopic techniques to characterise their chemical composition. These techniques vary in their processing capabilities and scalability and are often employed for batch sample screening. The pattern and geometry of laser-machined surfaces are normally inspected visually using an optical microscope equipped with high-magnification (i.e., 100X) objective lenses [35,82,104,106,107]. Diffraction limits the spatial resolution of examined features to ~200 nm (~½ of the wavelength of blue 400 nm light). 3D optical microscopy, represented by laser scanning confocal microscopy (LSCM) and white light interferometry (WLI), enables detailed observation of the surface profile, 3D surface topography and ablated structures [114] by using light directed in such a way that the 3D surface is detected. The batch-processing capabilities of LSCM and WLI vary significantly; however, both LSCM [115] and WLI [107,125] offer sub-μm spatial resolution and detailed measurement and analysis of 3D surface topography. Mechanical surface profilometry uses a contact physical probe with contact-force feedback and allows a limited but straightforward characterisation of surface morphology at ~10 μm resolution in samples where surface features are highly pronounced and surface roughness is high [126]. To a lesser degree, atomic force microscopy (AFM), with a spatial resolution of a few nm, allows precise surface topography studies, including of LIPSS features [106], but at much reduced sample processing rates and increased costs. SEM, with sub-50 nm spatial resolution, generates an image of the sample by detecting the reflected electrons, focusing only on the surface.
SEM provides a relatively simple, quick and less costly method to study samples with diverse geometries [107,125,127]. However, the application of SEM to diamond batch processing is limited, as the measurements require compensation for the charge accumulating on a non-conducting (i.e., diamond) sample [128]. Using surface profile data, the MRR can be calculated by (i) multiplying the cross-sectional area by the scanning speed [125] and/or (ii) dividing the total material removal volume by the laser machining time [82,129,130].

Visible-wavelength Raman spectroscopy is only sensitive to the homo-nuclear bonds (e.g., C-C and C=C) of the sp2 fraction, and therefore its application to the precise screening of hydrogenated diamond samples is limited unless complemented with UV Raman, which can directly probe the sp3 DOS. FT-IR, on the other hand, allows the measurement of protonated and hetero-nuclear functional groups (e.g., -OH), and as a traditional, well-established method, FT-IR often complements Raman when used to classify diamonds [137]. Other analytical techniques, such as X-ray photoelectron spectroscopy (XPS) [49,138,139], electron spectroscopy for chemical analysis (ESCA) and X-ray powder diffraction (XRD) [127], despite offering an enhanced capability to identify the sp2 and sp3 phases and the surface and subsurface chemical composition of samples, are used sparingly in industrial settings owing to their low batch-processing capabilities and rather involved analytical methods.

APPLICATIONS OF FS-LASER MICROMACHINED DIAMOND

Laser micro-patterned diamond tools [140], which are used to mechanically machine multiple micro-patterns on ultra-hard workpieces upon direct contact, can be effectively fabricated using the fs-laser irradiation process [130]. In their production, a high-intensity spatial light modulator, such as a digital micro-mirror device, is employed to project complex patterns onto a diamond workpiece sample. This method of production is also making its way into the fabrication of identification tags on diamond and the direct machining of ultra-hard ceramic materials [141]. Fs-pulsed laser micromachining of diamond can potentially be employed for the fabrication of diamond pencils or micro-grinding wheels to produce MEMS [114], light-emitting diodes [142,143], precision photonic components and flat-panel display components [57], including nano- and micro-scale photonic channels. Fs-laser machining has immense potential in the fabrication (i.e., drilling) of ultra-precise circular holes in pre-indented diamond anvil cell (DAC) gaskets used in DAC ultra-high-pressure devices [144,145], and in photonic systems such as birefringent regions and Bragg gratings [146,147]. Diamonds are prime candidate materials for X-ray compound refractive lenses [148], since they are not only able to withstand extreme radiation and thermal loads but also provide an exceptionally effective and robust optical focusing solution [149,150]. For manufacturing such X-ray lenses, fs-laser micromachining is the only technique devoid of known technological and manufacturing drawbacks that is capable of producing lenses with the lowest surface roughness [151]. Nitrogen-vacancy (NV) centres, which occur when an N atom and a vacancy replace two adjacent sites in the diamond crystal lattice [152], have shown potential for quantum computing and sensing applications [150,153,154], since NV centres can be optically detected, read out and manipulated owing to their long electron-spin coherence times [155][156][157][158].
However, the intrinsic inertness of diamond is a significant obstacle to the fabrication of NV-integrated optical systems. The application of fs-laser pulses with high repetition rates can solve this problem by machining optical waveguides in diamond and connecting multiple diamond NVs together [112], since there are established routes to the in-situ fabrication of single-mode waveguides, from the visible to the IR, in diamond crystals, enabling applications such as evanescent-field sensors, quantum information systems and magnetometry [112]. Since fs-laser processing produces a reversible yet controllable change in the refractive index of diamond membranes, these changes can be effectively exploited for sensing or optical engineering applications [159]. The LIPSS that appear on the surface of diamond open new application avenues such as micro-solar cell devices [160,161].

In semiconductor manufacturing, diamond can be efficiently sliced into thin semiconductor wafers, a process that is otherwise quite challenging owing to its hard and brittle nature. Since fs-laser internal processing converts an inner layer of diamond to sp2 graphite, which can easily be separated, the fabrication of ultra-thin diamond wafers of 1×1 mm² is possible [162]. Likewise, fs-laser irradiation makes it possible to fabricate deep-buried graphitic pillars inside diamond [108] that can function as 3D detectors for hadron therapy, reaching deep tumours without damaging the nearby healthy tissue [163]. Conduction in such detectors occurs through highly conductive buried sp2 graphitic channels, which can be fabricated with a diameter of ~1.5 μm and a length of ~1150 μm [12].

Non-thermal photo-ablation of diamond produces negative electron affinity (NEA), an effect by which the conduction band is shifted higher in energy than the vacuum level [164]. Hydrogen-terminated diamond surfaces display NEA only up to ∼700 °C (above 700 °C the H-termination is cleaved), making them unsuitable for diamond-based thermionic applications [165], in which high temperatures are converted into electricity via an electron emission process. Diamond-based thermionic energy converters with an emitter (i.e., cathode) made of 'black diamond' [166], absorbing more than 99.9% of the light that strikes it, could possibly be fabricated using fs-laser ablation, simultaneously producing the nano-textured LIPSS surface required to harvest solar radiation in diamond-based photon-enhanced thermionic emission (PETE) devices [167]. Evidently, the fs-laser-induced properties of diamond are technologically significant and await many other industrial applications.

Fs-laser processing technology offers new opportunities for unique applications owing to its clean and essentially non-thermal processing nature, which produces minimal HAZs and the lowest technologically attainable surface roughness, and allows the processing of transparent materials through a non-linear absorption mechanism. Compared to conventional ns- and ps-laser ablation processes, fs-machining can also generate exceptionally high peak powers and ablate diamond samples with minimal graphitization.

6. SUMMARY AND FUTURE RESEARCH PROSPECTS

6.1 The findings of this review can be summarized as follows:

1. Fs-laser irradiation can precisely alter the surface topography and transform the chemical structure of diamond depending on the beam characteristics, its intensity and polarization.
The application of multi-photon absorption using a focused fs-pulsed laser with a high spatial and temporal photon density is able to generate high MRRs; 2. The HAZ and Ra can be effectively minimised by using an fs-pulsed laser owing to its ultra-short interaction time: an fs-pulse transfers its energy via photon-electron coupling prior to phonon-driven thermal diffusion; 3. Laser-induced ablation is significantly improved by the presence of intrinsic and extrinsic defects in diamond, which serve as absorption sites; 4. Among all the laser systems, the Ti:Sapphire laser can generate the shortest pulses (as short as 5 fs) in the mode-locked regime of operation and can be exploited in future applications; 5. The laser process parameters are to be selected carefully to maximise MRRs, attain low Ra and produce a minimal sp2 graphitized region; among all, the pulse duration, fluence and number of pulses at a specific spot are the three most important factors dictating the response of the material to the incident laser energy; 6. Fs-laser processing is a relatively clean process generating a minimal sp2 graphitized layer and requires minimal post-processing; 7. 3D surface topography and Ra can be precisely studied using 3D optical microscopy measurements, and MRR can then be calculated from the 3D surface profile; the chemical composition of the micromachined samples can be precisely studied using UV Raman, and especially multiwavelength Raman, spectroscopy; 8. Applications of fs-laser processing are diverse owing to its unique characteristics, such as non-linear absorption and a non-thermal nature, which extend its application to the processing of transparent and ultra-hard materials. Future research prospects The future research prospects are identified as follows: 1. Laser process parameters need to be optimized to maximize the MRR, minimize Ra and further reduce the occurrence of HAZs in processed diamond samples; 2. The theoretical framework used to accurately describe the response of diamond to fs-laser pulses needs to be further refined; 3. The fs-laser ablation technique needs to be further developed so that it can operate at a 250 nm (~5 eV) wavelength to effectively excite the sp3 DOS. AUTHOR CONTRIBUTIONS BAK carried out the search and collection of review data, carried out data analysis, and drafted the manuscript. MR designed, conceptualised and coordinated the study, carried out data analysis, and drafted and critically revised the manuscript. IVL revised the manuscript. All authors gave final approval for publication and agree to be held accountable for the work performed therein. CONFLICT OF INTEREST The authors confirm that there are no known conflicts of interest associated with this publication and there has been no significant financial support for this work that could have influenced its outcome.
Identification of a methyltransferase-related long noncoding RNA signature as a novel prognosis biomarker for lung adenocarcinoma Background: Lung adenocarcinoma (LUAD) accounts for a high proportion of tumor deaths globally, while methyltransferase-related lncRNAs in LUAD have been poorly studied. Methods: In our study, we focused on two distinct cohorts, TCGA-LUAD and GSE30219, to establish a signature of methyltransferase-related long non-coding RNAs (MeRlncRNAs) in LUAD. We employed univariate Cox and LASSO regression analyses as our main analytical tools. The GSE30219 cohort served as the validation cohort for our findings. Furthermore, to explore the differential pathway enrichments between groups stratified by risk, we utilized Gene Set Enrichment Analysis (GSEA). Additionally, single-sample GSEA (ssGSEA) was conducted to assess the immune infiltration landscape within each sample. Reverse transcription quantitative PCR (RT-qPCR) was also performed to verify the expression of prognostic lncRNAs in both clinically normal and LUAD samples. Results: In LUAD, we identified a set of 32 MeRlncRNAs. We further narrowed our focus to six prognostic lncRNAs to develop gene signatures. The TCGA-LUAD cohort and GSE30219 were utilized to validate the risk score model derived from these signatures. Our analysis showed that the risk score served as an independent prognostic factor linked to immune-related pathways. Additionally, the analysis of immune infiltration revealed that the immune landscape in high-risk groups was suppressed, which could contribute to poorer prognoses. We also constructed a regulatory network comprising 6 prognostic lncRNAs, 19 miRNAs, and 21 mRNAs. Confirmatory RT-qPCR results aligned with public database findings, verifying the expression of these prognostic lncRNAs in the samples. Conclusion: The prognostic gene signature of LUAD associated with MeRlncRNAs that we provide may offer a comprehensive picture of prognosis prediction for LUAD patients. INTRODUCTION For decades, lung cancer has ranked among the most frequently diagnosed malignancies worldwide and remains the leading cause of tumor-induced fatalities [1]. Routinely, lung cancer is classified into several pathological subtypes, such as small cell lung cancer (SCLC) and non-small cell lung cancer (NSCLC), among which the most common histological subtype is lung adenocarcinoma (LUAD), accounting for about 50% of all lung cancer cases [2]. Although the diagnosis and treatment of LUAD are constantly improving compared with the past, the mortality rate has not decreased significantly [3]. Even though molecular targeted therapy, as well as immunotherapy, has made tremendous advances, the overall survival (OS) remains unsatisfactory on account of a lack of new biological indicators associated with the prognosis of LUAD. In addition, LUAD is often diagnosed at an advanced stage without the opportunity for surgical treatment, after the primary tumor focus has already spread to nearby tissues or organs [4]. Therefore, it is crucial to identify key prognostic indicators for LUAD.
Recent advancements in genomic and transcriptomic analyses have unveiled the complex landscape of long non-coding RNAs (lncRNAs) and their pivotal roles in various biological processes, including the progression and pathogenesis of LUAD [5,6]. Additionally, lncRNAs have biological repertoires in malignant tumor immunology, including tumor antigen expression, immunological escape, immune checkpoints, and infiltration. As a result, they may have great potential as biomarkers for determining prognosis [7]. Recently, it was shown that methyltransferase-relevant long noncoding RNA (MeRlncRNA) regulators promote the occurrence and progression of glioma and are critical for determining prognosis and therapeutic approach [8]. A series of enzymes have been proven to target certain specific lncRNAs, such as methyltransferase-like 3 and DNA methyltransferase-like 2 [9]. Among these, MeRlncRNAs have emerged as critical players in the epigenetic regulation of gene expression, influencing tumorigenesis, metastasis, and response to therapy in LUAD [10,11]. Moreover, the dysregulation of MeRlncRNAs has been correlated with patient prognosis, suggesting their potential as novel biomarkers for LUAD diagnosis and prognosis prediction. For instance, the expression of methyltransferase-like 1 is not only elevated in LUAD, but the degree of its increase is also inversely proportional to the prognosis of cancer patients [12]. Despite their significance, the roles of MeRlncRNAs in LUAD remain inadequately explored, necessitating further investigation to elucidate their mechanisms of action and their implications in lung adenocarcinoma pathophysiology. This study aims to bridge this gap by identifying and characterizing a signature of MeRlncRNAs associated with the prognosis of LUAD patients, thereby contributing to a more comprehensive understanding of their biological functions and clinical relevance. Numerous details on tumors, including gene expression, methylation, mutation, and clinical characteristics, are available via The Cancer Genome Atlas (TCGA) and Gene Expression Omnibus (GEO). Our research identified MeRlncRNAs in LUAD for the first time and then developed a MeRlncRNA-related prognostic gene model using univariate Cox regression and the least absolute shrinkage and selection operator (LASSO). We also verified the validity of the model via internal and external cohorts. Besides, this study explored the association between the risk score and clinical characteristics, further examined the risk score as a prognostic marker independent of other clinical features, and successfully constructed a nomogram in LUAD. Interestingly, the results of GSEA, employed to explore the mechanism behind the prognostic differences between the high- and low-risk sets, showed that the immune microenvironment of the high-risk group was inhibited, which probably was the cause of the poor prognosis. Finally, we performed an RT-qPCR experiment to quantitatively detect the expression of the six prognostic lncRNAs in LUAD tissues and control lung tissues and verified the consistency of the quantitative results with the gene database data. In summary, based on transcriptome data from public databases and corresponding clinical information, bioinformatics methods were used to establish methyltransferase-related lncRNA gene signatures for predicting the prognosis of LUAD patients, thereby laying the groundwork for clinical prognosis and targeted therapeutic interventions.
Identification of differentially expressed genes (DEGs), lncRNAs (DElncRNAs), and methyltransferase-related lncRNAs (MeRlncRNAs) in LUAD Screening of the DEGs and DElncRNAs was executed with the 'limma' package [13,14]. The threshold for DEGs was adjusted p < 0.05 and |log2FC| > 1. The volcano map was created with the 'ggplot2' package. The 'heatmap' package was utilized to plot the heatmap of the top 50 up-regulated and top 50 down-regulated genes. The identification of differentially expressed methyltransferase-related genes was achieved through the intersection of the differentially expressed genes (DEGs) and methyltransferase-related genes. The Pearson correlation between differentially expressed methyltransferase-related genes and DElncRNAs was calculated, and the relationship pairs with |correlation coefficient| > 0.3 and p < 0.05 were screened to build the mRNA-lncRNA network and obtain MeRlncRNAs. Construction and verification of the gene signature based on MeRlncRNAs A total of 483 patients diagnosed with LUAD were subjected to random allocation, with 338 patients assigned to the training set and 145 patients assigned to the test set, maintaining a proportion of 7:3. For the identification of prognostic MeRlncRNAs, we implemented a two-step approach: initially, univariate Cox regression analysis was conducted to assess the association between the expression levels of each MeRlncRNA and overall survival in LUAD patients. MeRlncRNAs with a p-value < 0.05 in this analysis were deemed potentially prognostic and subjected to further evaluation using LASSO regression analysis. LASSO regression using the 'glmnet' R package, known for its efficacy in handling high-dimensional data, was applied to refine the selection of MeRlncRNAs by penalizing the regression coefficients, thus preventing overfitting and enhancing the model's predictive accuracy. The formula for computing risk scores was Riskscore = β1X1 + β2X2 + … + βnXn, where βi denotes the regression coefficient of the i-th prognosis-related lncRNA and Xi represents its expression level. In our study, patient stratification was based on the median value of the risk score, which served as a critical metric for dividing patients into low-risk and high-risk groups. To visualize and analyze the survival differences between these groups, Kaplan-Meier (K-M) survival curves were generated, and the statistical significance of differences in survival was assessed using the log-rank test. Additionally, the 'survivalROC' package in R was employed to calculate the area under the curve (AUC), providing a quantitative measure of the prognostic signature's accuracy in predicting patient outcomes. To further illustrate the distribution of risk scores and their correlation with patient outcomes, risk plots were created utilizing the 'heatmap' package in R. These plots offered a visual representation of the risk score distribution across patients, alongside key clinical features, thus facilitating a comprehensive analysis of the prognostic model's performance. For external validation of our prognostic model, the GSE30219 cohort was utilized as the validation cohort. In the GEO cohort, participants were divided into low-risk and high-risk groups, using the median as the dividing criterion. This step was crucial for assessing the model's generalizability and reliability across different patient populations, ensuring that our findings hold potential clinical relevance beyond the initial study cohort.
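To make the risk-score construction concrete, the following Python sketch applies the formula above using the six coefficients reported later in this paper (Supplementary Table 5); the expression matrix is randomly generated for illustration only and does not represent real patient data.

```python
import numpy as np

# Coefficients as reported later in this paper (Supplementary Table 5);
# the expression matrix below is illustrative, not real patient data.
coeffs = {
    "RP11-251M1.1": -0.081695395,
    "RP1-78014.1":  -0.022561732,
    "LINC01936":    -0.071522442,
    "LINC00511":     0.012186292,
    "RP11-750H9.5": -0.10909988,
    "CTD-2510F5.4":  0.105977347,
}

rng = np.random.default_rng(0)
# rows = patients, columns = the six prognostic lncRNAs (assumed expression values)
expr = rng.normal(loc=5.0, scale=1.0, size=(10, len(coeffs)))

beta = np.array(list(coeffs.values()))
risk_scores = expr @ beta                          # Riskscore = sum_i beta_i * X_i
high_risk = risk_scores > np.median(risk_scores)   # median split into risk groups

for i, (score, hr) in enumerate(zip(risk_scores, high_risk)):
    print(f"patient {i}: risk={score:.3f} -> {'high' if hr else 'low'}-risk")
```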
Risk score and clinicopathological parameters correlation The association between risk scores and clinical features, including gender, age, pathological stage, and TNM classification, was determined using the Wilcoxon test. Prognostic analysis and nomogram construction Utilizing the 'survminer' package in R, both univariate and multivariate Cox regression analyses were conducted to identify independent predictors of OS. Subsequently, a nomogram incorporating these independent prognostic factors was developed with the 'rms' package in R. Gene set enrichment analysis (GSEA) The 'GSEA' package was employed to identify pathways significantly enriched between the high- and low-risk samples based on the expression differences of MeRlncRNAs. We used the Molecular Signatures Database (MSigDB) curated gene set collections (KEGG and Hallmark gene sets) as the reference for pathway analysis. The analysis parameters were set to 1000 permutations to estimate the enrichment score (ES) significance, with a nominal p-value < 0.05 and |NES| > 1 considered statistically significant. Estimation of immune cell infiltration For the assessment of immune infiltration landscapes, we applied ssGSEA using the 'GSVA' package in R. This method allows the estimation of pathway activity levels in individual samples based on their gene expression profiles. We used a predefined gene set comprising genes associated with immune cell types and functions. The ssGSEA scores were calculated for each sample to derive the immune infiltration landscape, facilitating the comparison between the high-risk and low-risk groups as defined by the prognostic gene signature. Differences in immune cell infiltration were depicted through box plots, and the Pearson association between the risk score and immune cells was calculated. Construction of the lncRNA-miRNA-mRNA regulatory network We first used Miranda to predict the miRNAs targeted by the prognostic lncRNAs and then used Starbase to predict the mRNAs targeted by these miRNAs, from which the lncRNA-miRNA-mRNA network was assembled. RNA preparation and quantitative real-time polymerase chain reaction (RT-qPCR) ServiceBio Inc.'s Nuclezol LS RNA isolation reagent was used to isolate total RNA from the 20 samples, comprising 10 normal and 10 LUAD tissues. The SureScript First-strand cDNA synthesis kit (ServiceBio Inc.) was then used to reverse transcribe total RNA into cDNA. After that, qPCR was carried out using GeneCopoeia's BlazeTaq SYBR Green qPCR Mix 2.0. A succession of procedures, from RNA reverse transcription to thermocycling, was carefully undertaken. Primer sequences are given in Table 1. The relative expression level was evaluated by the comparative 2^−ΔΔCT approach [15]. Statistical analysis All analyses were conducted using R software. The Wilcoxon test was applied for comparing data between groups. A p-value < 0.05 was considered statistically significant, except in cases where specific circumstances dictated otherwise. Data availability statement All data generated or analysed during this study are available in the TCGA.
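As a hedged illustration of the comparative 2^−ΔΔCT calculation mentioned above, the short Python sketch below computes a fold change from hypothetical Ct values; the reference gene (GAPDH) and all numbers are assumptions, since raw Ct data are not reported here.

```python
import numpy as np

# Minimal sketch of the comparative 2^-ddCt calculation described above.
# Ct values and the reference gene (GAPDH) are illustrative assumptions;
# the paper does not report raw Ct data here.
ct_target_tumor  = np.array([24.1, 23.8, 24.5])  # lncRNA Ct in LUAD samples
ct_ref_tumor     = np.array([18.0, 17.9, 18.2])  # reference gene Ct in LUAD
ct_target_normal = np.array([26.0, 25.7, 26.3])  # lncRNA Ct in normal samples
ct_ref_normal    = np.array([18.1, 18.0, 18.2])  # reference gene Ct in normal

dct_tumor  = ct_target_tumor - ct_ref_tumor      # dCt = Ct(target) - Ct(reference)
dct_normal = ct_target_normal - ct_ref_normal
ddct = dct_tumor.mean() - dct_normal.mean()      # ddCt relative to normal tissue

fold_change = 2.0 ** (-ddct)                     # relative expression level
print(f"ddCt = {ddct:.2f}, fold change = {fold_change:.2f}")
```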
Identification of methyltransferase-related lncRNAs (MeRlncRNAs) in LUAD Compared with the normal samples, 741 up-regulated genes and 931 down-regulated genes were mined from the LUAD samples, giving a total of 1672 DEGs (Supplementary Table 1 and Figure 1A). The top 50 genes each with the most significant up-regulated or down-regulated expression were used to draw a heat map, as shown in Figure 1B. Then, we obtained 156 methyltransferase-related genes from MSigDB after de-duplication (Supplementary Table 2). Hence, five differentially expressed methyltransferase-related genes (SNRPE, TFB2M, EZH2, MRM1, and METTL1) were discovered by intersecting the 1672 DEGs and the 156 methyltransferase-related genes (Figure 2A). Meanwhile, 87 DElncRNAs between normal and LUAD samples were also extracted and are listed in Supplementary Table 3. In the following, the Pearson correlation between the above five methyltransferase-related genes and the DElncRNAs was calculated. To build the mRNA-lncRNA network, relationship pairs with |correlation coefficient| > 0.3 and p < 0.05 were chosen (Supplementary Table 4). Finally, an mRNA-lncRNA network containing 36 nodes (32 lncRNAs and 4 mRNAs) and 80 edges was generated (Figure 2B). The 32 lncRNAs in the network were defined as methyltransferase-related lncRNAs (MeRlncRNAs) in LUAD for subsequent analysis. Construction of prognostic signature based on MeRlncRNAs A training set of 338 specimens and a testing set of 145 individuals were created from the 483 LUAD patients in the TCGA cohort. After that, the 32 MeRlncRNAs from the training cohort were used in the univariate Cox regression analysis to find the lncRNAs associated with prognosis. In the training set, the OS of cancer patients was shown to be strongly correlated with 13 out of the 32 MeRlncRNAs (Figure 3A). Then, the 13 lncRNAs were further submitted to LASSO regression analysis. Six MeRlncRNAs (RP11-251M1.1, RP1-78014.1, LINC01936, LINC00511, RP11-750H9.5, and CTD-2510F5.4) were identified as prognostic lncRNAs (Figure 3B), with lambda.min = 0.048 (Figure 3C). Details such as the regression coefficients can be found in the Supplementary Table (Supplementary Table 5). In the TCGA-LUAD cohort, the training group was divided into a low-risk group and a high-risk group, using the median as the dividing criterion. High-risk patients had a worse prognosis than low-risk patients (Figure 4A). In the training set, the AUC values for 1, 3 and 5 years were 0.656, 0.651, and 0.627, respectively (Figure 4B). Figure 4C visually demonstrates the risk score and survival status in the training cohorts. The survival status distribution plot shows that as the risk score increases, patients face a greater risk of death. Figure 4D shows the expression of the six prognostic lncRNAs. Transcription analysis results showed that the expression of LINC00511 and CTD-2510F5.4 was upregulated in the high-risk group, while the expression of the remaining four lncRNAs was upregulated in the low-risk group. External validation in the GSE30219 cohort We employed a separate cohort made up of 293 lung cancer patients from GSE30219 to confirm the model's external applicability. Those with a heightened risk score evidenced a compromised OS, echoing the findings from the TCGA cohorts (Figure 5A). Accordingly, the AUC estimations for one, three, and five years were recorded as 0.672, 0.656, and 0.671, respectively (Figure 5B). Figure 5C, 5D shows that the number of deaths increases as the risk score increases in the external data validation.
Figure 5E shows the difference in expression levels of the six prognosis-related lncRNAs between the high- and low-risk groups. These outcomes further supported the ability and reliability of the MeRlncRNA-related risk model to predict 1-year, 3-year, and 5-year survival of patients. Relationship between the gene signature and clinical characteristics Exploring the association between the risk score and clinical characteristics can better elucidate the significance of the gene signature in the occurrence and development of LUAD. As shown in Figure 6, the risk score was higher among males, individuals older than 55 years, and those with a high N stage, supporting the conclusion that the risk score is associated with factors such as gender, age, and tumor malignancy. Independent prognosis and nomogram construction Following that, univariate and multivariate Cox regression analyses were used to determine whether the risk score was a prognostic factor for LUAD patients independent of the other clinical factors in this study. Pathological stage, T stage, N stage, and the risk score were all associated with patient outcome (Figure 7A). The multivariate Cox regression analysis results emphasize that the risk score can independently predict the prognosis of patients (Figure 7B). Furthermore, in the multivariate analysis, the pathologic stage was determined to be a significant prognostic predictor. Then, using independent prognostic markers such as pathologic stage and risk score, we built a nomogram that could predict patients' OS at one, three, and five years (Figure 7C). The calculated C-index of the nomogram was 0.6966719, suggesting good accuracy in predicting patient survival. The landscape of immune cell infiltration between the high- and low-risk groups Given the prominence of immune pathways in the GSEA, a differential examination of immune cell infiltration was conducted between the risk groups based on the ssGSEA scores of immune-related cells. The ensuing data indicated an inverse correlation between the risk score and cells such as CD8 T cells, cytotoxic cells, DC, eosinophils, iDC, macrophages, mast cells, pDC, T cells, TFH, and Tgd (Figure 10A-10K). Meanwhile, Th2 cells exhibited a positive correlation with the risk score (Figure 10L). Construction of the lncRNA-miRNA-mRNA regulatory network To further explore the ceRNA regulation mechanism based on the six prognostic lncRNAs, we attempted to construct a lncRNA-miRNA-mRNA regulatory network. Combining the prediction results of the public databases with the methyltransferase-related lncRNAs, we finally created a regulatory network containing 6 prognostic lncRNAs, 19 miRNAs, and 21 mRNAs (Figure 11, Supplementary Table 7). Validation of the expression of lncRNAs Data from the TCGA database showed that LINC00511 and CTD-2510F5.4 were up-regulated, as seen in Supplementary Table 3, while RP11-251M1.1, RP1-78014.1, LINC01936, and RP11-750H9.5 were down-regulated in LUAD samples. Ten normal and ten LUAD samples were gathered, the RNA was extracted, and RT-qPCR was carried out; the results further confirmed these findings. RP11-251M1.1, RP1-78014.1, LINC01936, and RP11-750H9.5 were down-regulated, as can be seen in Figure 12. When compared to normal samples, LINC00511 and CTD-2510F5.4 were more highly expressed in LUAD samples. In conclusion, the RT-qPCR results were in line with the information in the public database.
DISCUSSION The morbidity of LUAD, the most common subtype of lung cancer, has gradually increased in recent decades. Despite the wide range of therapeutic strategies, such as surgical operation, radiotherapy, chemotherapy, and targeted therapy, that have been applied to LUAD patients, the OS has not reached acceptable standards. Genetic alteration and immunological dysregulation in the internal humoral milieu are strongly associated with the development, invasion, and recurrence of LUAD [16]. Due to the crucial influence of the human immune system on carcinoma progression [17], an array of immunotherapeutic treatments has been applied to eliminate tumor cells [17]. However, owing to the heterogeneity of the biological characteristics of patients with LUAD [18], different individuals have different responses to actual clinical immunotherapy, which means some patients might have an unfavorable therapeutic effect. In this study, the mRNA-lncRNA network was constructed to preliminarily identify MeRlncRNAs, and then a gene signature closely related to immune-cell infiltration was established. Finally, we successfully performed RT-qPCR to verify the expression of the prognostic MeRlncRNAs, which highly conformed to the outcomes from the open tumor database. The burgeoning field of lncRNAs has garnered significant attention for their pivotal roles in tumorigenesis, tumor progression, and metastasis, positioning them as promising therapeutic targets and prognostic biomarkers for various malignancies [19]. Figure 11. Prognostic lncRNA-miRNA-mRNA regulatory network in LUAD. Notably, aberrant expression of lncRNAs has been intricately linked to the immunopathologic dynamics of LUAD, suggesting their critical involvement in the disease's immune microenvironment [20]. In a landmark study, Qian et al.
[11] illuminated the landscape of lncRNA involvement in LUAD by identifying and characterizing LCAT3, a novel lncRNA significantly upregulated in LUAD tissues compared to adjacent normal tissues and associated with a poor prognosis in lung cancer patients. LCAT3's oncogenic potential was evidenced by its capacity to boost lung cancer cell growth. Epigenetic modifications, involved in regulating gene expression at the transcriptional level [21], consist of RNA methylation, gene silencing, genomic imprinting, and lncRNA activities, and thereby participate in tumorigenesis, progression, and metastasis [21]. Further investigation revealed that METTL3, a central player in the m6A methyltransferase complex, is upregulated in lung cancer and facilitates the m6A modification of LCAT3 [11]. This modification stabilizes LCAT3, elucidating a potential mechanism behind its overexpression in LUAD. The mechanistic pathways of LCAT3 extend to its direct interaction with FUBP1, which in turn upregulates c-MYC expression, a cornerstone oncogenic transcription factor implicated in cell proliferation, differentiation, and metabolism. The silencing of LCAT3 or FUBP1 markedly diminishes c-MYC levels, underscoring the critical LCAT3/FUBP1/c-MYC axis in lung cancer progression. Given the paramount importance of c-MYC in cellular regulatory mechanisms, targeting the LCAT3/FUBP1/c-MYC axis emerges as a novel and promising therapeutic strategy for LUAD. This research not only accentuates the critical role of lncRNAs and m6A modification in the oncological narrative but also charts a course towards the development of targeted LUAD therapies by disrupting the LCAT3/FUBP1/c-MYC network. Moreover, it casts a spotlight on the largely unexplored terrain of MeRlncRNAs and their potential interplay with immune regulation in LUAD's immune microenvironment [21], paving the way for future investigations that could further unravel the complex molecular interactions at play in lung cancer. As for exploring the functions of MeRlncRNA mediation in the immune system in relation to LUAD prognosis, we finally screened six differentially expressed MeRlncRNAs and created a model for predicting the prognosis. Among the MeRlncRNAs included in the signature, it was found that some lncRNAs were upregulated in the low-risk group, such as RP11-251M1.1, RP1-78014.1, LINC01936, and RP11-750H9.5, which were protective factors for OS. Others were up-regulated in the high-risk group, such as LINC00511 and CTD-2510F5.4, which were risk factors for OS. This suggests that their abnormal expression levels may be involved in the progression of cancers, including LUAD [22]. Up-regulation of LINC01936 contributed highly to a decreased risk of death, with a hazard ratio (HR) of 0.86 [23]. LINC00511, a MeRlncRNA, is markedly up-regulated in colorectal carcinoma and is closely associated with the progression of malignancy [24]. In addition, Zhang et al. and Wang et al. demonstrated that LINC00511 was upregulated in LUAD and enhanced LUAD malignancy [25,26], which is consistent with our research. CTD-2510F5.4, another lncRNA, is associated with tumor phenotypes and is regarded as a robust biomarker for clinical diagnosis and prognosis in gastric cancer [27]. Compared to normal tissue, the expression level of RP1-78014.1 in squamous cell carcinoma and LUAD was lower, which is highly consistent with our outcomes [28]. But RP11-251M1.1 and RP11-750H9.5 are newly emerging lncRNAs, which means that our signature has both strong predictive and innovative value.
In the study of LUAD patients, we observed a division based on the median risk score, effectively sorting patients into high-risk and low-risk categories. Notably, those categorized as high-risk demonstrated poorer clinical outcomes. Through rigorous analysis using multivariate Cox regression, our methyltransferase-related lncRNA signature was identified as an independent predictor of OS. This model outperformed traditional clinical predictors in forecasting survival outcomes for LUAD, as evidenced by ROC curve analysis. To further refine our prognostic model, we developed a nomogram that accurately aligns the predicted OS with observed outcomes at one, three, and five years, showcasing its reliability and the model's predictive precision. This level of concordance underscores the utility of our risk model, which is based on six MeRlncRNAs, as both a robust and accurate tool for future clinical research in LUAD. It opens new avenues for identifying potential biomarkers that could significantly impact the prognosis and therapeutic strategies for patients with LUAD. GSEA enrichment assessments revealed pathways such as the cell cycle, DNA replication, the P53 signaling pathway, cancer pathways, G2M checkpoints, cytokine-receptor interactions, and JAK-STAT signaling, all showing pronounced disparities across the risk groups. The cell cycle, P53 signaling pathway, cancer pathways, DNA replication, and G2M checkpoint were all significantly greater in the high-risk group, which has been linked with a poor prognosis. P53 is a transcription factor that binds specifically to DNA [29]. It can control the expression of certain genes and trigger apoptosis, DNA repair, and cell cycle arrest [30]. The suppression of antitumor immunity in tumors can be caused by increased cell cycle activity, which has important implications for immunotherapy [31]. DNA replication ensures that a cell's genetic material is appropriately copied and passed on to its progeny cells [32]. However, DNA replication is susceptible to interference and damage under a variety of physiological circumstances, which can cause it to stall, impair the integrity of the genome, and even cause apoptosis, necrosis, and cancer. The fundamental cell cycle step known as the G2M checkpoint ensures that cells will not enter mitosis until damaged or incompletely duplicated DNA has been completely repaired. It has been reported that the expression of genes involved in the checkpoint pathway is related to the survival outcomes of lung cancer [33]. This study additionally discovered that immune-related signaling pathways were enriched in the low-risk group, revealing that immune cell infiltration in the immune microenvironment is directly related to the prognosis of LUAD [34].
This study delved into the relationship between immune cell infiltration and the newly established risk score, conducting a comparative analysis of immune infiltration in both the high-risk and low-risk groups. The comparative findings revealed a greater prevalence of immune cells in the low-risk group, suggesting that suppression of the immune microenvironment in the high-risk group might contribute to poorer prognoses [35]. Pearson correlation analysis indicated a negative association between the risk score and various immune cells, including CD8 T cells, cytotoxic cells, dendritic cells (DC), eosinophils, immature DC (iDC), macrophages, mast cells, plasmacytoid DC (pDC), T cells, T follicular helper (TFH), and gamma delta T (Tgd) cells. Conversely, a positive correlation was observed with Th2 cells, known for their immunomodulatory impact on tumor progression. Th2 cells can facilitate tumor cell necrosis by promoting the release of type 2 cytokines within the tumor microenvironment (TME) [36]. The TME, a complex milieu of immune-suppressive and immune-activating cells, varies in the degree of tumor invasiveness across different cancer types or tumor models. Extensive research has underscored the biological relevance of lncRNAs in regulating immunity and the infiltration of immune cells within the non-small cell lung cancer (NSCLC) setting [37]. The progression, metastasis, and onset of LUAD are intimately linked with genetic discrepancies and immune function impairments within the TME [16]. Considering the pivotal role of the immune system in cancer development [17], various immunotherapeutic approaches have been devised to eradicate tumor cells [38], highlighting the importance of understanding immune cell dynamics and their association with risk scores in LUAD. Lastly, through bioinformatics analysis, we created a ceRNA network tailored to LUAD and selected the hub lncRNAs for LUAD. To our knowledge, very few studies have examined lncRNAs derived from substantial sample sets. We offer a technique for locating possible lncRNA biomarkers. Additionally, we identified the LUAD ceRNA network, which will help us better comprehend the etiology of this disease.
The elucidation of MeRlncRNAs in LUAD marks a pivotal advancement in oncology, with profound implications for enhancing prognostication and refining therapeutic strategies for LUAD patients. Our discovery of a novel MeRlncRNA signature stands to improve prognostic models by integrating biomarkers reflective of the disease's molecular underpinnings, thereby improving survival prediction accuracy and deepening our understanding of tumor biology. This facilitates the identification of high-risk patients, enabling more personalized management approaches. By stratifying patients into distinct risk categories, healthcare providers can tailor follow-up and treatment strategies more effectively, optimizing patient outcomes through either intensified interventions for high-risk individuals or reduced treatment for those at lower risk, thus minimizing side effects. Additionally, the association of specific MeRlncRNAs with immune infiltration and the tumor microenvironment opens new therapeutic avenues, potentially enhancing immunotherapy efficacy and offering hope to those unresponsive to conventional treatments. The insights into MeRlncRNA functions could lead to novel therapeutic agents targeting critical pathways in LUAD pathogenesis, offering more specific and less toxic treatment alternatives. Ultimately, our research propels the field towards personalized medicine, promising LUAD patients more precise prognoses and customized treatments that could significantly improve survival rates and quality of life, encapsulating a significant stride towards tailored healthcare in oncology. Notably, the constraints posed by our sample size and potential biases merit attention, as they could influence the generalizability and interpretation of our findings. Despite rigorous methodological approaches, the representation of our sample might limit the extrapolation of our results to broader populations. Additionally, inherent biases, such as selection and measurement bias, could have impacted our analysis. Acknowledging these limitations, we propose that future research endeavors should aim to include larger and more diverse cohorts to enhance the robustness and applicability of findings. Furthermore, implementing advanced statistical techniques to adjust for potential confounders and biases could offer more nuanced insights. Finally, the mechanistic underpinnings of the MeRlncRNA signature's influence on tumor biology and the immune microenvironment in LUAD should be further elucidated through in-depth molecular and cellular studies. The model was constructed by the formula: Riskscore = −0.081695395 × RP11-251M1.1 − 0.022561732 × RP1-78014.1 − 0.071522442 × LINC01936 + 0.012186292 × LINC00511 − 0.10909988 × RP11-750H9.5 + 0.105977347 × CTD-2510F5.4. Figure 1. Identification of differential genes. (A) The red dots in the plot represent up-regulated genes and blue dots represent down-regulated genes with statistical significance; gray dots represent non-DEGs. (B) The heatmap of the top 50 up-regulated and top 50 down-regulated genes in tumor and normal tissue. Figure 2. Identification of methyltransferase-related lncRNAs. (A) Venn diagram of the intersection between DEGs and methyltransferase-related genes; (B) The mRNA-lncRNA network. Figure 3. Establishment of the methyltransferase-related lncRNA signature. (A) A forest plot of prognostic methyltransferase-related lncRNAs identified by univariate Cox and Kaplan-Meier survival analysis; (B, C) LASSO regression analysis.
Figure 4. The methyltransferase-related lncRNA signature was a prognostic biomarker for OS in the TCGA-LUAD cohort. (A) K-M survival curves of OS according to methyltransferase-related lncRNA signature groups in the training cohort; (B) AUC of the time-dependent ROC curve for the risk score in the training dataset; (C) The OS status and OS risk score plots in the training dataset; (D) The heat map of these 6 methyltransferase-related lncRNAs between the high- and low-risk groups in the training dataset; (E) K-M survival curves of OS according to methyltransferase-related lncRNA signature groups in the test cohort; (F) AUC of the time-dependent ROC curve for the risk score in the test dataset; (G) The OS status and OS risk score plots in the test dataset; (H) The heat map of these 6 methyltransferase-related lncRNAs between the high- and low-risk groups in the test dataset. Figure 5. External validation of the risk score in the GSE30219 cohort. (A) The Kaplan-Meier survival analysis; (B) The time-dependent ROC analysis for the risk score in predicting the OS of patients in the GSE30219 cohort; (C, D) The risk score distribution and survival status of patients in the GSE30219 cohort; (E) The heatmap analysis. Figure 7. Independent value of the prognostic risk model. (A) Forest plot of the univariate Cox regression analysis; (B) Forest plot of the multivariate Cox regression analysis; (C) The nomogram established based on the independent prognostic factors. Table 5. The regression coefficients.
Cymbopogon nardus Mediated Synthesis of Ag Nanoparticles for the Photocatalytic Degradation of 2,4-Dichlorophenoxyacetic Acid Advanced extraction methods such as the simultaneous ultrasonic-hydrodistillation (UAE-HD) extraction method have been proven to increase the extraction yield of plant material, yet the application of this method in the preparation of metal nanoparticles has not been studied. In this study, Cymbopogon nardus (C.N) extract obtained via the UAE-HD extraction method was used to synthesize silver (Ag) nanoparticles. XRD and TEM analyses confirm the formation of spherical Ag nanoparticles with sizes ranging between 10 and 50 nm. FTIR spectra suggest the presence of bioactive compounds in the C.N leaves extract that may be responsible for the stabilization and reduction of Ag ions (Ag+) to metallic Ag nanoparticles (Ag0). The TPC analysis successfully proved that a large number of phenolic compounds were greatly involved in the nanoparticle synthesis process. Next, the catalytic activity of the synthesized Ag nanoparticles was tested towards the degradation of the 2,4-dichlorophenoxyacetic acid herbicide, with a remarkable degradation performance of up to 98%. The kinetic study confirms that surface reaction was the controlling step of the catalytic process. Introduction The presence of various herbicides and pesticides in wastewater due to industrialization causes severe environmental and health problems owing to the toxicity of these hazardous organic compounds. 2,4-Dichlorophenoxyacetic acid (2,4-D) is among the most widely used herbicides in the agriculture industry [1]. However, due to its high biological and chemical stability, this herbicide is very difficult to decompose, and it can cause injury to the heart and central nervous system [2]. Therefore, degradation and conversion of this herbicide into harmless minerals is crucial before it can be discharged to the environment [3]. Photocatalytic degradation has been considered one of the most efficient and economical methods to remove organic chemicals from wastewater [4]. This is due to its advantages, including low energy consumption and zero generation of secondary pollution [5].
Nowadays, metal nanoparticles, especially silver (Ag) nanoparticles, have been widely used as photocatalysts for photocatalytic degradation processes. The synthesis of Ag nanoparticles using plant extracts has received increasing attention due to the growing need to develop environmentally friendly and green technologies in material synthesis [6]. Plant extracts are basically enriched with phenolic compounds such as flavonoids, terpenoids, tannins, and gallic acid that act as reducing agents as well as capping and stabilizing agents in the synthesis of Ag nanoparticles [7]. Previously, the synthesis of Ag nanoparticles using Murraya koenigi, Piper betle, and Plumbago zeylanica leaves extracts had been reported in the literature using the classical aqueous extraction method [8]. However, the synthesis of Ag nanoparticles using plant phenolic compounds extracted by the simultaneous ultrasonic-hydrodistillation method is still rare, since most studies only used conventional aqueous extraction to extract plant phenolic compounds for the synthesis of nanoparticles [9][10][11]. The combination of the ultrasonic method with the hydrodistillation technique is known to greatly enhance the yield of phenolic compounds extracted from plant material [12]. This may largely affect the characteristics of the nanoparticles formed, as the synthesis route is significantly influenced by the plant extract compounds. Cymbopogon nardus (C.N) is a plant that belongs to the Poaceae (grass) family and is easily grown in Malaysia. It is widely used in the perfumery, food preservation, and aromatherapy industries. Given that this plant contains various phenolic compounds, C.N has become an interesting alternative to be studied as a synthesis medium for Ag nanoparticle preparation. Phenolic compounds, such as flavonoids, terpenoids, tannins, gallic acid, and sterols, have been reported to be important for the reduction of Ag ions to Ag nanoparticles during the synthesis process as well as for the capping of nanoparticles [13][14][15][16][17]. Therefore, this paper aims to synthesize Ag nanoparticles via an electrochemical method using C.N leaves extract as a medium, in which the C.N leaves were first extracted using the simultaneous ultrasonic-hydrodistillation method. The green Ag nanoparticles were then analyzed by X-ray powder diffraction (XRD), transmission electron microscopy (TEM), and Fourier transform infrared spectroscopy (FTIR). Next, the photocatalytic activity of the synthesized Ag nanoparticles was studied towards the degradation of 2,4-D. Materials The fresh leaves of Cymbopogon nardus (C.N) were obtained from Jabatan Pertanian Negeri Pahang. The Ag and Pt plates of greater than 99% purity were used as electrodes and were obtained from Nilaco, Japan. 2,4-Dichlorophenoxyacetic acid (2,4-D) was purchased from Merck, Malaysia. All chemicals used in this study were of high analytical grade, while all aqueous solutions were prepared using deionized water.
Preparation of Cymbopogon nardus Leaves Extract and Silver Nanoparticles Cymbopogon nardus (C.N) leaves were thoroughly washed using deionized water, dried at 25 °C for 3 days, and ground into powder. 10 g of the powder was immersed in 500 mL of deionized water. The solution was placed in an ultrasonic bath with an ultrasonic frequency of 9 Hz for 30 min. Then, the solution was transferred to a round-bottomed flask in order to carry out the hydrodistillation process for 8 hours. The vaporized mixture in the distillation unit was then routed to a condensation step, whereby the extracted solution was collected in a receiving vessel and stored in a sample bottle. The obtained C.N leaves extract solution was used to synthesize Ag nanoparticles via an electrochemical method. An electrochemical cell consisting of a two-electrode configuration of an Ag plate (2 cm × 2 cm) anode and a platinum plate (2 cm × 2 cm) cathode was used. Electrolysis was conducted at a constant current of 480 × 10⁻³ A and 273 K under an air atmosphere [18]. Then, the product solution mixture was immersed in a water bath at 80 °C before being dried overnight in an oven at 110 °C. The obtained powder was denoted as AgCN. The same electrolysis method was also conducted in the absence of plant extract, and that sample was denoted as AgB. Determination of Total Phenolic Contents The amount of total phenolics in the C.N leaves extract was determined using the Folin-Ciocalteu reagent with gallic acid as a standard. Briefly, 0.5 mL of extract solution was mixed with 2.5 mL of Folin-Ciocalteu reagent (10%). After 5 min, 2 mL of Na2CO3 (0.75%) was added and the mixture was left to react for 2 h at room temperature. The absorbance was measured at 765 nm using a UV-visible spectrophotometer, and the total phenolic content was determined in mg gallic acid equivalent (GAE) per unit mass of sample [19]. Characterization of Ag Nanoparticles UV-visible spectroscopy measurements of the green-synthesized Ag nanoparticles were performed using a Perkin Elmer U-1800 UV-vis spectrophotometer. The crystalline structure of the Ag nanoparticles was investigated using X-ray diffraction (XRD) recorded on a D8 ADVANCE Bruker X-ray diffractometer using Cu-Kα radiation at a 2θ angle ranging from 2° to 90°. The functional groups present in the Ag synthesized using C.N leaves were identified from Fourier transform infrared (FTIR) spectra (Perkin Elmer Spectrum GX FTIR spectrometer) using the KBr method with a scan range of 500-4000 cm⁻¹. The morphology and size of the Ag nanoparticles were examined using a transmission electron microscope (TEM) (JEOL JEM-2100F). Photocatalytic Degradation of 2,4-D The photocatalytic activity of the Ag nanoparticles was evaluated for the photodegradation of 2,4-D solution under UV light. The experiments were carried out by adding Ag nanoparticles (0.01 g/L) into 500 mL of 2,4-D solution (10 mg/L) in a batch reactor fitted with UV lamps (4 × 9 W; 254 nm) and a cooling system. The suspension was stirred constantly at 700 rpm for 30 min in the dark to achieve adsorption-desorption equilibrium, and then the reaction mixture was irradiated for 3 hours. At regular intervals of time, 4 mL of the suspension was withdrawn and centrifuged at 13,000 rpm for 10 min. The solution was monitored using a UV-vis spectrometer to measure the absorbance at a wavelength of 227 nm.
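As a hedged illustration of how the gallic acid standard in the Folin-Ciocalteu TPC assay described above is typically used, the following Python sketch fits a linear calibration curve and converts a sample absorbance into a GAE concentration; all absorbance values are hypothetical, not data from this work.

```python
import numpy as np

# Minimal sketch of a gallic acid calibration for the Folin-Ciocalteu TPC assay
# described above. The calibration absorbances and the sample reading are
# hypothetical values, not data from this study.
std_conc = np.array([0, 25, 50, 100, 200])           # gallic acid (mg/L)
std_abs  = np.array([0.02, 0.15, 0.29, 0.57, 1.12])  # absorbance at 765 nm

slope, intercept = np.polyfit(std_conc, std_abs, 1)  # linear calibration fit

sample_abs = 0.45                                    # assumed extract absorbance
gae_mg_per_L = (sample_abs - intercept) / slope      # concentration in GAE (mg/L)
print(f"TPC = {gae_mg_per_L:.1f} mg GAE/L of extract")
```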
X-ray Diffraction (XRD) Analysis The XRD pattern of the synthesized Ag nanoparticles is shown in Figure 1. Both AgCN and AgB demonstrated peaks at 38.68°, 44.1°, 64.11°, and 77.4°, corresponding to the (111), (200), (220), and (222) planes that could be indexed to the standard phase of metallic silver (JCPDS file no. 89-3722) [20]. The XRD pattern of AgCN exhibits some additional peaks that may be attributed to the presence of phenolic compounds from the leaves extract, which may be responsible for the stabilization of the Ag nanoparticles [21]. Meanwhile, the XRD pattern of AgB shows the absence of any impurity peak, indicating the purity of the prepared Ag [22]. The average sizes of the crystals in each of the samples were determined via Scherrer's formula in Equation (1): D = kλ/(β cos θ) (1) where D stands for the crystallite size of the powder, k is Scherrer's constant (0.9), λ is the X-ray wavelength of 0.1541 nm, θ is the Bragg diffraction angle, and β is the full width at half maximum (FWHM) intensity of the (111) plane in radians. The FWHM can be determined by taking the highest point of the (111) peak and walking along the slopes on both sides until they cross half that maximum value; the difference in abscissa (x-axis) between these two points is the FWHM. After that, this difference in x-axis (Δx), in degrees, was multiplied by π and divided by 180 to convert it to radians (β = (Δx × π)/180) [23]. Based on the calculated values, the crystallite size of AgCN was found to be 8.40 nm, while AgB displayed a massive size (83.81 nm). This result might be due to the phenolic compounds present in the C.N leaves extract encapsulating the surface of the AgCN catalyst, keeping the AgCN particles away from each other to prevent aggregation and subsequently control the growth of the particles [13]. However, the large size of the AgB nanoparticles was obtained due to the absence of phenolic compounds to encapsulate and control the growth of the nanoparticles [24]. Transmission Electron Microscopy (TEM) The size and morphology of the Ag nanoparticles were examined using the TEM images shown in Figure 2. From these images, it was confirmed that the Ag nanoparticles were predominantly spherical in shape. Figure 2(A) demonstrates well-dispersed Ag nanoparticles ranging between 5 and 20 nm without any aggregation. Compared to AgCN, the TEM image in Figure 2(B) revealed much larger AgB nanoparticles with an average size of around 50-100 nm, with particle agglomeration observed in the morphology image. This may be due to the phenolic compounds present in the C.N leaves extract that are responsible for capping the AgCN nanoparticles, which then restricts the growth of the nanoparticles [25]. A previous study also showed that Ag synthesized using Coccinia grandis leaves extract produced small particles ranging between 20 and 30 nm [26]. Remarkably, this result is also in agreement with the XRD analysis in terms of the size determination.
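The Scherrer estimate described in the XRD analysis above can be reproduced with a short Python sketch. The FWHM value below is an assumed illustration (chosen so the result lands near the reported 8.40 nm for AgCN); it is not a measured value from this work.

```python
import numpy as np

# Scherrer crystallite-size estimate, following the procedure described above.
# The FWHM value is an assumed illustration, not a measured value.
k = 0.9                      # Scherrer constant
wavelength_nm = 0.1541       # Cu X-ray wavelength (nm)
two_theta_deg = 38.68        # (111) peak position (deg), from the XRD pattern
fwhm_deg = 1.0               # assumed FWHM of the (111) peak (deg)

theta = np.radians(two_theta_deg / 2.0)            # Bragg angle in radians
beta = np.radians(fwhm_deg)                        # beta = (dx * pi) / 180

D = (k * wavelength_nm) / (beta * np.cos(theta))   # Scherrer formula, Eq. (1)
print(f"Crystallite size D = {D:.2f} nm")          # ~8.42 nm
```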
Fourier Transform Infrared Spectroscopy (FTIR) The FTIR analysis was used to determine the organic compounds present on the nanoparticles and their involvement in the reduction of Ag ions (Ag+) to metallic Ag nanoparticles (Ag0). The FTIR spectra of the C.N leaves extract, AgCN, and AgB are illustrated in Figure 3. The peak at 3293 cm⁻¹ that corresponded to the phenol -OH stretching in the C.N leaves extract shifted to a higher frequency of 3300 cm⁻¹, which may be due to the involvement of the -OH group during the reduction of Ag ions to Ag nanoparticles [27]. Another peak was present at 1607 cm⁻¹ in the C.N spectrum, attributed to the C=O stretching vibration of the carbonyl group. However, this peak disappeared in the AgB spectrum and shifted to 1605 cm⁻¹ in the AgCN spectrum, suggesting the binding of the C=O functional group with the Ag nanoparticles [28]. The peak at 1083 cm⁻¹, which is attributed to the ether linkage (C-O or C-O-C) stretching vibrations, was observed in AgCN, revealing that the phenolic compounds in the C.N leaves extract have been successfully adsorbed on the surface of the Ag nanoparticles [29,30]. A new peak appeared at 483 cm⁻¹ in the AgCN and AgB spectra, confirming the formation of Ag nanoparticles, while the small peak at 1318 cm⁻¹ refers to the (C-OH) group [31]. Photocatalytic Activity The photocatalytic activity of the AgCN and AgB nanoparticles was tested on the degradation of 2,4-D under UV light irradiation. The 2,4-D degradation percentage (%) obtained at different intervals of time was calculated using Equation (2): Degradation (%) = ((C0 − Ct)/C0) × 100 (2) where C0 refers to the initial concentration of the reactant and Ct is the reactant concentration after t hours of exposure to the light source [32]. The results revealed that the degradation of 2,4-D escalated as the reaction time increased, with AgCN reaching up to 98% degradation. This is mainly owed to the leaves extract, which plays a crucial role as a capping agent and successfully produces diminutive AgCN particles that subsequently have a high catalytic activity towards the photodegradation of 2,4-D [33,34]. This result is also in agreement with previous studies, which concluded that the size of the nanoparticles has a significant effect on the photocatalytic degradation of 2,4-D [35]. However, the AgB sample shows a much lower degradation percentage of 2,4-D (56%), which reflects the absence of plant extract as a capping agent in the synthesized nanoparticles. Hence, the nanoparticles produced were much larger, resulting in low photocatalytic activity. In order to further illustrate the crucial role of the plant extract in Ag nanoparticle synthesis, the total phenolic content (TPC) of the plant extract was also determined. It was remarkably found that the AgCN nanoparticles contain 6927.56 mg/kg of phenolic compounds. The large phenolic content may be due to the simultaneous ultrasonic-hydrodistillation extraction method, which provided a much higher yield compared to conventional aqueous extraction [12]. These results may be explained by the cavitation phenomena and mechanical mixing effects [12,36]. During the propagation of ultrasonic waves in ultrasound-assisted extraction, cavitation bubbles are generated at the surface of the solid matrix, causing a disruption of the plant cell walls. Therefore, the extractable compounds were released with the increasing contact surface area between the solvent and the plant material [37]. Consequently, the TPC analysis successfully proved that a large number of phenolic compounds are greatly involved in the nanoparticle synthesis process and also significantly assist in the photocatalytic activity. Therefore, it can be concluded that the presence of plant extract obtained from the simultaneous extraction method, with a large number of phenolic compounds, was essential for efficient nanoparticle synthesis and also for the degradation of 2,4-D.
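For completeness, a tiny Python sketch of Equation (2) follows; the initial concentration matches the 10 mg/L used in the experiments above, while the residual concentration is an assumed value chosen to reproduce the reported 98% figure.

```python
# Degradation percentage from Eq. (2); Ct is an illustrative assumption.
C0, Ct = 10.0, 0.2   # mg/L: initial 2,4-D (from the methods) and assumed final
degradation = (C0 - Ct) / C0 * 100
print(f"Degradation = {degradation:.0f} %")   # 98 %
```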
Kinetic Study The Langmuir-Hinshelwood (L-H) kinetics model is the most commonly employed kinetic expression to explain the kinetics of heterogeneous catalytic processes. Based on the Langmuir-Hinshelwood (L-H) model illustrated in Equation (3) [38], the degradation rate of 2,4-D was studied, and the linear plot of ln(C0/Ct) vs. time is shown in Figure 5(A): r = kr KLH C0 / (1 + KLH C0) (3) where r is the initial photocatalytic degradation rate (mg·L⁻¹·min⁻¹) of 2,4-D, kr is the apparent reaction rate constant (mg·L⁻¹·min⁻¹), C0 is the initial concentration of 2,4-D (mg·L⁻¹), and KLH is the adsorption equilibrium constant (L·mg⁻¹) [39]. In cases where the chemical concentration C0 is small, the equation can be rearranged simply into an apparent first-order equation, as illustrated in Equation (4) [40]: ln(C0/Ct) = kr KLH t = kapp t (4) where kr KLH = kapp, C0 is the initial concentration of 2,4-D (mg·L⁻¹), and Ct is the concentration of 2,4-D at time t. The degradation rate was also deduced as shown in Equation (5): r0 = kapp C0 (5) From the slopes in Figure 5(A), the values of kapp were determined and r0 was calculated. Based on the tabulated data in Table 1, the graph of 1/r0 vs. 1/C0 was plotted. In addition, the parameters kr and KLH can also be determined by linearizing Equation (3), as shown in Equation (6): 1/r0 = 1/kr + 1/(kr KLH C0) (6) The plot of 1/r0 vs. 1/C0 in Figure 5(B) gives a straight line, proving that the Langmuir-Hinshelwood (L-H) kinetics model is appropriate for the degradation of 2,4-D using Ag nanoparticles in leaves extract. From the graph, the values of the intercept, 1/kr, and the slope, 1/(kr KLH), were determined. Since the value of kr (78.125 mg·L⁻¹·min⁻¹) is larger than KLH (0.955 L·mg⁻¹), it is suggested that the reaction occurs at the surface of the catalyst [41]. Hence, these results signify that the AgCN nanoparticles were capable of increasing the rate of reaction for efficient photodegradation of 2,4-D. Reusability Study The reusability and recovery of the AgCN catalyst have been studied for five consecutive runs in the degradation of 2,4-D. As presented in Figure 6, the catalyst revealed desirable reusability with only a minor reduction in its activity, and it could still be reused for a fifth consecutive cycle after being separated from the reaction solution by filtration, washed several times with deionized water, and dried in the oven. The result revealed an overall 19% loss in 2,4-D degradation after the fifth cycle, demonstrating the stability and reusability of this catalyst. Referring to Jusoh et al. [18], the photocatalytic efficiency declined due to the decreasing active sites of the catalyst after adsorption of 2,4-D onto the Ag catalyst surface.
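Tying the kinetic procedure above together, the following Python sketch runs the two-step fit (Equation (4) per run, then the linearized Equation (6) across initial concentrations); the concentration-time profiles are synthetic illustrations, not the data behind Table 1 or Figure 5.

```python
import numpy as np

# Sketch of the two-step Langmuir-Hinshelwood fitting procedure described in
# the kinetic study above. All concentration-time data are synthetic
# illustrations, not measurements from this work.
def kapp_from_run(t_min, C):
    """Apparent first-order constant from ln(C0/Ct) = kapp * t (Eq. 4)."""
    y = np.log(C[0] / C)
    kapp, _ = np.polyfit(t_min, y, 1)
    return kapp

t = np.array([0.0, 30, 60, 90, 120, 150, 180])      # irradiation time (min)
runs = {  # initial concentration (mg/L) -> synthetic concentration profile
    5.0:  5.0  * np.exp(-0.020 * t),
    10.0: 10.0 * np.exp(-0.015 * t),
    20.0: 20.0 * np.exp(-0.010 * t),
}

C0 = np.array(sorted(runs))
r0 = np.array([kapp_from_run(t, runs[c]) * c for c in sorted(runs)])  # r0 = kapp*C0 (Eq. 5)

# Linearized L-H (Eq. 6): 1/r0 = 1/kr + (1/(kr*KLH)) * (1/C0)
slope, intercept = np.polyfit(1.0 / C0, 1.0 / r0, 1)
kr = 1.0 / intercept                 # surface reaction rate constant
KLH = intercept / slope              # adsorption equilibrium constant
print(f"kr = {kr:.3f} mg/(L*min), KLH = {KLH:.3f} L/mg")
```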
Conclusion The Ag nanoparticles were successfully synthesized via an electrochemical method using Cymbopogon nardus leaves extract as a medium. The phenolic compounds present in the C.N leaves extract play an important role as stabilizing and capping agents as well as reducing agents. The green-synthesized Ag nanoparticles ranged between 10 and 50 nm and were found to be spherical in shape, as confirmed by XRD and TEM analyses. The Fourier transform infrared (FTIR) spectroscopy results confirmed the occurrence of the bioactive functional groups required for the reduction of the Ag ions. The green-synthesized Ag nanoparticles showed strong photocatalytic behavior in the degradation of toxic chemicals, with 98% of 2,4-D degraded under UV light. The TPC analysis revealed that the AgCN nanoparticles contain a large phenolic content (6927.56 mg/kg), which may be due to the simultaneous ultrasonic-hydrodistillation extraction method. The kinetic study confirms that the reaction process occurred on the catalyst surface, and the catalyst was still stable after five cycles. These findings suggest that Cymbopogon nardus leaves extract is remarkably important for efficient nanoparticle synthesis and also for the degradation of 2,4-D.
Indicators of Sustainable Entrepreneurial Ecosystems The purpose of this paper is to develop a system of indicators for assessing the effectiveness of entrepreneurial ecosystems based on platform solutions. These indicators can further be used as a mechanism for tracking the ongoing transformations in business environment. Scientific publications, normative legal acts, and analytical materials of Russian, foreign and international organizations served as a methodological basis of the research. The proposed principles of forming a system of indicators take into account the possibility of achieving the objectives of the entrepreneurial ecosystem and increasing the value created within the ecosystem. INTRODUCTION Global business environment is now shifting to platform solutions and ecosystems that provide new opportunities for sustainable development. The ecosystem is an example of multilateral cooperation that cannot be decomposed into several bilateral interactions. In an ecosystem, the analysis of bilateral relations of actors can lead to false conclusions about the effectiveness or efficiency of interactions occurring in several directions at once, and it is impossible to isolate any actors and analyze only the selected interactions [1]. To assess the performance of an ecosystem it is necessary to have a system of indicators assessing both individual results of the participating actors, and the overall effectiveness of interaction within the ecosystem. Digital platforms often act as a central element of the value creation process through organizing and coordinating the process of multilateral interaction between businesses within the same ecosystem. An important goal for modern ecosystem interaction is to support the sustainability of business processes, create value for businesses and satisfy consumer demand without compromising the livelihood of the future generations. The analysis of scientific publications shows that the business environment or the context for creating consumer value is undergoing radical transformations today, making obsolete the established models and constructs of business environment analysis. The complex nature of the business environment is analysed by the market theories, sociology, and various concepts of strategic management [ The ecosystem concept considers not only actors directly involved in the value chain, such as suppliers or customers, but also all actors who form the value chain, even indirectly. The ecosystem concept includes tangible and intangible assets such as infrastructure, institutions, knowledge and network effects of the interaction. Though a significant number of scientific publications are devoted to the issues of assessing the sustainability of entrepreneurial systems of various scales [10] [11], there is no clear evidence of a set of indicators to assess interaction and added value for a business to become part of the ecosystem. The purpose of this research is to develop a system of indicators for assessing the effectiveness of entrepreneurial ecosystems based on platform solutions, which could become a mechanism for tracking the ongoing transformations in the business environment. MATERIALS AND METHODS Using research publications and analytical reports of consulting companies data was collected as to the value created by platform-based ecosystems for their participants, the ecosystem itself, consumers and society at large. Results are presented in Table 1. 
Further on this value was translated into indicators to measure the progress towards the goals set. Another round of analysis was focused on the risks involved in ecosystem participation, comparing the benefits of interaction with the risks of additional regulation and governance. Both research and review publications from ScienceDirect by Elsevier analysing the goals, process, and results of network interaction within ecosystems were collected. Finally, the results were set against the triple objectives of sustainable development to identify ways to measure intra-network interaction between ecosystem actors. The creation of entrepreneurial networks has become not only a tool for attracting, but also a mechanism for retaining customers, primarily on the basis of increasing the competitiveness of both network participants and the whole ecosystem. Impact factors influencing the increase of the entrepreneurial ecosystem effectiveness are being discussed by researchers in different sectors of economy [10] [11][12] [13]. The recognition of the ecosystem concept and its rapid spread in economics and management can be explained by the need for new approaches to economic analysis at the aggregate level, the possibility of considering the interrelationships of the management object with various actors and interaction of actors within entrepreneurial ecosystems [14][15] [16]. To conduct a core value alignment analysis, it is necessary to identify (1) a list (structure) of core values; (2) shared values among network actors; (3) a set of core values related to one organization that have a positive or negative impact on another set of core values related to another organization [17], see Table 1. The choice of indicators for assessing the interaction between actors in the entrepreneurial ecosystem is a topic of active scientific discourse. Currently, research on this issue is conducted from two different angles: the position of ecosystem actors [18] or ecosystem leaders [19], and the sustainable development of the ecosystem itself. The system of indicators should reflect the development of connections and interactions of actors within the platform ecosystem [20] [21], its sustainable development not only in economic, but also in social and environmental aspects [22][23] [24] [21], as well as reflect the transformation processes in the distributed use economy represented by the platform based ecosystems [25]. RESULTS AND DISCUSSION When evaluating the effectiveness of entrepreneurial ecosystems, indicators should reflect the specific characteristics of the ecosystem and its structure ( Figure 1). There are independent and dependent focal networks with different level of independence of their actors. Independent focal networks have a high degree of independence in financial terms; the self-organization effect is realized due to the active information and economic transactions, which allow forming and achieving common goals. To achieve the goals of sustainable development in economic, social and environmental aspects, such a structure compared to the dependent focal network has less flexibility at the initial stage of development due to differences in the scale of companies, their corporate cultures, quality of goods and services, technology, as well as the propensity of actors to fractiousness. 
However, the acceptance of compromise decisions through coordination and general control, and the development of interaction within a set of uniform requirements, allow independent ecosystems to function effectively. Dependent focal networks are formed of actors completely dependent on the organizer of the ecosystem platform, or partly dependent when the franchising business model is applied. For the purposes of sustainable development, such a model has advantages in the formation of pricing, social or environmental policies. However, if such networks dominate the market, there is a threat of limiting access to potential customers for companies outside the ecosystem, thus violating the rules of competition. Smaller players or new market entrants with growth potential, providing goods and services of higher quality and creating unique value propositions, are in most cases unable to compete with the platform ecosystem or to reach a level that allows them to compete in an environment dominated by the ecosystems.

Table 2. Problems and risks of entrepreneurial networks based on platform solutions
- Problem: The possibility of monopolizing power functions by a platform organizer or a key actor; in a network structure, this will lead to increased institutionalization, increased structuration, and loss of the principal properties of the network. Risk: The ability of individual actors to manipulate the terms of participation in a network entrepreneurial structure to achieve their own goals.
- Problem: Potentially lower governability. Risk: Loss of management efficiency because of high dependency on information and the speed of its transmission.
- Problem: Time-consuming consensus building. Risk: Disintegration of the network entrepreneurial structure due to the withdrawal of key actors.
- Problem: Need to prepare actors for effective interaction. Risk: High dependency on the performance and operability of platform solutions.

The development of the platform-based ecosystem requires an assessment of the risks of monopolization (Table 2). Consideration of these circumstances informs the choice of indicators; it should take into account the possibility of assessing the effectiveness of interaction within platform solutions for each actor of the network structure, for the ecosystem as a whole, and for the organizer of the platform. For every actor within the ecosystem, it is expedient to carry out the analysis of transaction costs and indicators of development. For companies in different sectors of the economy, indicators may differ, but they should correspond to the sustainable development objectives of the entrepreneurial ecosystem as a whole. The indicators of economic activity of actors within the ecosystem should satisfy inequality (1):

P_c − TI_c > P_p − TI_p    (1)

where P_p is the overall effect of the activity of an economic agent in market transactions; P_c is the overall effect of the activity of an economic agent in a network entrepreneurial structure; TI_p denotes the transaction costs of an economic agent in the context of market interaction; and TI_c denotes the transaction costs of an economic agent in a network entrepreneurial structure. It is obvious that the effect of participation and interaction in an ecosystem should exceed the effect of market transactions; from the position of the ecosystem organizer, this will increase the actors' motivation to interact both horizontally and vertically, and to adopt the values common to the ecosystem. For an entrepreneurial ecosystem as a whole and its participants, it is also necessary to take into account the possibility of achieving the objectives of the ecosystem and increasing the value created within it.
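As a minimal illustration of inequality (1), the sketch below (Python; the figures are hypothetical, not taken from the paper) compares an agent's net effect in ordinary market transactions with its net effect inside a platform ecosystem.

```python
def net_effect(overall_effect: float, transaction_costs: float) -> float:
    """Net effect of an economic agent = overall effect minus transaction costs."""
    return overall_effect - transaction_costs

# Hypothetical figures for one agent (arbitrary monetary units).
market_net = net_effect(overall_effect=100.0, transaction_costs=35.0)     # P_p - TI_p
ecosystem_net = net_effect(overall_effect=110.0, transaction_costs=20.0)  # P_c - TI_c

# Inequality (1): ecosystem participation is rational when its net effect is larger.
print("join ecosystem:", ecosystem_net > market_net)
```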
The system of indicators is based on the following principles:

1. The principle of the trinity of strategic goals of the ecosystem in economic, social and environmental aspects (Figure 2).
2. The principle of decomposing goals across the directions of activity of an enterprise ecosystem, broken down into such perspectives as "Finance", "External consumers", "Business processes" and "Personnel".
3. The principle of solidarity support of the strategic goals of the ecosystem by the actors and the organizer.
4. The principle of development and preservation of the value of ecosystem services (Table 1).

An example showing the formation of a system of indicators in the context of commercial sustainability, as one of the subgoals of the ecosystem, is shown in Figure 3. This group of indicators can be recommended for the actors, the organizer and the ecosystem as a whole, as it fully corresponds to the principles of the trinity of goals, decomposition of goals, solidarity support of the strategic goals of the ecosystem, and sustainable development in the interests of consumers. Organizational sustainability is a characteristic that reflects the activity of the organizer of the ecosystem platform. The system of evaluation of its activity includes […]

CONCLUSIONS

The goal of meeting consumer needs in various sectors of the economy is influenced by the processes of information and communication globalization and digitalization, as well as by specific trends and patterns of each economic sector. The development of the digital economy causes transformations in the technologies, interaction patterns and competitiveness models of enterprises. Networking, the use of online channels, and the distributed use of assets and resources are among the most significant development trends. Network technologies, which have emerged and developed with the spread of mobile devices and the Internet, not only make it easier for users to find goods and services, but also allow companies to collect and accumulate information about purchase history and customers' behaviour. This information can be used to form a personalized offer to the customer and to assess the reliability and importance of a business partner. A significant amount of information is also formed by the ecosystem actors; using the proposed system of indicators, this information makes it possible to assess the effectiveness of actors' participation and interaction in the platform-based ecosystems, as well as the sustainability of the entrepreneurial ecosystem as a whole.
Endophytic fungal association via gibberellins and indole acetic acid can improve plant growth under abiotic stress: an example of Paecilomyces formosus LHL10 Background Endophytic fungi are little known for exogenous secretion of phytohormones and mitigation of salinity stress, which is a major limiting factor for agriculture production worldwide. Current study was designed to isolate phytohormone producing endophytic fungus from the roots of cucumber plant and identify its role in plant growth and stress tolerance under saline conditions. Results We isolated nine endophytic fungi from the roots of cucumber plant and screened their culture filtrates (CF) on gibberellins (GAs) deficient mutant rice cultivar Waito-C and normal GAs biosynthesis rice cultivar Dongjin-byeo. The CF of a fungal isolate CSH-6H significantly increased the growth of Waito-C and Dongjin-byeo seedlings as compared to control. Analysis of the CF showed presence of GAs (GA1, GA3, GA4, GA8, GA9, GA12, GA20 and GA24) and indole acetic acid. The endophyte CSH-6H was identified as a strain of Paecilomyces formosus LHL10 on the basis of phylogenetic analysis of ITS sequence similarity. Under salinity stress, P. formosus inoculation significantly enhanced cucumber shoot length and allied growth characteristics as compared to non-inoculated control plants. The hypha of P. formosus was also observed in the cortical and pericycle regions of the host-plant roots and was successfully re-isolated using PCR techniques. P. formosus association counteracted the adverse effects of salinity by accumulating proline and antioxidants and maintaining plant water potential. Thus the electrolytic leakage and membrane damage to the cucumber plants was reduced in the association of endophyte. Reduced content of stress responsive abscisic acid suggest lesser stress convened to endophyte-associated plants. On contrary, elevated endogenous GAs (GA3, GA4, GA12 and GA20) contents in endophyte-associated cucumber plants evidenced salinity stress modulation. Conclusion The results reveal that mutualistic interactions of phytohormones secreting endophytic fungi can ameliorate host plant growth and alleviate adverse effects of salt stress. Such fungal strain could be used for further field trials to improve agricultural productivity under saline conditions. Background Various crops cultivated in arid or semi-arid regions are frequently exposed to wide range of environmental stresses. Among these, salinity severely affects plant growth and metabolism and hence results in reduced biomass production. Plants have the capability to cope with these stresses through many signal transduction pathways adjusting their metabolism [1][2][3]. These adjustments range from changes in ionic/osmotic levels, stomatal closure to changes in phytohormones and secondary metabolites [4]. Sodium ion toxicity trigger the formation of reactive oxygen species (ROS) such as superoxide (O 2 -), hydrogen peroxide (H 2 O 2 ), and hydroxyl radical (•OH) which can ultimately damage; (i) mitochondria and chloroplasts, (ii) water use efficiency, (iii) photosynthesis, and (iv) nutrients uptake whilst disrupting cellular structures [1,4]. To avoid oxidative damage, plants adapt by de novo synthesis of organic compatible solutes acting as osmolytes. Osmolytes like proline serve a free-radical scavenger stabilize subcellular structures and buffer cellular redox potential under stress [5]. In counteracting oxidative stress antioxidant molecules are also involved as defence strategy. 
Symbioses with beneficial fungi can ameliorate plant growth and its physiological status [6]. Endophytic fungi comprise of fungal symbionts associated with plants living inside tissues without causing any disease symptoms [7][8][9][10][11]. Endophytes have mostly been reported for their behaviour to enhance plant growth as they influence key aspects of plant physiology and host protection against biotic and abiotic stresses [9,10,12]. Besides that, endophytic fungi have been known as an important source of various kinds of bioactive secondary metabolites [8,13]. It has been known recently that some of the strains of endophytic fungi can produce plant hormones especially gibberellins (GAs) [14]. Under extreme environmental conditions, these phytohormone producing endophytic fungi can effect the production of several secondary metabolites like flavonoids [15] along with phytohormones to help the plant to tolerate/avoid stress [8,12,16]. GAs are ubiquitous substances that elicit various metabolic functions required during plants' growth [17,18]. However, little is known about GAs production by endophytic fungi and their role in abiotic stress. Previously, various strains of fungal species including endophytes have been reported to either secrete GAs in their culture medium or have an active GAs biosynthesis pathway. Fungal species like Gibberella fujikuroi, Sphaceloma manihoticola [18], Phaeosphaeria sp., Neurospora crassa [19], Sesamum indicum [20], Phaeosphaeria sp. L487 [21], Penicillium citrinum [14], Chrysosporium pseudomerdarium [22] and Scolecobasidium tshawytschae [23], Aspergillus fumigatus [15] and Penicillium funiculosum [16] have been reported as GAs producers. GAs along with other plant hormones like indole acetic acid (IAA) secreted by fungal endophytes can improve plant growth and crop productivity [24,25]. Aim of the present study was to identify plant hormone (GAs and IAA) secreting endophytic fungal strain and assess its role in host-plant physiology under saline conditions. For this purpose, isolated endophytic fungal strains were initially screened on GAs deficient mutant rice cultivar (Waito-C) and GAs cultivar (Dongjin-byeo) seedlings to differentiate between plant growth promoting/inhibiting and plant hormones producing strain. The best fungal strain identified was examined for its potential role in plant growth under sodium chloride (NaCl) induced salinity stress. To elucidate the mitigation of oxidative stress imposed by NaCl, photosynthesis rate, stomatal conductance, transpiration rate, relative water content (RWC), electrolytic leakage (EL), free proline content, nitrogen assimilation, antioxidant and lipid peroxidation were analyzed. Endogenous ABA and GAs (GA 3 , GA 4 , GA 12 and GA 20 ) were quantified to understand the influence of salt stress and endophytic fungal association on the growth of cucumber plant. Presence or absence of plant growth promoting metabolites in fungal CF was confirmed by performing screening bioassays on gibberellins biosynthesis deficient mutant rice Waito-C and normal GAs cultivar Oryza sativa L. cv. Dongjin-byeo. Waito-C has dwarf phenotype while Dongjin-byeo has normal phenotype. For bioassay experiment, rice seeds were surface sterilized with 2.5% sodium hypochlorite for 30 minutes, rinsed with autoclaved DDW and then incubated for 24 hr with 20-ppm uniconazol (except Dongjin-byeo) to obtained equally germinated seeds. 
Then pre-germinated Waito-C and Dongjin-byeo seeds were transferred to pots having water: agar medium (0.8% w/v) [14] under aseptic conditions. Both the rice cultivars were grown in growth chamber (day/night cycle: 14 hr-28°C ± 0.3;10 hr -25°C ± 0.3; relative humidity 70%; 18 plants per treatment) for ten days. Ten micro-litter of fungal CF was applied at the apex of the rice seedlings. One week after treatment, the shoot length, chlorophyll content and shoot fresh weight were recorded and compared with negative (autoclaved DDW) and positive controls (Gibberella fujikuroi). The wild-type strain of G. fujikuroi KCCM12329, provided by the Korean Culture Center of Microorganisms, was used as positive control. Upon screening results, bioactive fungal strain CSH-6H was selected for further experiments and identification. Fungal DNA isolation, identification and phylogenetic analysis Genomic DNA was extracted from CSH-6H using standard method of Khan et al. [14]. Fungal isolate was identified by sequencing the internal transcribed region (ITS) of rDNA using universal primers: ITS-1; 5'-TCC GTA GGT GAA CCT GCG G-3' and ITS-4; 5'-TCC TCC GCT TAT TGA TAT GC-3'. The BLAST search program (http:// blast.ncbi.nlm.nih.gov) was used to compare the nucleotide sequence similarity of ITS region of related fungi. The closely related sequences obtained were aligned through CLUSTAL W using MEGA version 4.0 software [26] and a maximum parsimony tree was constructed using the same software. The bootstrap replications (1K) were used as a statistical support for the nodes in the phylogenetic tree. The growth parameters i.e. shoot length, shoot fresh and dry weights were measured for harvested cucumber plants, while chlorophyll content of fully expanded leaves were analyzed with the help of chlorophyll meter (SPAD-502 Minolta, Japan). Dry weights were measured after drying the plants at 70°C for 72 h in oven. Total leaf area was measured with Laser Leaf Area meter (CI-203 model, CID Inc., USA). Portable photosynthesis measurement system (ADC BioScientific LCi Analyser Serial No. 31655, UK) was used to calculate the net photosynthetic rate (μmolm -2 s -1 ), transpiration rate (mMm -2 s -1 ) and stomatal conductance (molm -2 s -1 ) per unit leaf area of fully expanded leaves. For each measurement, readings were recorded in triplicates. For endogenous phytohormonal analysis of cucumber plants, the treated samples were immediately frozen in liquid nitrogen and kept until further use at -70°C . Samples were freezed dried in Virtis Freeze Dryer (Gardiner, NY, USA). Microscopic analysis Cucumber roots inoculated with CSH-6H were sectioned and treated with sodium hypochlorite (2.5%) for 10 min for clarification. Experimental conditions were kept aseptic during analysis. Inoculated roots were treated with 20% KOH for 24 h and rinsed with autoclaved DDW. The roots were then acidified with 10% HCl, stained overnight using 0.05% 0.1% acid fuchsin and 95% lactic acid. Finally, the roots were destained in 95% lactic acid for 24 h. The roots pieces were then subjected to light microscope (Stemi SV 11 Apo, Carl Zeiss). The root parts having active colonization were used for re-isolation of the inoculated CSH-6H with the method as described earlier. RWC, EL, proline, nitrogen assimilation, antioxidant and lipid peroxidation Relative water content (RWC) and electrolytic leakage (EL) were measured following González and González-Vilar [27]. Free proline was estimated following Bates et al. [28]. 
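A minimal sketch of the ITS-based identification step described above, using Biopython's NCBI BLAST client (the FASTA file name and the choice of the nt database are assumptions for illustration; the original work used the NCBI web BLAST together with CLUSTAL W and MEGA 4 rather than a script):

```python
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

# Hypothetical FASTA file holding the ITS sequence amplified with ITS-1/ITS-4.
record = SeqIO.read("CSH-6H_ITS.fasta", "fasta")

# Query the NCBI nucleotide database, as the BLAST search in the text does.
result = NCBIWWW.qblast("blastn", "nt", record.seq)
blast_record = NCBIXML.read(result)

# Report the closest hits; high identity to Paecilomyces sp. would be expected here.
for alignment in blast_record.alignments[:5]:
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  {identity:.1f}% identity")
```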
Plant samples were oven-dried at 65°C and were ground to pass through 1-mm mesh sieves and analyzed for N using CNS analyzer (Carlo-Erba NA1500, Carlo Erba Instruments, Milano, Italy). Antioxidant activity was measured on the basis of radical scavenging activity of 1, 1-diphenyl-2-picrylhydrazyl (DPPH) as described Xie et al. [29]. The extent of lipid peroxidation was determined by the method of Ohkawa et al. [30]. The experiments were repeated three times. GAs extraction from fungal CF and cucumber plants To characterize GAs secreted in the pure fungal culture of bioactive endophyte, it was inoculated in Czapek broth (120 ml) for 7 days at 30°C (shaking incubator-120 rpm) as described previously [14,24]. The culture and mycelium were separated by centrifugation (2500xg at 4°C for 15 min). The culture medium (CF; 50 ml) was used to extract and purify GAs as described by Hamayun et al. [22,23]. Briefly, the pH of the CF was adjusted to 2.5 using 6 N HCl and was partitioned with ethyl acetate (EtOAc). Before partitioning, deuterated GAs internal standards (20 ng; [17, 17-2 H 2 ] GA 1 , GA 3 , GA 4 , GA 8 , GA 12 and GA 24 ) were added in the CF. Tritiated GAs i.e. [1, 2-3 H 2 ] GA 9 and [1,2-3 H 2 ] GA 20 were also added (obtained from Prof. Lewis N. Mander, Australian National University, Canberra, Australia). The organic layer was vacuum dried and added with 60% methanol (MeOH) while the pH was adjusted to 8.0 ± 0.3 using 2 N NH 4 OH. Similarly, endogenous GAs from cucumber plants treated with and without endophytic fungus and salinity stress were extracted from 0.5 g of freeze-dried plant samples according to the method of Lee et al. [31]. About 20 ng each of deuterated [17, 17-2 H 2 ] GA 3 , GA 4 , GA 12 and GA 20 internal standards were added. The CF and plant extracts were subjected to chromatographic and mass spectroscopy techniques for identification and quantification of GAs. Chromatography and GC/MS -SIM for hormonal analysis The extracts were passed through a Davisil C18 column (90-130 μm; Alltech, Deerfield, IL, USA). The eluent was reduced to near dryness at 40°C in vacuum. The sample was then dried onto celite and then loaded onto SiO 2 partitioning column (deactivated with 20% water) to separate the GAs as a group from more polar impurities. GAs were eluted with 80 ml of 95: 5 (v ⁄ v) ethyl acetate (EtOAc): hexane saturated with formic acid. This solution was dried at 40°C in vacuum, re-dissolved in 4 ml of EtOAc, and partitioned three times against 4 ml of 0.1 M phosphate buffer (pH 8.0). Drop-wise addition of 2 N NaOH was required during the first partitioning to neutralize residual formic acid. One-gram polyvinylpolypyrrolidone (PVPP) was added to the combined aqueous phases, and this mixture was slurried for 1 h. The pH was reduced to 2.5 with 6N HCl. The extract was partitioned three times against equal volumes of EtOAc. The combined EtOAc fraction was dried in vacuum, and the residue was dissolved in 3 ml of 100% MeOH. This solution was dried on a Savant Automatic Environmental Speedvac (AES 2000, Madrid, Spain). The dried samples were subjected to high performance liquid chromatography (HPLC) using a 3.9 × 300 m Bondapak C18 column (Waters Corp., Milford, MA, USA) and eluted at 1.0 ml/ min with the following gradient: 0 to 5 min, isocratic 28% MeOH in 1% aqueous acetic acid; 5 to 35 min, linear gradient from 28% to 86% MeOH; 35 to 36 min, 86% to 100% MeOH; 36 to 40 min, isocratic 100% MeOH. Fortyeight fractions of 1.0 ml each were collected (Additional file 1). 
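The stepwise MeOH gradient quoted above is easy to express programmatically; the sketch below (Python; a hypothetical helper for planning the run, not part of the original protocol) returns the %MeOH delivered at any time during the 40-min program.

```python
def methanol_fraction(t_min: float) -> float:
    """%MeOH in 1% aqueous acetic acid at time t (min), per the quoted gradient."""
    if t_min < 0 or t_min > 40:
        raise ValueError("gradient program runs from 0 to 40 min")
    if t_min <= 5:                      # 0-5 min: isocratic 28%
        return 28.0
    if t_min <= 35:                     # 5-35 min: linear 28% -> 86%
        return 28.0 + (86.0 - 28.0) * (t_min - 5.0) / 30.0
    if t_min <= 36:                     # 35-36 min: linear 86% -> 100%
        return 86.0 + (100.0 - 86.0) * (t_min - 35.0)
    return 100.0                        # 36-40 min: isocratic 100%

# Example: composition when fraction 20 is collected (1.0 ml/min -> minute 20).
print(methanol_fraction(20.0))  # 57.0 %MeOH
```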
The fractions were then prepared for gas chromatography/mass spectrometry (GC/MS) with selected ion monitoring (SIM) system (6890N Network GC System, and 5973 Network Mass Selective Detector; Agilent Technologies, Palo Alto, CA, USA). For each GAs, 1 μl of sample was injected in GC/MS SIM (Additional file 2). Full-scan mode (the first trial) and three major ions of the supplemented [17-2 H 2 ] GAs internal standards and the fungal GAs were monitored simultaneously whereas the same was done for endogenous GAs of cucumber plants (Supplementary data 2). The fungal CF GAs (GA 1 , GA 3 , GA 4 , GA 8 , GA 9 , GA 12 , GA 20 and GA 24 ) and the endogenous cucumber plant's GAs (GA 3 , GA 4 and GA 12 ) were calculated from the peak area ratios of sample GAs to corresponding internal standards. The retention time was determined using hydrocarbon standards to calculate the KRI (Kovats retention index) value (Additional file 1). The limit of detection was determined for all GAs. GC/ MS SIM limit of detection was 20 pg/ml for fungal CF and plant samples. The data was calculated in nanograms per millilitre (for fungal CF) or nano-grams per grams fresh weight (for cucumber plants) while the analyses were repeated three times. IAA analysis Samples were analysed with a High Performance Liquid Chromatograph (HPLC) system, equipped with a differential ultraviolet (UV) detector absorbing at 280 nm and a C18 (5 μm; 25 × 0.46 cm) column. Mobile phase was methanol and water (80:20 [v/v]) at a flow rate of 1.5 ml/ min. The sample injection volume was 10 μl. Retention times for the analyte peaks were compared to those of authentic internal standards added to the medium and extracted by the same procedures used with fungal cultures. Quantification was done by comparison of peak area [32]. Endogenous ABA analysis The endogenous ABA was extracted according to the method of Qi et al. [33]. The extracts were dried and methylated by adding diazomethane. Analyses were done using a GC-MS SIM (6890N network GC system, and 5973 network mass selective detector; Agilent Technologies, Palo Alto, CA, USA). For quantification, the Lab-Base (ThermoQuset, Manchester, UK) data system software was used to monitor responses to ions of m/z 162 and 190 for Me-ABA and 166 and 194 for Me-[ 2 H 6 ]-ABA (supplementary data 2). Statistical analysis The analysis of variance and multiple mean comparisons were carried out on the data using Graph Pad Prism software (version 5.0, San Diego, California USA). The purpose of these tests was to identify statistically significant effects and interactions among various test and control treatments. The significant differences among the mean values of various treatments were determined using Duncan's multiple range tests (DMRT) at 95% CI using Statistic Analysis System (SAS 9.1). Effect of fungal CF on Waito-C and Dongjin-byeo rice growth We isolated 31 endophytic fungi from 120 roots of cucumber plants suggesting an abundance level of 3.87 endophytes per root sample. These fungi were grown on Hagem media plates for seven days. The pure culture plates were grouped on the basis of colony shape, height and colour of aerial hyphae, base colour, growth rate, margin characteristics, surface texture and depth of growth into medium [34]. The morphological trait analysis reveals that only nine endophytes were different. The CF of these nine different endophytes were assayed on Waito-C and Dongjin-byeo rice seedlings to differentiate between growth stimulatory or inhibitory and plant hormones producing strains. 
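The peak-area-ratio quantification against deuterated internal standards described in the methods above amounts to simple isotope-dilution arithmetic. A hedged sketch follows (Python; the peak areas are hypothetical and chosen only to reproduce a plausible value, with 20 ng of internal standard spiked into 50 ml of CF as stated in the text):

```python
def quantify(area_analyte: float, area_standard: float,
             standard_ng: float = 20.0, volume_ml: float = 50.0) -> float:
    """Isotope dilution: analyte amount = (peak area ratio) * spiked standard amount."""
    ng_total = (area_analyte / area_standard) * standard_ng
    return ng_total / volume_ml  # concentration in ng/ml

# Hypothetical GC/MS-SIM peak areas for GA4 and its [17,17-2H2]GA4 standard;
# these areas are invented so the result matches the reported 18.2 ng/ml scale.
print(f"GA4: {quantify(area_analyte=4.55e6, area_standard=1.0e5):.1f} ng/ml")
```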
The growth attributes of dwarf Waito-C (GAs mutant dwarf cultivar) and Dongjin-byeo (normal GAs cultivar) rice seedlings were recorded after a week of treatment and the data is given in Table 1 and Table 2. The results showed that CF application of CSH-6H to Waito-C and Dongjin-byeo rice seedlings exhibit significant growth promotion as compared to the CF of G. fujikuroi and DDW applied control rice seedlings. Endophyte, CSH-6H significantly increased the shoot growth of dwarf Waito-C rice in comparison controls. The CSH-6H applied CF exhibited higher chlorophyll content and shoot fresh weight of rice seedlings than controls (Table 1). A similar growth stimulatory trend of CSH-6H was observed on the Dongjin-byeo rice seedling with active GAs biosynthesis pathway and normal phenotype ( Table 2). In other growth promoting strain, CSH-7C and CSH-7B improved the shoot growth, fresh weight and chlorophyll content of Waito-C and Dongjin-byeo rice seedlings but it was not significantly different than the CF of G. fujikuroi (Table 1 and Table 2). In growth suppressive strains, CSH-1A inhibited the growth of Waito-C and Dongjinbyeo as compared other endophytic fungal strains. Upon significant growth promoting results of CSH-6H, it was selected for identification and further investigation. Identification and phylogenetic analysis of bioactive endophyte After DNA extraction and PCR analysis of ITS regions, phylogenetic analysis of CSH-6H was carried out [14,22,23]. Maximum parsimony (MP) consensus tree was constructed from 16 (15 references and 1 clone) aligned partial ITS regions sequences with 1 K bootstrap replications. Selected strains were run through BLAST search. Results of BLAST search revealed that fungal strain CSH-6H has 100% sequence similarity with Paecilomyces sp. In MP dendrogram CSH-6H formed 86% bootstrap support with Paecilomyces formosus (Figure 1). The sequence was submitted to NCBI GenBank and was given accession no. HQ444388. On the basis of sequence similarity and phylogenetic analysis results, CSH-6H was identified as a strain of P. formosus LHL10. Bioactive endophytic fungal CF analysis for phytohormones The CF of bioactive P. formosus (CSH-6H) was analysed for its potential to produce GAs in the growing medium. We detected 8 different physiologically active and nonactive gibberellins ( Figure 2) using GC/MS selected ion monitor. Among biologically active GAs, GA 1 (1.3 ng/ml), GA 3 (1.1 ng/ml) and GA 4 (18.2 ng/ml) were found in the various HPLC fractions (Additional file 1). Among physiologically in-active GAs, GA 8 (37.2 ng/ml), GA 9 (5.5 ng/ml), GA 12 (1.4 ng/ml), GA 20 (2.2 ng/ml) and GA 24 (13.6 ng/ml) were present in the CF. The quantities of bioactive GA 4 and GA 8 were significantly higher than the other GAs. Besides GAs, we also found IAA in the growing culture medium of P. formosus. The quantity of IAA was 10.2 ± 1.21 μg/ml. Effect of P. formosus association on cucumber growth in salinity stress To assess the role of P. formosus in cucumber plant growth under saline soil condition, the endophyte was inoculated to the host plants. After three weeks of endophyte and host-plant association, NaCl was applied to induce salinity stress. The results reveal that the phytohormone producing P. formosus significantly increased the host-plant growth under normal growth conditions. The endophyte symbiosis increased the shoot length up to 6.89% as compared to non-inoculated control plants (Figure 3). Upon salinity stress of 60 mM, the plants inoculated with P. 
formosus had 4.5% higher shoot growth as compared to the non-inoculated control. When exposed to 120 mM NaCl, endophyte-inoculated plants had 15.9% higher shoot length than control plants. P. formosus inoculation also enhanced the chlorophyll content, shoot fresh and dry weights, photosynthesis rate, stomatal conductance and transpiration rate under salinity stress in comparison to the non-inoculated control plants (Table 3). Light microscopic analysis likewise showed the active association and habitation of P. formosus inside the plant's roots (Figure 4a-c). Fungal hyphae (brownish) were observed in the cucumber plant roots (Figure 4a). The hyphae extend from the epidermal region into the cortex cells, where they form a dense network. P. formosus was also observed in the endodermal cells occupying the pericycle region (Figure 4b). In the pericycle region, hyphae underwent further morphological changes, switching to yeast-like cells or conidia (Figure 4c). The fungus was successfully re-isolated from salinity-stressed plants and again identified through sequencing of the ITS regions and phylogenetic analysis as mentioned earlier, confirming that P. formosus is responsible for establishing an ameliorative interaction with host plants during stress conditions.

Plant water potential and stress mitigation

Relative water content was not significantly different between P. formosus inoculated plants and non-inoculated plants. Under salinity stress (60 and 120 mM), the endophyte-inoculated cucumber plants showed significantly higher water potential as compared to the non-inoculated control plants (Figure 5a). The higher RWC indicates the beneficial endophytic association and the rescuing role of P. formosus in curtailing the adverse effects of salinity stress. The electrolytic leakage (EL) from the cellular apparatus was similar in endophyte-associated and endophyte-free plants. However, upon salinity stress (60 and 120 mM), the non-inoculated control plants released significantly more electrolytes than the P. formosus associated plants (Figure 5b). This suggests that the endophyte interaction counteracted the adverse effect of salinity by reducing the damage to the cellular membranes of the plants. The mitigating response of P. formosus association under salinity stress was further assessed through the extent of lipid peroxidation. MDA content was significantly lower in endophyte-associated plants than in controls without NaCl stress. Upon salinity stress (60 and 120 mM), we again observed significantly reduced levels of the lipid peroxidation product (MDA) in the endophyte-inoculated plants compared to the control plants (Figure 5c). Free proline content was not significantly different between cucumber plants inoculated with P. formosus and the control. When cucumber plants were treated with 60 or 120 mM NaCl stress, P. formosus inoculated plants had higher proline content in comparison to controls (Figure 5d). Regarding the maintenance of a high nitrogen supply for plant growth during salinity stress, we observed significantly higher nitrogen assimilation in endophyte-associated plants than in endophyte-free control plants, both with (60 and 120 mM) and without salinity stress (Figure 5e). DPPH assays showed that P. formosus inoculated plants exhibited higher oxidant radical scavenging by producing more antioxidants than control plants.
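For reference, RWC and EL are conventionally computed from fresh, turgid and dry weights and from electrical conductivities; the sketch below uses the standard textbook formulas (an assumption here, since the paper only cites González and González-Vilar [27] for the protocol) with hypothetical readings.

```python
def relative_water_content(fresh_w: float, turgid_w: float, dry_w: float) -> float:
    """RWC (%) = (FW - DW) / (TW - DW) * 100 -- standard definition (assumed)."""
    return (fresh_w - dry_w) / (turgid_w - dry_w) * 100.0

def electrolyte_leakage(ec_initial: float, ec_total: float) -> float:
    """EL (%) = EC before / EC after complete tissue disruption * 100 (assumed)."""
    return ec_initial / ec_total * 100.0

# Hypothetical leaf-disc readings, not data from the study.
print(relative_water_content(fresh_w=0.82, turgid_w=1.00, dry_w=0.12))  # ~79.5 %
print(electrolyte_leakage(ec_initial=35.0, ec_total=110.0))             # ~31.8 %
```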
After 60 and 120 mM NaCl application, the level of antioxidant production was significantly higher in P. formosus treated plants in comparison to non-inoculated control plants (Figure 5f). Effect of P. formosus on endogenous ABA and GAs under stress Our results showed that the stress responsive endogenous ABA content in fungi inoculated plants was not significantly different than control plants. Upon NaCl stress treatments (60 and 120 mM) the cucumber plants with P. formosus association had significantly lower level of ABA content as compared to control plants ( Figure 6). In case of endogenous GAs content, we analyzed the GA 12 , GA 20 , GA 4 and GA 3 of cucumber plants treated with or without salinity stress and P. formosus. We found that GA 12 synthesis is almost same in both endophyte-associated and control plants under normal growth conditions. However, upon salinity stress (60 and 120 mM), the GA 12 was significantly increased in endophyte-associated plants than the endophyte-free control plants (Figure 7). Similarly, GA 20 was not significantly different in endophyte inoculated plants and control plants. After NaCl treatments (60 and 120 mM), the GA 20 synthesis by cucumber plants inoculated with endophyte was significantly higher as compared to control plants (Figure 7). The GA 4 content was significantly up-regulated in P. formosus associated plants than the control plants under normal and salinity stress (60 and 120 mM) conditions. A similar trend was also observed for GA 3 contents (Figure 7). Discussion We used screening bioassays and hormonal analysis of endophytic fungal CF in order to identify bioactive fungal strains, because fungi has been an exploratory source of a wide range of bioactive secondary metabolites [8,25]. In screening bioassays, rice cultivars were used as rice can easily grow under controlled and sterilized conditions using autoclaved water-agar media. Waito-C and Dongjin-byeo rice seedlings grown in hydroponic medium can help in assessment of CF obtained from endophytic fungi [14]. Waito-C is a dwarf rice cultivar with mutated dy gene that controls the 3β-hydroxylation of GA 20 to GA 1 . The Waito-C seeds were also treated with GAs biosynthesis inhibitor (uniconazol) to further suppress the GAs biosynthesis mechanism [35]. Dongjin-byeo, on the other hand, has normal phenotype with active GAs biosynthesis pathway [35]. Since Waito-C and Dongjin-byeo growth media were devoid of nutrients, therefore, the sole effect of CF on rice was easily determined. Current study confirmed earlier reports stating that rice shoot growth stimulation or suppression can be attributed to the activity of plant growth promoting or inhibiting secondary metabolites present in the fungal CF [22,23]. The effect of CF from P. formosus was similar to that of G. fujikuroi, which possess an active GAs biosynthesis pathway [18]. Waito-C and Dongjin-byeo growth promotion triggered by the CF of P. formosus was later rectified as it contained physiologically active GAs and IAA. Upon significant growth promotive results in comparison to other fungal isolates, P. formosus was selected for identification and further investigation. The endophytes releasing plant growth hormones, in present case, GAs and IAA can enhance plant growth. In current study, detection of GAs in the growing medium of P. formosus suggests that during interaction GAs were secreted causing growth promotion and also conferred ameliorative capacity to cucumber plants under salinity stress. 
Previous reports also confirm that fungal endophytes produce phytohormones. For instance, Hassan [24] reported that Aspergillus flavus, A. niger, Fusarium oxysporum, Penicillium corylophilum, P. cyclopium, P. funiculosum and Rhizopus stolonifer have the capacity to produce GAs, while F. oxysporum can secrete both GAs and IAA. Similarly, Khan et al. [16] reported that P. funiculosum can produce bioactive GAs and IAA. Phaeosphaeria sp. L487 was also found to possess GAs biosynthesis apparatus and can produce GA 1 [21]. The CF of our fungal isolate also contained IAA, which is a molecule synthesized by plants and a few microbes [32], and has been known for its active role in plant growth regulation [36], while its biosynthesis pathway has been elucidated in bacterial strain [37]. The presence of IAA in P. formosus clearly suggests the existence of IAA biosynthesis pathway as reported for some other classes of fungi by Tuomi et al. [38]. Plants treated with endophytes are often healthier than those lacking such interaction [7][8][9][10][11][12][13][14], which may be attributed to the endophyte secretion of phytohormones such as IAA [16,36] and GAs [14][15][16]18,[21][22][23][24]. In endophyte-host symbioses, secondary metabolites may be a contribution of the endophytic partner for such mutualistic relationship [9]. Endophytic fungi residing in root tissues and secreting plant growth regulating compounds are of great interest to enhance crop yield and quality. Such growth regulating compounds can influence plant development as well as rescue plant growth in stressful environments. Like many other plants, cucumber is more susceptible to salt stress [39,40]. Current study showed that P. formosus inoculation significantly improved plant growth and alleviated salinity induced stress. The presence of IAA and GAs in the CF of the fungus further rectifies our results, as both of them promote plant growth and development [41]. The presence of P. formosus in the cortical cells and their successful re-isolation by us further strengthens the active role of P. formosus in the host cucumber plants. The mutualistic relations of P. formosus with cucumber plant may have helped the host plant to mitigate the adverse effects of salinity stress. Similarly, recently Redman et al. [42] reported that IAA producing endophytic fungi can enhance rice plant growth under salinity, drought and temperature stress. Previously, Khan et al. [15,16] confirmed that GAs producing endophytic fungal strains (P. funiculosum and Aspergillus fumigatus) can ameliorate soybean plant growth under moderate and high salinity stress. Hamayun et al. [22,23] also reported that GAs secreting fungal endophytes promote soybean growth components. Many other studies also reported similar findings narrating that fungal interaction can enhance plants growth under stress conditions [9,12,43,44]. Plant growth and development depend upon leaf water contents, as salt stress trigger water deficit inside the plant tissues [4], and measurement of RWC helps to indicate stress responses of plant and relative cellular volumes [27]. Our current findings confirm earlier studies [43,44], suggesting that the fungal inoculated plants not only avoid stress but also help the plant to fetch higher water contents from sources usually inaccessible to control plants. Abiotic stresses cause higher electrolyte discharge (like K + ions) through displacement of membrane-associated Ca from plasma lemma. 
Resultantly, cellular membrane stability is damaged and aggregating higher efflux of electrolytes inside the plant tissues [27]. Our findings showed that plants associated with P. formosus had lower electrolytic leakage than control plants under salt stress. This indicated a lower permeability of plasma membrane attributed to the integrity and stability of cellular tissues due to endophyte-plant interaction as compared to control treatments [45]. On the other hand, antioxidant scavengers can enhance membrane thermostability against ROS attack, while MDA content can be used to assess injuries to plants [45]. It has been shown that peroxides of polyunsaturated fatty acids generate MDA on decomposition, and in many cases MDA is the most abundant individual aldehydic lipid breakdown product [30]. The higher MDA level is perceived with higher ROS production and cellular membrane damage. In our study, low levels of lipid peroxidation in P. formosus treated plants showed reduced cellular damage to cucumber plants growing under salinity stress as compared to control. Similarly, in saline conditions, osmo-protectants like proline accumulate to provide an energy source for plant growth and survival by preventing ionic and osmotic imbalances [46]. We observed significant aggregation of proline in P. formosus associated plants growing under salinity stress, suggesting a decline in ionic influx inside the cellular masses and rescuing cucumber plants to maintain its osmotic balance. Similarly, higher nitrogen uptake by endophyte-inoculated plants under salinity suggested the regulation of sodium ion toxicity to indirectly maintain chlorophyll and osmotic balance [47]. Sodium and chloride ion toxicity can trigger the formation of ROS which can damage cellular functioning [45][46][47][48]. Resultantly, accumulation of antioxidants inside plant can extend greater resistance to oxidative damage [48]. Higher DPPH radical scavenging activity in P. formosus inoculated plants suggest greater oxidative stress regulation than non-inoculated plants [4]. Several studies have suggested that fungal symbiosis helps plants to mitigate stress by increasing antioxidant activities [29,46,48]. Under salinity stress, phytohormones like ABA can protect plants by stomatal closure to minimize water loss and then mediates stress damage [49]. It is widely described that ABA contents in plants increase under salt stress [1,50]. However, our finding shows significantly lower ABA level in endophyte-associated plants as compared to endophyte-free plants. Previously, Jahromi et al. [51] observed the same findings after association of Glomus intraradices with lettuce plants. Similarly, when soybean were given salinity stress in the presence of phytohormones producing endophytic fungi (Penicillium funiculosum and Aspergillus fumigatus), ABA levels were declined [15,16], whilst the plants experienced lesser amount of stress. Since ABA is involved in the regulation of stress signalling during plant growth therefore, its biosynthesis can be affected by the presence of fungal interaction in abiotic stress. Although other studies suggests that fungal inoculation have increased the ABA content in leaves and roots compared with non-inoculation control plants [52]. However, the effect may fluctuate among difference class of microorganisms and plant species as some earlier reports have elaborated this [44,53]. There are several studied which narrates the same findings of low ABA levels under stress and fungal association [44]. 
Exogenous application of GA 3 improved soybean salinity stress tolerance by increasing plant biomass while accumulating less ABA [54]. Iqbal and Ashraf [55] observed that GA 3 application can result in altered levels of ABA under salinity stress in Triticum aestivum L. Although higher ABA under salinity is correlated with inhibition of leaf expansion and shoot development in different species [56], P. formosus inoculated plants counteracted the adverse effects of stress by significantly increasing leaf area and shoot length as compared to control plants. Similarly, in the case of the cucumber plant's endogenous GAs, endophytic fungal application rescued GAs biosynthesis, as the levels of bioactive GAs were much more pronounced compared to plants treated with NaCl alone. Phytohormones like GAs are widely known for their role in plant growth and various developmental processes during the plant's life cycle [1,3,57]. The normal response of a plant to stress is to reduce growth by, inter alia, increasing ABA content and reducing GAs [56,57]. GA-deficient plants are more susceptible to stress than those with higher levels of this hormone [56]. The higher amount of GA 12 in endophyte-treated plants under salinity stress elucidates the activation of the GAs biosynthesis pathway, while the higher production of GA 3 and GA 4 confirms plant growth maintenance during stress conditions. Thus, by maintaining GAs and, therefore, growth under stressful conditions, the endophyte could be having a detrimental effect on the plant's long-term survival. There are many previous reports showing the ameliorative effects of exogenous application of GAs (GA 3 /GA 4 ) and IAA on cucumber growth under abiotic stress [58][59][60], while very little or no information is available on the regulation of plant endogenous hormones in association with phytohormone-producing endophytic fungi under abiotic stress conditions.

Figure 6. Effect of NaCl-induced salt stress on the endogenous ABA content of the cucumber plants in the presence of P. formosus inoculation. Each value is the mean ± SE of 3 replicates per treatment. Different letters indicate significant (P < 0.05) differences between P. formosus inoculated plants and non-inoculated control plants as evaluated by DMRT.

Figure 7. Influence of salinity stress on the GAs (GA 3 , GA 4 , GA 12 and GA 20 ) contents of the plant's leaves with or without P. formosus inoculation. Each value is the mean ± SE of 3 replicates per treatment. Different letters indicate significant (P < 0.05) differences between P. formosus inoculated plants and non-inoculated control plants as evaluated by DMRT.

Some physiological evidence suggests that plants infected with endophytic fungi often have a distinct advantage against biotic and abiotic stress over their endophyte-free counterparts [61]. Beneficial features have been reported in infected plants, including drought acclimatisation [62,63] and enhanced tolerance to stressful factors such as high salinity [12]. Foliar application of GAs has been known for its role in plant stem elongation and mitigation of abiotic stress [54][55][56][57][58][59][60], and the same was observed in the current study, where GAs-producing endophytes counteracted the adverse effects of salinity stress.

Conclusion

P. formosus LHL10 produced many physiologically active and inactive GAs and IAA, which helped the Waito-C and Dongjin-byeo rice plants to grow well and significantly mitigated the negative impacts of salinity stress on cucumber plants. The P.
formosus LHL10 also minimized the lethal effects of salt stress on cucumber leaf tissues as evidenced from EL, RWC, photosynthesis rate, nitrogen assimilation, antioxidant activity and lipid peroxidation. The cucumber plants inoculated with P. formosus LHL10 have ameliorated their growth by possessing lower levels of stress responsive endogenous ABA and elevated GAs contents. Current study reveals that such endophytic fungal interactions can improve the quality and productivity of economically important crop species. However, the favourable role of this fungus still needs to be investigated under field conditions. Additional material Additional file 1: GC/MS-SIM analysis of HPLC fractions of pure culture filtrate of P. formosus. The table contains retention times of various purified GAs through HPLC and GC/MS SIM data of GAs KRI values and ion numbers.
Thermoelectric Transport Properties in Disordered Systems Near the Anderson Transition

We study the thermoelectric transport properties in the three-dimensional Anderson model of localization near the metal-insulator transition [MIT]. In particular, we investigate the dependence of the thermoelectric power S, the thermal conductivity K, and the Lorenz number L0 on temperature T. We first calculate the T dependence of the chemical potential from the number density of electrons at the MIT using the averaged density of states obtained by diagonalization. Without any additional approximation, we determine from the chemical potential the behavior of S, K and L0 at low T as the MIT is approached. We find that the d.c. conductivity and K decrease to zero at the MIT as T → 0 and show that S does not diverge. Both S and L0 become temperature independent at the MIT and depend only on the critical behavior of the conductivity.

Introduction

The Anderson-type metal-insulator transition (MIT) has been the subject of investigation for decades since Anderson formulated the problem in 1958 [1]. He proposed that increasing the strength of a random potential in a three-dimensional (3D) lattice may cause an "absence of diffusion" for the electrons. Today, it is widely accepted that near this exclusively disorder-induced MIT the d.c. conductivity σ behaves as |E − Ec|^ν, where Ec is the critical energy or the mobility edge at which the MIT occurs, and ν is a universal critical exponent [2]. Numerical studies based on the Anderson Hamiltonian of localization have supported this scenario with much evidence [2,3,4,5,6]. In measurements of σ near the MIT in semiconductors and amorphous alloys this behavior was also observed, with values of ν ranging from 0.5 to 1.3 [7,8,9]. It is currently believed that these different exponents are caused by interactions in the system [10]. Indeed, an MIT may be induced not only by disorder but also by interactions such as electron-electron and electron-phonon interactions, among others [11]. Nevertheless, the experimental confirmation of the critical behavior of σ allows the use of the Anderson model in order to describe the transition between the insulating and the metallic states in disordered systems. Besides the conductivity σ, experimental investigations can also be done for thermoelectric transport properties such as the thermoelectric power S [8,12,13], the thermal conductivity K and the Lorenz number L0. The behavior of these quantities at low temperature T in disordered systems close to the MIT has so far not been satisfactorily explained. In particular, some authors have argued that S diverges [12,14] or that it remains constant [15,16] as the MIT is approached from the metallic side. In addition, |S| at the MIT has been predicted [16] to be of the order of ∼200 µV/K. On the other hand, measurements of S close to the MIT conducted on semiconductors for T ≤ 1 K [13] and on amorphous alloys in the range 5 K ≤ T ≤ 350 K [8] yield values of the order of 0.1-1 µV/K. They also showed that S can be either negative or positive depending on the donor concentration in semiconductors or the chemical composition of the alloy. The large difference between the theoretical and experimental values is still not resolved. The objective of this paper is to study the behavior of the thermoelectric transport properties for the Anderson model of localization in disordered systems near the MIT at low T.
We clarify the above-mentioned difference in the theoretical calculations for S by showing that the radius of convergence of the Sommerfeld expansion used in Refs. [14,15] is zero at the MIT. We show that S is a finite constant at the MIT, as argued in Refs. [15,16]. Besides S, we also compute the T dependence of σ, K, and L_0. Our approach is neither restricted to a low- or high-T expansion as in Refs. [14,15], nor confined to the critical regime as in Ref. [16]. We shall first introduce the model in Sec. 2. Then in Secs. 3 and 4 we review the thermoelectric transport properties in the framework of linear response and the present formulations for calculating them. In Sec. 5 we shall show how to calculate the T dependence of these properties. Results of these calculations are then presented in Sec. 6. Lastly, in Sec. 7 we discuss the relevance of our study to the experiments. The Anderson Model of Localization The Anderson model [1] is described by the Hamiltonian

H = Σ_i ε_i |i⟩⟨i| + Σ_{i≠j} t_ij |i⟩⟨j|,   (1)

where ε_i is the potential energy at site i of a regular cubic lattice and is assumed to be randomly distributed in the range [−W/2, W/2] throughout this work. The hopping parameters t_ij are restricted to nearest neighbors. For this system, at strong enough disorder and in the absence of a magnetic field, the one-particle wavefunctions become exponentially localized at T = 0 and σ vanishes [2]. Illustrating this, we refer to Fig. 1, where we show the density of states ρ(E) obtained by diagonalizing the Hamiltonian (1) with the Lanczos method as in Refs. [17,18]. The states in the band tails with energy |E| > E_c are localized within finite regions of space in the system at T = 0 [2]. When the Fermi energy E_F is within these tails at T = 0 the system is insulating. Otherwise, if |E_F| < E_c, the system is metallic. The critical behavior of σ is given by

σ(E) = σ_0 |1 − E/E_c|^ν for |E| < E_c, and σ(E) = 0 otherwise,   (2)

where σ_0 is a constant and ν is the conductivity exponent [2]. Thus, E_c is called the mobility edge since it separates localized from extended states. At the critical disorder W_c = 16.5, the mobility edge occurs at E_c = 0, all states with |E| > 0 are localized [3,4] and states with E = 0 are multifractal [3,17]. The value of ν has been computed from the non-linear sigma-model [19], transfer-matrix methods [2,6], Green functions methods [2], and energy-level statistics [5,20]. Here we have chosen ν = 1.3, which is in agreement with experimental results in Si:P [9] and the numerical data of Ref. [5]. More recent numerical results [2,6], computed with higher accuracy, suggest that ν = 1.5 ± 0.1. As we shall show later, this difference only slightly modifies our results. We emphasize that the Hamiltonian (1) only incorporates the electronic degrees of freedom of a disordered system; further excitations such as lattice vibrations are not included. For comparison with the experimental results, we measure σ in Eq. (2) in units of Ω⁻¹cm⁻¹. We fix the energy scale by setting t_ij = 1 eV. Hence the band width of Fig. 1 is comparable to the band width of amorphous alloys [21]. Furthermore, the experimental investigations of the thermoelectric power S in amorphous alloys [8] have been done at high electron filling [22] and thus we will mostly concentrate on the MIT at E_c. 
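To make the model concrete, the following is a minimal sketch, not the authors' actual code, of how the Hamiltonian (1) can be built and diagonalized for a small lattice; the paper itself uses the Lanczos method for much larger systems, and the lattice size L = 8, the disorder W = 12, the random seed and the bin count below are illustrative assumptions.

import numpy as np

def anderson_hamiltonian(L, W, t=1.0, seed=0):
    """Dense 3D Anderson Hamiltonian (1) on an L^3 cubic lattice with
    periodic boundary conditions and nearest-neighbour hopping t."""
    rng = np.random.default_rng(seed)
    N = L**3
    H = np.zeros((N, N))
    idx = lambda x, y, z: (x % L) * L * L + (y % L) * L + (z % L)
    for x in range(L):
        for y in range(L):
            for z in range(L):
                i = idx(x, y, z)
                H[i, i] = rng.uniform(-W / 2, W / 2)   # random site energy
                for j in (idx(x + 1, y, z), idx(x, y + 1, z), idx(x, y, z + 1)):
                    H[i, j] = H[j, i] = t              # hopping bond
    return H

# Density of states from a histogram of eigenvalues (small lattice only).
E = np.linalg.eigvalsh(anderson_hamiltonian(L=8, W=12.0))
rho, edges = np.histogram(E, bins=60, density=True)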
Definition of the Transport Properties Thermoelectric effects in a system are due mainly to the presence of a temperature gradient ∇T and an electric field E [23]. We recall that in the absence of ∇T, the electric current density j flowing at a point in a conductor is directly proportional to E,

j = σE.   (3)

By applying a finite gradient ∇T in an open circuit, electrons, the thermal conductors, would flow towards the low-T end as shown in Fig. 2. This causes a build-up of negative charges at the low-T end and a depletion of negative charges at the high-T end. Consequently, this sets up an electric field E which opposes the thermal flow of electrons. For small ∇T, it is given as

E = S∇T.   (4)

This equation defines the thermopower S. In the Sommerfeld free-electron model of metals, S is found to be directly proportional to −T [23]. Note that the negative sign is brought about by the charge of the thermal conductors. For small ∇T, the flow of heat in a system is proportional to ∇T. Fourier's law gives this as

j_q = −K∇T,   (5)

where j_q is the heat current density and K is the thermal conductivity [23]. At low T, the phonon contribution to σ and K becomes negligible compared to the electronic part [23]. As T → 0, σ approaches a constant and K becomes linear in T. One can then verify the empirical law of Wiedemann and Franz, which says that the ratio of K and σ is directly proportional to T [24,25]. The proportionality coefficient is known as the Lorenz number L_0,

L_0 = (e/k_B)² K/(σT),   (6)

where e is the electron charge and k_B is the Boltzmann constant. For metals, it takes the universal value π²/3 [23,25]. Strictly speaking, the law of Wiedemann and Franz is valid at very low T (≲ 10 K) and at high (room) T. This is because in these regions the electrons are scattered elastically. At T ∼ 10−100 K deviations from the law are observed, which imply that K/σT depends on T. The Equations of Linear Response A more compact and general way of looking at these thermoelectric "forces" and effects is as follows: the responses of a system to E and ∇T up to linear order [26] are

j = L_11 E + L_12 (−∇T/T)   (7)

and

j_q = L_21 E + L_22 (−∇T/T).   (8)

The kinetic coefficients L_ij are the keys to calculating the transport properties theoretically. Using Ohm's law (3) in Eq. (7), we obtain

σ = L_11.   (9)

Also from Eq. (7), S, measured under the condition of zero electric current, is expressed as

S = L_12/(L_11 T).   (10)

With the same condition, Eq. (8) yields

K = (L_22 − L_12 L_21/L_11)/T.   (11)

From Eq. (6), L_0 is given as

L_0 = (e/k_B)² (L_22 − L_12 L_21/L_11)/(L_11 T²).   (12)

Therefore, we will be able to determine the transport properties once we know the coefficients L_ij. We note that in the absence of a magnetic field, as considered in this work, the Onsager relation L_21 = L_12 holds [26]. Eliminating the kinetic coefficients in Eqs. (7) and (8) in favor of the transport properties, we obtain

j = σ(E − S∇T)   (13)

and

j_q/T = S j − (K/T)∇T.   (14)

Here, j_q/T is simply the entropy current density [26]. Hence, the thermopower is just the entropy transported per Coulomb by the flow of thermal conductors. According to the third law of thermodynamics, the entropy of a system and, thus, also j_q/T will go to zero as T → 0. We can check with Eqs. (13) and (14) that this is satisfied by our calculations in the 3D Anderson model. Application to the Anderson Transition In general, the linear response coefficients L_ij are obtained through the Chester-Thellung-Kubo-Greenwood (CTKG) formulation [25,27]. The kinetic coefficients are expressed as

L_11 = ∫ A(E) [−∂f(E,µ,T)/∂E] dE,   (15)

L_12 = L_21 = −(1/|e|) ∫ A(E) (E − µ) [−∂f(E,µ,T)/∂E] dE,   (16)

and

L_22 = (1/e²) ∫ A(E) (E − µ)² [−∂f(E,µ,T)/∂E] dE,   (17)

where A(E) contains all the system-dependent features, µ(T) is the chemical potential and

f(E,µ,T) = 1/{exp[(E − µ)/k_B T] + 1}   (18)

is the Fermi function. 
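As a sketch of how Eqs. (15)-(17) translate into numerics once A(E) is identified with the critical σ(E) of Eq. (2), consider the following; it is a hedged illustration under the conventions reconstructed above, not the authors' code. Energies are measured in eV so that the factors of |e| drop out of the numerical values, E_c, σ_0 and ν are set to the values used later in the text, and the integration window of ±40 k_B T around µ is an assumption of convenience.

import numpy as np
from scipy.integrate import quad

kB = 8.617333e-5              # Boltzmann constant in eV/K
Ec, sigma0, nu = 7.5, 1.0, 1.3

def A(E):
    """A(E) = sigma(E) = sigma_0 |1 - E/Ec|^nu inside the band, Eq. (2)."""
    return sigma0 * abs(1.0 - E / Ec)**nu if abs(E) < Ec else 0.0

def mdf(E, mu, T):
    """-df/dE for the Fermi function (18), written to avoid overflow."""
    return 1.0 / (4.0 * kB * T * np.cosh((E - mu) / (2.0 * kB * T))**2)

def I(mu, T, p):
    """Integral of A(E) (E - mu)^p (-df/dE); the kink of A(E) at Ec is
    handled by quad's adaptive subdivision."""
    g = lambda E: A(E) * (E - mu)**p * mdf(E, mu, T)
    return quad(g, mu - 40 * kB * T, mu + 40 * kB * T, limit=400)[0]

def transport(mu, T):
    I0, I1, I2 = I(mu, T, 0), I(mu, T, 1), I(mu, T, 2)
    sigma = I0                                   # Eq. (9)
    S = -I1 / (I0 * T)                           # Eq. (10), in V/K
    K = (I2 - I1**2 / I0) / T                    # Eq. (11), up to 1/e^2
    L0 = (I2 * I0 - I1**2) / (I0 * kB * T)**2    # Eq. (12), units (kB/e)^2
    return sigma, S, K, L0

# At mu = Ec the thermopower should be T independent, near 228 muV/K
# for nu = 1.3, cf. Sec. 6.3.
print([round(transport(Ec, T)[1] * 1e6, 1) for T in (10, 50, 100)])

Running transport(Ec, T) for several T should reproduce, up to quadrature error, the T-independent value of S quoted later for the critical regime.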
The CTKG approach inherently assumes that the electrons are noninteracting and that they are scattered elastically by static impurities or by lattice vibrations. A nice feature of this formulation is that all microscopic details of the system, such as the dependence on the strength of the disorder, enter only in A(E). This function A(E) can be calculated in the context of the relaxation-time approximation [23]. However, an exact evaluation of the L_ij is difficult, if not impossible, since it relies on the exact knowledge of the energy and T dependence of the relaxation time. In most instances, these are not known. In order to incorporate the Anderson model and the MIT in the CTKG formulation, a different approach is taken: We have seen in Eq. (9) that the d.c. conductivity is just L_11. Thus, to take into account the MIT in this formulation, we identify A(E) with σ(E) given in Eq. (2). The L_ij in Eqs. (15)-(17) can now be easily evaluated close to the MIT without any approximation, once the T dependence of the chemical potential µ is known. Unfortunately, this is not known for the experimental systems under consideration [7,8,9,12,13], nor for the 3D Anderson model. Thus one has to resort to approximate estimations of µ, as we do next, or to numerical calculations, as we shall do in the next sections. Sommerfeld expansion in the metallic regime Circumventing the computation of µ(T), one can use that −∂f/∂E is appreciable only in an energy range of the order of k_B T near µ ≈ E_F. The lowest non-zero T corrections to the L_ij are then accessible by the Sommerfeld expansion [23], provided that A(E) is nonsingular and slowly varying in this region. Hence, in the limit T → 0, the transport properties are [28]

σ(T) = A(E_F) [1 + O(T²)],   (19)

S = −(π²/3)(k_B/|e|) k_B T [A′(E_F)/A(E_F)] [1 + O(T²)],   (20)

K = (π²/3)(k_B² T/e²) A(E_F) [1 + O(T²)],   (21)

and consequently

L_0 = (π²/3) [1 + O(T²)].   (22)

In the derivations of S, K, and L_0, the term of order T² in Eq. (19) has been ignored, as is customary. We remark that the terms of order T² in Eqs. (21) and (22) are usually dropped, too. In this case, in the metallic regime, L_0 reduces to the universal value π²/3 [23]. The above approach was adopted in Refs. [14] and [15] to study thermoelectric transport properties in the metallic regime close to the MIT. From Eq. (20), with A(E) = σ(E) of Eq. (2), the authors deduce

S = (π²/3)(k_B/|e|) k_B T ν/(E_c − E_F).   (23)

In the metallic regime, this linear T dependence of S agrees with that of the Sommerfeld model of metals [23]. However, setting A(E) = σ(E) of Eq. (2) at the MIT [14] is in contradiction with the basic assumption of the Sommerfeld expansion, since σ(E) is not smoothly varying at E_c. Exact calculation at µ(T) = E_c A different approach, taken by Enderby and Barnes, is to fix µ = E_c at finite T and later take the limit T → 0 [16]. Thus, again without knowing the explicit T dependence of µ, the coefficients L_ij can be evaluated at the MIT. For the transport properties they obtain

σ(T) = σ_0 Γ(ν+1) η(ν) (k_B T/E_c)^ν,   (24)

S = (k_B/|e|) (ν+1) η(ν+1)/η(ν),   (25)

K = (k_B² T/e²) σ_0 (k_B T/E_c)^ν {Γ(ν+3) η(ν+2) − [Γ(ν+2) η(ν+1)]²/[Γ(ν+1) η(ν)]},   (26)

and

L_0 = (ν+2)(ν+1) η(ν+2)/η(ν) − [(ν+1) η(ν+1)/η(ν)]².   (27)

Here Γ(ν) and ζ(ν) are the usual gamma and Riemann zeta functions, and η(ν) = (1 − 2^{1−ν}) ζ(ν) abbreviates the resulting combination. We see that at the MIT, S does not diverge nor go to zero but remains a universal constant. Its value depends only on the conductivity exponent ν. This is in contrast to the result (23) of the Sommerfeld expansion. In addition, we find that σ ∝ T^ν and K ∝ T^{ν+1} as T → 0. Hence, σ and K/T approach zero in the same way. This signifies that the Wiedemann and Franz law is also valid at the MIT, recovering an earlier result of Ref. [29] obtained via diagrammatic methods. However, at the MIT, L_0 does not approach π²/3 but again depends on ν. We emphasize that Eqs. (24)-(27) are exact at T values such that µ(T) − E_c = 0 [16]. 
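A quick numerical check of the ν-only dependence of Eqs. (25) and (27) as reconstructed above is sketched below; with η(s) = (1 − 2^{1−s})ζ(s), the constants for ν = 1.3 come out near the values quoted later in the text. The function names are illustrative, not from the paper.

from scipy.special import zeta

kB_over_e = 86.1733  # k_B/|e| in microvolts per kelvin

def eta(s):
    """Dirichlet eta function expressed through the Riemann zeta function."""
    return (1.0 - 2.0**(1.0 - s)) * zeta(s)

def S_MIT(nu):
    """Thermopower at the MIT, Eq. (25), in muV/K."""
    return kB_over_e * (nu + 1.0) * eta(nu + 1.0) / eta(nu)

def L0_MIT(nu):
    """Lorenz number at the MIT, Eq. (27), in units of (kB/e)^2."""
    a = (nu + 1.0) * eta(nu + 1.0) / eta(nu)
    return (nu + 2.0) * (nu + 1.0) * eta(nu + 2.0) / eta(nu) - a * a

print(S_MIT(1.3), L0_MIT(1.3))   # approximately 228 muV/K and 2.414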
Thus the T dependence of σ, S, K, and L_0 for a given electron density can only be determined if one knows the corresponding µ(T). High-temperature expansion In this section, we study the lowest-order corrections to the results obtained above with µ(T) = E_c. We do this by expanding the Fermi function (18) for |E_c − µ(T)| ≪ k_B T. In addition, we assume µ(T) ≈ E_F for the temperature range considered. This procedure gives the lowest-order correction (28) to σ. For the thermopower, the leading-order correction can be obtained without expanding f(E, µ, T) in L_11 and L_12. This yields a constant (29) for S at the MIT [15]. For K and L_0, we again have to use the expansion of f(E, µ, T) as in (28) in order to get non-trivial terms. The resulting expressions are cumbersome and we thus refrain from showing them here. We remark that the basic ingredients used in the high-T expansion are somewhat contradictory: the expansion is valid for high T such that |E_c − µ(T)| ≪ k_B T, whereas the assumption µ(T) ≈ E_F is justified only at low T. At present, we thus have various methods of circumventing the explicit computation of µ(T). However, their ranges of validity do not overlap, and it is a priori not clear whether the assumptions for µ(T) are justified for S or any of the other transport properties close to the MIT. In order to clarify the situation, we numerically compute µ(T) in the next section and then use the CTKG formulation to compute the thermal properties without any approximation. The Numerical Method In Eqs. (15)-(17), the explicit T dependence of the coefficients L_ij occurs in f(E,µ,T) and µ(T). More precisely, knowing µ(T), it is straightforward to evaluate the L_ij. We recall that, for any set of noninteracting particles, the number density of particles n can be determined as

n(µ,T) = ∫ ρ(E) f(E,µ,T) dE,   (30)

where ρ(E) is again the density of energy levels (per unit volume) as in Fig. 1. Vice versa, if we know n and ρ(E), we can solve Eq. (30) for µ(T). The density of states ρ(E) for the 3D Anderson model has been obtained for different disorder strengths W as outlined in Sec. 2. We determine ρ(E) with an energy resolution of at least 0.1 meV (∼ 1 K). Using ρ(E), we first numerically calculate n at T = 0 for the metallic, critical and insulating regimes using the respective Fermi energies. Next, keeping n fixed at n(E_F), we numerically determine µ(T) for small T > 0 such that |n(E_F) − n(µ,T)| is zero. Then we increase T and record the respective changes in µ(T). Using this result in Eqs. (15)-(17) of the CTKG formulation, we compute the L_ij by numerical integration and subsequently determine the T-dependent transport properties (9)-(12). We consider the disorders W = 8, 12, and 14, where we do not have large fluctuations in the density of states. These values are not too close to the critical disorder W_c, so that we can clearly observe the MIT of Eq. (2). The respective values of E_c have been calculated previously [3] to be close to 7.0, 7.5, and 8.0. Within our approach, we choose E_c to be equal to these values. Results and Discussions Here we show the results obtained for W = 12 with E_c = 7.5. The results for σ, K, and L_0 are the same at −E_c and E_c since they are functions of L_11, L_22 and L_12² only. On the other hand, this is not true for S. The Chemical Potential In Fig. 3, we show how µ(T) behaves for the 3D Anderson model at E_F − E_c = 0 and ±0.01. To compare results from different energy regions we plot the difference of µ(T) from E_F. 
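The inversion of Eq. (30) described in The Numerical Method above is simple to sketch. The following hypothetical implementation tabulates ρ(E) on a grid, fixes n at its T = 0 value, and root-finds µ(T); it then checks itself against the free-electron gas used below as a benchmark. The grid, temperatures and normalization are illustrative assumptions.

import numpy as np
from scipy.integrate import trapezoid
from scipy.optimize import brentq

kB = 8.617333e-5   # eV/K

def fermi(E, mu, T):
    # Clip the argument so the Fermi function (18) is overflow-safe at low T.
    x = np.clip((E - mu) / (kB * T), -700.0, 700.0)
    return 1.0 / (1.0 + np.exp(x))

def density(mu, T, E, rho):
    """n(mu, T) of Eq. (30) evaluated on a tabulated density of states."""
    return trapezoid(rho * fermi(E, mu, T), E)

def mu_of_T(EF, T, E, rho):
    """Solve n(mu, T) = n(EF, T=0) for mu at fixed filling."""
    n0 = trapezoid(np.where(E <= EF, rho, 0.0), E)   # T = 0 filling
    return brentq(lambda mu: density(mu, T, E, rho) - n0, E[0], E[-1])

# Free-electron benchmark: rho(E) ~ sqrt(E), normalized so that n(EF) = 1,
# against the Sommerfeld result mu(T) = EF [1 - (pi^2/12)(kB T/EF)^2].
EF = 7.5
E = np.linspace(1e-6, 30.0, 40000)
rho = 1.5 * np.sqrt(E) / EF**1.5
for T in (100.0, 300.0):
    print(mu_of_T(EF, T, E, rho),
          EF * (1.0 - np.pi**2 / 12.0 * (kB * T / EF)**2))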
We find that µ(T ) behaves similarly in the metallic and insulating regions and at the MIT for both mobility edges at low T . In all cases we observe µ(T ) ∝ T 2 . Furthermore, we see that µ(T ) at −E c equals −µ(T ) at E c . This symmetric behavior with respect to E F = µ reflects the symmetry of the density of states at E = 0 as shown in Fig. 1. For comparison and as a check to our numerics, we also compute with our method µ(T ) of a free electron gas. The density of states is [23] ρ(E) = 3 2 and we again use E F = E c = 7.5. We remark that this value of the mobility edge is in a region where ρ(E) increases with E in an analogous way as ρ(E) for the Anderson model at −E c . Thus, as shown in Fig. 3, µ(T ) of a free electron gas is concave upwards as in the case of the Anderson model at −E c . We also plot the result for µ(T ) obtained by the usual Sommerfeld expansion for Eq. (30), We see that our numerical approach is in perfect agreement with the free electron result. The d.c. Conductivity In Fig. 4 we show the T dependence of σ. The values of E F we consider and the corresponding fillings n are given in Tab. 1. The conductivity at T = 0 remains finite in the metallic regime with σ/σ o = |1 − E F /E c | ν , because (−∂f /∂E) → δ(E − E F ) in Eq. (15) as T → 0. Correspondingly, we find σ = 0 in the insulating regime at T = 0. In the critical regime, σ(T → 0) ∼ T ν , as derived in Ref. [16], see Eq. (24). We note that as one moves away from the critical regime towards the metallic regime one finds within the accuracy of our data that σ ∼ T 2 . We observe that in the metallic regime σ increases for increasing T . This is different from the behavior in a real metal where σ decreases with increasing T . However, as explained in Sec. 2, the behavior of σ in Fig. 4 is due to the absence of phonons in the present model. We also show in Fig. 4 results of the Sommerfeld expansion (19) and the high-T expansion (28) for σ. Paradigmatic for what is to follow we see that the radius of convergence of the Sommerfeld expansion decreases for E F → E c and in fact is zero in the critical regime. On the other hand, the high-T expansion is very good in the critical regime down to T = 0 at E c = E F . The small systematic differences between our numerical results and the high-T expansion for large T are due to the differences in µ(T ) and E F . The expansion becomes worse both in the metallic and insulating regimes for larger T . All of this is in complete agreement with the discussion of the expansions in Sec. 4. The Thermopower In Fig. 5, we show the behavior of the thermopower at low T near the MIT. In the metallic regime, we find S → 0 as T → 0. At very low T , S ∝ T as predicted by the Sommerfeld expansion (23). We see that the Sommerfeld expansion is valid for not too large values of T . But upon approaching the critical regime, the expansion becomes unreliable similar to the case of the d.c. conductivity of Sec. 6.2. This behavior persists even if we include higher order terms in the derivation of S such as the O(T 2 ) term of Eq. (19) as shown in Fig. 5. Before discussing the critical regime in detail, let us turn our attention to the insulating regime. Here, S becomes very large as T → 0. We have observed that it even appears to approach infinity. A seemingly divergent behavior in the insulating regime has also been observed for Si:P [30], where it has been attributed to the thermal activation of charge carriers from E F to the mobility edge E c . 
However, there is a simpler way of looking at this phenomenon. We refer again to the open circuit in Fig. 2. Suppose we adjust T at the cooler end such that ∇T remains constant. As T → 0 both σ and K vanish in the case of insulators; for K we show this in the next section. This implies that as T decreases it becomes increasingly difficult to move a charge from T to T + δT. We would need to exert a larger amount of force, and hence a larger E, to do the job. From Eq. (4), this implies a larger S value. In the critical regime, i.e., setting E_F = E_c, we observe in Fig. 5 that for T → 0 the thermopower S approaches a value of 228.4 µV/K. This is exactly the magnitude predicted [16] by Eq. (25) for ν = 1.3. In the inset of Fig. 5, we show that the T dependence of S is linear. The nondivergent behavior of S clearly separates the metallic from the insulating regime. Furthermore, just as for σ, the Sommerfeld expansion for S breaks down at E_F = E_c, i.e., the radius of convergence is zero. Thus, the divergence of Eq. (23) at E_F = E_c reflects this breakdown and is not physically relevant. On the other hand, the high-T expansion [15] nicely reflects the behavior of S close to the critical regime, as also shown in Fig. 5. For E_F = E_c, the high-T expansion (29) assumes a constant value of S for all T due to setting µ(T) = E_F. This is approximately valid; the differences are fairly small, as shown in the inset of Fig. 5. We stress that there is no contradiction between S > 0 in our calculations and S < 0 in Ref. [16]. In Fig. 6, we compare S in energy regions close to E_c and to −E_c [31]. Clearly, they have the same magnitude, but S < 0 at −E_c and S > 0 at E_c. The two cases mainly differ in their number density n. At −E_c the system is at low filling with n = 2.26%, while at E_c the system is at high filling with n = 97.74%. The sign of S implies that at low filling the thermoelectric conduction is due to electrons and we obtain the usual picture as in Fig. 2, where the induced field E is in the direction opposite to that of ∇T. At high filling, S > 0 means that E is directed parallel to ∇T. This can be interpreted as a change in charge transport from electrons to holes. We remark that this sign reversal also occurs in the insulating as well as in the critical regime. In Fig. 7, we take the data of Fig. 5 and plot them as a function of µ − E_c. Our data coincide with the isothermal lines which were calculated according to Ref. [16] by numerically integrating L_12 and L_11 for a particular T to get S. We observe that all isotherms of the insulating (µ > E_c) and the metallic (µ < E_c) regimes cross at µ = E_c and S = 228.4 µV/K. Comparing with Eq. (23), we again find that the Sommerfeld expansion does not give the correct behavior of S in the critical regime. The data presented in Fig. 7 suggest that one can scale them onto a single scaling curve. In Fig. 8, we show that this is indeed true when plotting S as a function of (µ − E_c)/k_B T. We emphasize that the scaling is very good; the small width of the scaling curve is only due to the size of the symbols. The result for the high-T expansion is indicated in Fig. 8 by a solid line. It is good close to the MIT. In the metallic regime, the Sommerfeld expansion correctly captures the decrease of S for large negative values of (µ − E_c)/k_B T. We remark that a scaling with (E_F − E_c)/k_B T, as predicted in Ref. [15], is approximately valid. The differences are very small, as shown in the inset of Fig. 8. 
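The collapse of Fig. 8 can be seen numerically by reusing kB, Ec and transport() from the sketch given after the CTKG coefficients and scanning µ and T; the grid of offsets below is an illustrative assumption.

import numpy as np

# S collapses onto one curve in the variable x = (mu - Ec)/(kB T):
# pairs (T, dmu) with the same x should print nearly the same S.
for T in (25.0, 50.0, 100.0):
    for dmu in np.linspace(-0.02, 0.02, 9):
        x = dmu / (kB * T)
        S = transport(Ec + dmu, T)[1]
        print(round(x, 2), round(S * 1e6, 1))   # S in muV/K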
The Thermal Conductivity and the Lorenz Number In Fig. 9, we show the T dependence of the thermal conductivity K. We see that K → 0 as T → 0, whether in the metallic or the insulating regime. We note again that this simple behavior is due to the fact that our model does not incorporate phonon contributions. The T dependence of K varies depending on whether one is in the metallic or the insulating regime and on how far one is from the MIT. Directly at the MIT, we find that K → 0 as T^{ν+1}, confirming the T dependence of K as given in Eq. (26). Near the localization MIT, the T dependence of K/T is thus the same as for σ, in agreement with Ref. [29]. Again, we see that the Sommerfeld expansion (21) is reasonable only at low T in the metallic regime. As for σ and S, we see that the high-T expansion is again fairly good in the vicinity of the critical regime. At this point we are able to determine the behavior of the entropy in the system as T → 0. In the metallic regime, S and K vanish as T → 0, while in the critical and insulating regimes, σ and K vanish as T → 0. Applying these results to Eqs. (13) and (14) yields that in all regimes the entropy current density j_q/T vanishes as T → 0. Therefore, we find that the third law of thermodynamics is satisfied by our numerical results for the 3D Anderson model. Next, we present the Lorenz number (6) as a function of T in Fig. 10. In the metallic regime, we obtain the universal value π²/3 as T → 0. Note that for a metal this value should hold up to room T [23]. However, our results for the Anderson model show a nontrivial T dependence. One might have hoped that the higher-order terms in Eq. (22) could adequately reflect the T dependence of our L_0 data. However, this is not the case, as shown in Fig. 10. This indicates that even if we incorporate higher-order T corrections, the Sommerfeld expansion will not give the right behavior of L_0 near the MIT. We emphasize that the radius of convergence of Eq. (22) is even smaller than for σ, S and K. Similarly, the high-T expansion is also much worse than previously for σ, S and K. Thus, in addition to the results for the critical regime, we only show in Fig. 10 the results for nearby data sets in the insulating and metallic regimes. The T dependence of L_0 is linear, as shown in the inset of Fig. 10. As before for S, the high-T expansion does not reproduce this. At the MIT, L_0 = 2.4142. This is again the predicted [16] ν-dependent value as given in Eq. (27). In the insulating regime, one can show analytically, by taking the appropriate limits, that L_0 approaches ν + 1 as T → 0. In agreement with this, we find that L_0 = 2.3 at T = 0 in Fig. 10. At first glance, it may appear surprising that a transport property in the insulating regime could be determined by a universal constant of the critical regime such as ν. However, in the evaluation of the coefficients L_ij, the derivative of the Fermi function at any finite T decays exponentially, and thus one will always have a nonzero overlap with the critical regime. In the evaluation of Eq. (12), this ν dependence survives in the limit T → 0. In real materials, we expect the relevant high-energy transfer processes to be dominated by other scattering events and thus L_0 should be different. Nevertheless, for the present model, this ν dependence holds. Possible Scenarios in the Critical Regime The results presented in Sec. 6.3 for the thermopower at the MIT show that S = 228.4 µV/K for ν = 1.3. 
This value is two orders of magnitude larger than those measured near the MIT [8,12,13]. However, as mentioned in the introduction, the conductivity exponents found in many experiments are either close to ν = 0.5 or to 1 [7], and one might hope that this difference may explain the small experimental value of S. Also, recent numerical studies of the MIT by transfer-matrix methods together with nonlinear finite-size scaling find ν = 1.57 ± 0.03 [6]. In Tab. 2 we summarize the values of S and L_0 at the MIT for these conductivity exponents. We see that all S values still differ by two orders of magnitude from the experimental results. Furthermore, we note that our results for S and L_0 are independent of the unit of energy. Even if, instead of 1 eV, we had used t_ij = 1 meV, which is appropriate for the doped semiconductors [7,9,13,30], we would still obtain the values of Tab. 2. Thus our numerical results for the thermopower of the Anderson model at the MIT show a large discrepancy from the experimental results. This may be due to our assumption of the validity of Eq. (2) over a large range of energies, or due to the absence of a true Anderson-type MIT in real materials, or due to problems in the experiments. A different scenario for a disorder-driven MIT has been proposed by Mott, who argued that the MIT from the metallic state to the insulating state is discontinuous [32]. Results supporting such a behavior have been found experimentally [11,33]. According to this scenario, σ drops from a finite value σ_min to zero [32] at T = 0 at the MIT. This minimum metallic conductivity σ_min was estimated by Mott to be of the order of

σ_min ∼ e²/(ℏa),   (34)

up to a numerical prefactor, where a is some microscopic length of the system, such as the inverse of the Fermi wave number, a ≈ k_F⁻¹. As summarized in Ref. [11], experiments in non-crystalline materials seem to indicate that σ_min > 300 Ω⁻¹cm⁻¹. Let us assume the behavior of σ(E) close to the MIT to be

σ(E) = σ_min for |E| < E_c, and σ(E) = 0 otherwise,   (35)

with σ_min = 300 Ω⁻¹cm⁻¹. Using the numerical approach of Sec. 5, we obtain S = 119.5 µV/K at the MIT (a numerical check of this value is sketched after the conclusions). This value is still rather large, and thus the assumption of a minimum metallic conductivity as in Eq. (35) cannot explain the discrepancy from the experimental results. We remark that the order of magnitude of S is not changed appreciably even if we add to the metallic side of Eq. (35) a term as given in Eq. (2) with σ_0 a few hundred Ω⁻¹cm⁻¹ and ν = 1. Lastly, we note that the transport properties calculated for W = 8 and 14 do not differ from those obtained for W = 12 in both the metallic and insulating regions, provided we stay at temperatures T ≲ 100 K. For S and L_0 at the MIT we obtain the same values as for W = 12. Again we observe that both S and L_0 approach these values linearly in T, but with different slopes. Our results show that the higher the disorder strength, the smaller the magnitude of the slope. Conclusions In this paper, we investigated the thermoelectric effects in the 3D Anderson model near the MIT. The T dependence of the transport properties is determined by µ(T). We were able to compute µ(T) by numerically inverting the formula for the number density n(µ,T) of noninteracting particles. Using the result for µ(T), we calculated the thermoelectric transport properties within the Chester-Thellung-Kubo-Greenwood formulation of linear response. As T → 0 in the metallic regime, we verified that σ remains finite, S → 0, K → 0 and L_0 → π²/3. On the other hand, in the insulating regime, S → ∞. This we attribute to both σ and K going to zero. 
Thus, it becomes increasingly difficult to achieve equilibrium and, hence, the system requires E → ∞. For L_0, we obtained a universal value of ν + 1 even in the insulating regime. Directly at the MIT, the thermoelectric transport properties agree with those obtained in Ref. [16]. Namely, as T → 0, we found σ ∼ T^ν and K ∼ T^{ν+1}, while L_0 → const. The thermopower S also remains nearly constant in the critical regime and, in particular, it does not diverge at the MIT, in contrast to earlier calculations using the Sommerfeld expansion at low T [14]. Here we showed that the difference is not so much due to an order-of-limits problem, but rather reflects the breakdown of convergence of the Sommerfeld expansion at the MIT [15]. Our result is supported by the scaling of the S data for different values of T and E_F onto a single curve which is continuous across the transition. Some of the experiments [8,12] on S have been influenced by the Sommerfeld expansion such that the authors plot their results as S/T. We remark that in such a plot the signature of the MIT is hard to identify, since S/T at the MIT diverges as T → 0 solely due to the decrease in T. Our results suggest that plots as in Figs. 5 and 7 should show the MIT more clearly. The value of S is at least two orders of magnitude larger than observed in experiments [8,12,13]. This large discrepancy may be due to the ingredients of our study: namely, we assumed that a simple power-law behavior of the conductivity σ(E) as in Eq. (2) is valid even for E ≪ E_c and E ≫ E_c. Furthermore, we assumed that it is enough to consider an averaged density of states ρ(E). While the first assumption is of course crucial, the second assumption is of less importance, as we have checked: Local fluctuations in ρ(E) will lead to fluctuations in the thermoelectric properties at finite T, but do not lead to a different T → 0 behavior; S remains finite, with values as given in Tab. 2. Moreover, averaging over many samples yields a suppression of these fluctuations and a recovery of the previous behavior at finite T. In this context, we remark that (naively assuming all other parts of the derivation are unchanged) implications of many-particle interactions, such as a reduced single-particle density of states at E_F [34], will only modify the T dependence of µ. Consequently, the T dependencies of S, σ, K, and L_0 may be different, but their values at the MIT remain the same. Our results also suggest that the critical regime is very small. Namely, as the filling increases slightly from n = 97.74% to 97.80%, the behavior of the system changes from metallic to critical and finally to insulating. To the best of our knowledge, such small changes in the electron concentration have not been used in the measurements of S as in Refs. [8,12,13]. We emphasize that such a fine tuning of n is not essential for measurements of σ, as is apparent from Fig. 4. Of course, one may also speculate [16] that these results suggest that a true Anderson-type MIT has not yet been observed in the experiments.
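As a closing numerical aside, the σ_min scenario of the preceding section can be checked in a few lines: with the step-function conductivity (35) and µ = E_c, the kinetic coefficients reduce to dimensionless half-line integrals, and S is fixed by their ratio, independent of σ_min and T. This is a sketch under the conventions reconstructed above, not the authors' code.

import numpy as np
from scipy.integrate import quad

kB_over_e = 86.1733   # k_B/|e| in microvolts per kelvin

# With sigma(E) = sigma_min below Ec and mu = Ec, substitute
# x = (Ec - E)/(kB T): S = (kB/|e|) * I1/I0 with the integrals below.
mdf = lambda x: 0.25 / np.cosh(x / 2.0)**2       # -df/dx of the Fermi function
I0 = quad(mdf, 0.0, np.inf)[0]                   # = 1/2
I1 = quad(lambda x: x * mdf(x), 0.0, np.inf)[0]  # = ln 2
print(kB_over_e * I1 / I0)                       # ~119.5 muV/K, as in the text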
2014-10-01T00:00:00.000Z
1999-04-26T00:00:00.000
{ "year": 1999, "sha1": "93072d0576011406cd1420a4e0a63f8b87982e57", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/9904362", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "dfe00fc53aab585205bf1d0e8517712ebfa8c571", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
536825
pes2o/s2orc
v3-fos-license
Spontaneous or induced regression of cancer: a novel research strategy for Ayurvidya. Regression of cancer has been an intriguing phenomenon for medical science. This article brings out some interesting data on this issue, with a view to generating interest among Ayurvedic researchers in the possibilities of Ayurveda in induced regression of cancer. "Indeed, if a phenomenon appears just once in a certain aspect, we are justified in holding that, in the same conditions, it must always appear in the same way. If, then, it differs in behavior, the conditions must be different… I said, indeed, that we must never neglect anything in our observation of fact, and I consider it indispensable… Nothing is accidental, and what seems to us accident is only an unknown fact whose explanation may furnish the occasion for a more or less important discovery." Claude Bernard 1. A sudden disappearance or a massive regression of cancer has been well documented in the medical literature for more than a century. According to the Catholic Church records, St. Peregrine was spontaneously cured of his leg cancer after his intense faithful prayers. He lived a fruitful life up to 80 years and died in the year 1346 A.D. 2. He was canonized as the patron saint of cancer and malignant diseases. In India, two great saints, Bhagwan Sri Ramakrishna Paramhansa and Sri Ramana Maharsi, died of laryngeal cancer and sarcoma respectively. They are our patron saints. Sir William Osler, in 1901, published a paper entitled "The Medical Aspects of Carcinoma of the Breast, with a Note on the Spontaneous Disappearance of Secondary Growths" 3. In conclusion, Osler stated that the phenomena observed "are among the most remarkable which we witness in the practice of medicine, and the truth of the statement that no condition, however desperate, is quite hopeless". Such well-documented regressions of cancer are "whispers of nature" to which we must listen more closely and with total absorption. Then there is a chance that we may be able to follow Claude Bernard's instructions to understand and create the very conditions which led to the natural regression of cancer. This is the research trail we have followed for many years. It was on 20th April 1966 that one of the authors (ABV) noted down, in his journal, a hypothesis to explain the regressions due to A.C.S., an anticancer substance 4. How the principles and practices of Ayurveda can dovetail into this hypothesis is the partial story that we would love to share with all of you. That path emerges in Ayurvidya: Ayurveda and Life Sciences. In the manuscript of Bhrigu-Samhita, cancer has been described, and to Maharsi Bhrigu, Shukracharya has expressed: "O Deva! If therapy for such a lesion (cancer) is discovered in India, or even a beginning is made, then I'll doubtlessly consider Bharat to be a blessed nation." This aspect was covered in the M.D. dissertation of one of the authors (ABV), "The Medical Aspects of Bhrigu-Samhita". Even certain remedies were described for cancer in the manuscript of Bhrigu-Samhita 5. The remedies described were Ashwagandharishta, Yogarajguggulu, Vasantkusumakar Rasa, Ganges water, Heerak Bhasma, auto-urine therapy etc. 
In an Agastya Nadi reading at Chennai, the following prescription for cancer was successfully employed in a woman with metastatic cancer: Crocus sativus, Shorea robusta, Pongamia pinnata, Eclipta alba, Centella asiatica, Lippia nodiflora and Papaver somniferum. The woman then had cancer regression and lived for two years more 6. A leading surgeon, Tilden C. Everson from the University of Illinois College of Medicine, reported that over 1000 cases of spontaneous regression of cancer were known in the literature in 1964 7. The number has steadily increased over the last four decades, with better documentation. Everson chose to analyse 130 cases which had robust histopathologic and clinical documentation. Over 50% of these 130 were represented by four types of cancer: (1) neuroblastoma, (2) hypernephroma, (3) choriocarcinoma and (4) malignant melanoma. Everson, after studying the conditions in each case, suggested the following as putatively responsible for the regressions: (1) hormonal withdrawal, (2) unusual response to usually inadequate therapy, (3) fever or acute infections, (4) allergic and immune reactions, (5) removal of the carcinogenic agent, (6) interference with the nutrition of the cancer, (7) complete surgical removal and (8) incorrect histological diagnosis. This was a bold but too broad an attempt. Hence there were few takers of the hypotheses for further research. But the review clearly established that a host response, immunological or other, does exist in the biological control of cancer. The immunology of cancer got a major boost 8. But the practical consequences were not commensurate with the basic research efforts. However, certain major advances have already been made. While several heroic attempts have been made to cure cancer, one of the landmark discoveries has been the work of William B. Coley. He devoted a lifetime to understanding how infections can cause partial or temporary regressions of cancer. He induced erysipelas and succeeded in getting complete regressions in some patients with sarcoma etc. 9. But the obstacles were immense, as the therapy itself was quite hazardous. One would strongly recommend to all cancer scientists the review of the influence of bacterial infections and of bacterial products (Coley's toxins) on malignant tumors in man 10. One of the authors of that review, H.C. Nauts, 23 years later reported another case of metastatic melanoma in whom intralesional injections of bacterial toxins led to a regression; the patient was clinically well 5½ years after he had lung metastases 11. The further work on bacterial lipopolysaccharides and the induction of fever, proinflammatory cytokines and enhanced immune responses has vindicated the pioneering work of William B. Coley, an unsung hero of medicine. Jwara, in Ayurveda, is considered Rog-Raj, the king of all diseases. Jwara is not merely a rise in the body temperature (pyrexia) 12. The current research on the molecular mechanisms of fever has more than vindicated the Ayurvedic experience and treatises on Jwara. Bacterial lipopolysaccharide (LPS) induces several proinflammatory cytokines: tumour necrosis factor (TNF-α), interleukin (IL-1β), interferon-γ etc. 13. These cytokines do activate tumour killing. Recently, monoclonal antibodies that blocked TGF-β and IL-10 receptors led to tumour rejection in mice infected with Friend leukemia virus. So the steps in the management of Jwara also become very relevant to tumour regression. 
Immunotherapy of cancer has to consider the basic understanding of Jwara in Ayurveda to attain a greater degree of success. Antitumour effects of challenging antigens were demonstrated when the responses were associated with hypersensitivity to the agents. Hence mast cells, eosinophils, IgE etc., besides killer T cells, do play a role in cancer regression 14. Dendritic cells loaded with tumour antigens led to regression of malignant melanoma in patients. Ayurvedic udvartan may have immune-enhancing activity. A shift in the paradigm can be made by incorporating the correlation of the ancient insights into Jwara and the modern discoveries in the molecular mechanisms of fever. This approach may open up a novel understanding of the pathogenesis, progression and management of cancer. We have to learn how periodic moderate fever may enhance the host immunological surveillance against cancer cells 15. It may be that the overuse of antibiotics and antipyretics, for even a mild fever, negatively influences the host control of the transformed cells. Recorded cases of regression of cancer after fever or infections do suggest a need to look in this direction 16. The fact that TNF-α, IFN-γ, apoptosis inducers etc. do have a role in cancer cell death emphasizes this research path. In Ayurvidya, we have to address this question at multiple levels of biological organization, from a cancer cell line to metastatic cancer in man. We have to reassess Coley's approach with state-of-the-art Ayurvidya (Life Sciences + Ayurveda) 17. Not only controlled induction of fever but also appropriate attention to Brinhan and Langhana is essential, as per the individual cancer patient's status. Fasting has been shown to benefit in reducing tumour growth in animals, and regressions have been reported in cancer patients after fasting. "Big eaters" have been stated to be prone to cancer, particularly with a larger intake of salt 18. But it is equally desirable to investigate the regressions reportedly induced by Ayurvedic or herbal remedies. There has been massive screening of medicinal plants and herbs for anticancer activity, viz. cytotoxic properties. The vinca alkaloids of Catharanthus roseus, taxol from Taxus baccata, camptothecins, podophyllotoxins etc. have evolved as drugs 19. But the screening of Ayurvedic remedies/plants to enhance immune surveillance by the host, to prevent carcinogenesis or to regress metastases has not been encouraged much by the funding agencies. It is interesting to note that as early as 1925, K.K. Chatterji, a surgeon from Calcutta, published in The Lancet some dramatic regressions of cancer with neem oil and copper salts of margosic acid 20. He had also studied ethyl esters of various oils. He said, "In some cases treatment with ethyl esters, copper margosate has cleared up the growth and removed all evidence of malignancy". The injections were also given locally into the tumour mass. Chatterji did refer to the properties of Neem in Ayurveda. Table 1 lists some of the Ayurvedic plants which show promise for a team research effort to induce cancer prevention, relief and regression. The approach has to take the Reverse Pharmacology path, starting from Ayurvedic experience that is well documented 21. We have to move from experience to exploratory studies and then to well-designed experiments, basic and applied. Mackay cited a case of regression in breast cancer with metastatic pleural effusion and ascites 26. 
She could not swallow even liquids and starved. The pleural and peritoneal fluids got absorbed and the tumours regressed dramatically. While publishing the case, Mackay gave it an interesting title: "A case that seems to suggest a clue to the possible solution of the cancer problem". It is proposed by us that the putative anticancer substance sequestered in the malignant fluids reached a critical plasma and tissue level on reabsorption of the fluids. This probably led to apoptosis of the malignant cells. Novak, in 1960, showed regression of metastases and primary tumours in several cases after injection of an ether extract of urine 27. He also published the X-rays showing regression of metastases. However, our hypothesis of an anticancer substance in human urine emerged because Cole had reported that urinary bladder cancers spontaneously regressed in several patients after ureterosigmoidostomy. One of us (ABV) had proposed that A.C.S. excreted in the urine, after ureteric implantation in the sigmoid, got absorbed from the colon and reached critical plasma levels to eventually induce the regression through triggering a cascade of vascular, allergic and apoptotic events. To reinforce this hypothesis, we came across a case of spontaneous regression at Yale Medical School. Mrs. Elizabeth Morrow (Unit No. 71-42-56) was admitted and partially operated on for abdominal wall leiomyosarcoma in 1960. She underwent another operation for her stress incontinence. She developed a vesicovaginal fistula, which could not be repaired despite several surgical attempts. Finally, when her fistula was repaired, she presented with metastases, in 1967. So for the six long years her tumour stayed regressed (suppressed), apparently due to a continuous absorption of A.C.S. from her vagina through the fistula. We have come across another case report of leiomyosarcoma, this time of the bladder. The bladder was resected and the ureters were reimplanted. For six years, there was no recurrence 28. This was ascribed to the better prognosis in children. But it could be that A.C.S. played a role. In the year 1947, Williams and Walthers had shown inhibition of tumours with extracts of urine 29. In 1963, Szent-Gyorgyi et al. proposed in Science a possible new approach to cancer therapy. They detected in human children's urine two natural substances, which they termed retine (for retarding cancer) and promine (for promoting cancer). In mice, solid Krebs 2 ascites tumours were transplanted behind the scapula. Daily subcutaneous injections of retine, in 0.1 ml of peanut oil, were given for 10 days. Retine tended to stop the further growth of the tumours and could make cancer already developed regress. In a further study with adult human urine, they could confirm the results with retine 22. However, all their efforts to isolate and characterize retine have failed 30. Sarkar et al., from India, had shown an inhibitory effect of a human urinary extract on DMBA-induced mammary tumours in the Holtzman rat 31. They ascribed it to an LH-like fraction. In India several cases of cancer regression have been anecdotally reported 32,33. However, a systematic experiential study with cow's urine has been recently initiated at Bhavan's SPARC and some other centres. We had started observing cancer patients who themselves opted for auto-urine therapy way back in 1966. In a male patient (RJ) with cervical metastases, auto-urine therapy for 31 days led to a 65% reduction in the size of the lymph nodes, which became soft. 
However, the patient was lost to follow-up, but on enquiry death was reported within a year. Another female patient (RB) had ovarian cancer diagnosed in 1992. She was a very well documented case (Tata BJ 15570, HN 9252744). She was operated on twice and given a full course of cancer chemotherapy, radiation etc. But she developed metastases and malignant ascites. There was no response to the continued treatment. She then used to fast often. She started taking her urine five times a day and also had urine packs on her body. She continued auto-urine, three times a day, for seven years. She took 5-7 small pieces of Tinospora cordifolia and prepared a tea-like extract, which she took instead of water and continued for 2½ years. She had totally changed her food habits and took figs, vegetables, soup, mung and chapatti. Her ascites disappeared and the metastases regressed completely. When we saw her, it was seven years after the first definitive diagnosis. We now understand that it is very difficult for non-cancer investigators to conduct clinical therapeutic research in cancer patients. But if a team-based, collaborative approach is evolved for exploring the role of Ayurvedic modalities (plants, Gomutra, Pathya etc.), there is a chance that we may successfully apply the insights gained from spontaneous regression to the control of malignancy. Then we may all realize that cancer is often a chronic disease and needs long-term care through lifestyle changes, dietary care and biological modalities to enhance host control of tumours. From Asadhya roga we may move to Kashtasadhyata and eventually sadhyata. Ayurveda would have a major global impact even if we partially succeed.
2014-10-01T00:00:00.000Z
2003-01-01T00:00:00.000
{ "year": 2003, "sha1": "80407b1ab9ec797bc307e39066884da500c7bb99", "oa_license": "CCBYNCSA", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "80407b1ab9ec797bc307e39066884da500c7bb99", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
214193698
pes2o/s2orc
v3-fos-license
Social responsibility and competitiveness in hotels: The role of customer loyalty Article history: Received: October 9, 2019; Received in revised format: November 25, 2019; Accepted: December 31, 2019; Available online: December 31, 2019. The concept of Social Responsibility (SR) has evolved over the last decade, and most organisations now care about their society and the environment. While incorporating SR programs into organizational strategies is mostly optional, the benefits of "doing good business" may enhance competitiveness. The hospitality industry is no exception: the hotel sector can achieve several desirable outcomes by adopting SR initiatives. Although many services are intangible by nature and are evaluated based on perceptions, the goodwill created by social initiatives can lead to an advantage through brand image. In return, a "good social reputation" can create and maintain customer loyalty in hotels and hospitality. This paper investigates the role of SR in competitiveness through the mediating role of customer loyalty. A model is developed and examined in the hotel industry in Qatar. Statistical results show that all research hypotheses were accepted and that customer loyalty can be retained by incorporating SR values affecting competitiveness in the hospitality industry in Qatar. © 2020 by the authors; licensee Growing Science, Canada. Introduction Customers may be attracted to organisations by perceiving how these organisations' core values align with social behaviours (Hui Tsai et al., 2015). Social Responsibility and Sustainability (SR&S) can be described as how organisations incorporate their social commitments into their core strategies and operations (Pino et al., 2015). As the effects of SR&S on business and the community can be enormous in both the short and the long term, it is necessary that the level of SR awareness be highlighted and recognized in different organisations (Kang et al., 2010). As such a shift is recognised, organisations need to create value by incorporating SR&S into their core strategies and operational practices, ultimately contributing to society. Accordingly, an increased awareness has emerged among organisations of the concept of Social Responsibility and how it affects performance and competitiveness (Font et al., 2016). Zappala (2003) argues that organisations in different fields plan to remain socially responsible to keep their chances of survival in modern competitive environments. Tsai et al. (2015) state that organisations which are able to incorporate environmental concerns would be able to advance safety and efficiency and reduce their expenses. This can be reflected in an enhanced competitive advantage and may thus influence consumer behaviour and loyalty (Kang & Lee, 2010; Lee et al., 2013). In other words, organisations can turn social responsibility into a regular routine by listening to consumers (Romani et al., 2013). This is why many studies have focused on how to align social responsibility with business strategies (Kamaei, 2015). In line with this, the present research investigates the impact of Social Responsibility on competitiveness, exploring the role of customer loyalty with evidence from the hotel industry in Qatar. A theoretical background on Social Responsibility is presented, followed by model development and hypothesis testing. 
Social Responsibility and Sustainability: emergence of a concept Social Responsibility and Sustainability (SR&S) may be considered a crucial factor in generating and sustaining company status to enhance the competitive advantage of organisations. Menichini and Rosati (2014) argue that this wave is of great importance given current world concerns, as organisations need to reconsider their relationship with environmental concerns to create a link with consumers and society and to establish themselves as effective members of a society. Walker (2007) argues that the first attempts to conceptualize Social Responsibility date back to the 1960s. In turn, Hui Tsai et al. (2015) defined Social Responsibility as the ethical responsibility of organisations to enhance their positive influences while decreasing their undesirable impact on their community surroundings. In other words, Social Responsibility can be seen as the economic, ethical, legal and discretionary obligations of an organisation towards the community (Font et al., 2016). Ultimately, social responsibility can be a sense of responding to the community and the capacity of managing connections between an organisation and the community (Herzig & Moon, 2013). Davis (1973) presented a more comprehensive model of Social Responsibility, explaining why and how organizations should act so as to improve and develop both themselves and the society to which they belong (Davis, 1973). In a more modern trend, Wood (1991) suggested three main outcomes of Social Responsibility in order to trace the concept, namely social policies, social programs and social impact. Carroll (1999) later argued that social responsibility in any organisation can be divided into four viewpoints: legal, economic, altruistic and ethical. Wood (2010) in turn reconsidered these viewpoints as being associated not only with fiscal concerns but also with social and communal ones. Needless to say, organisations play an economic role in enhancing the economy, where the production of goods and services that serve societal needs is a primary target of any organisation. Additionally, organisations need to cope with legal requirements, where communities expect that such organisations will always comply with rules and regulations rather than focusing solely on making profit. Moreover, actions that embody norms and expectations should be met by organisations while dealing ethically with different stakeholders (Carroll, 1999). Furthermore, altruistic requirements relate to organisations performing and meeting communal hopes like any good citizen (Akter, 2015). Social Responsibility and Customer loyalty Customer loyalty may be perceived through consumers' word-of-mouth reactions, intention to buy, intention to support, and satisfaction (Kang & Hustvedt, 2013). Consumers are the most crucial entity of organizational activities (Marin et al., 2009). Baber et al. (2016) argue that word of mouth is a concept that designates consumer preference, which in turn may shape these customers' attitudes, turning them into loyalty. Pino et al. (2015) state that if one consumer endorses a specific brand, this slants other customers' minds towards that brand when making a buying decision. In the same manner, Romani et al. (2013) concluded that customers appreciate organisations that are famous for their socially responsible actions and would share such experiences positively. 
In the same domain, Kang and Hustvedt (2013) confirmed that organisational efforts to be socially responsible are appreciated by customers, which shapes their behaviours and loyalty towards the organisation. Al-Hawari (2006) added that satisfied consumers show favourable behavioural outcomes, including an intention to repurchase. This is why Loureiro et al. (2012) conclude that Social Responsibility positively affects consumer loyalty. In accordance, a cogent link between consumer attitudes and communal support has also been examined. Lombart and Louis (2014) also showed that consumer loyalty leads to satisfaction and directly affects Social Responsibility. In conformity, Marin et al. (2009) found a substantial positive connection between Social Responsibility and consumer loyalty. Social Responsibility and Competitiveness An organisation's commitment to its communal tasks can then have a notable effect on customer loyalty and thus on performance (Sandhu & Kapoor, 2010). Herzig and Moon (2013) argue that Social Responsibility leads to a long-standing effect on performance and competitiveness. Many authors have highlighted the effect of Social Responsibility on competitiveness and performance (Chen & Carey, 2009; Kaplan, 2004; Issa, 2011; Omidi & Shafiee, 2018). Font et al. (2016) confirm that Social Responsibility increases market share, which is a corporate measure of advancing competitiveness. Moreover, Vázquez and Hernandez (2014) focused on exploring the role Social Responsibility can play in enhancing the ability to sell, which improves competitiveness and performance. By comparing American and Japanese organizations, Kaplan (2004) showed that sales and profitability are aligned directly with Social Responsibility. Vázquez and Hernandez (2014) concluded that Social Responsibility has a direct, positive effect on competitiveness and performance. Social Responsibility, Customer loyalty and competitiveness: model development Social Responsibility can then affect both competitiveness and customer loyalty. Fig. 1 illustrates the conceptual model developed from the theoretical background. Social Responsibility would directly affect competitiveness and customer loyalty. In addition, customer loyalty may also play a mediating role in affecting competitiveness in organisations. It has already been reported that Social Responsibility affects Corporate Social Performance (CSP), and some consider CSP an indicator of Social Responsibility outcomes (Hui Tsai et al., 2015). Lizarzaburu (2012) stated that organisations need to emphasise recognizing and reacting to their communal needs to improve their social recognition and prestige. Such socially responsible organisations need to build their economic, social and environmental levels to create positive effects on their communities and thus on their own performance (Omidi & Shafiee, 2018). Wang et al. (2014) indicated that organisations' higher competitiveness and performance can be seen as a direct outcome of Social Responsibility actions. In line, Issa (2011) showed that such socially responsible competitiveness can be built through reputation, where organisations need to achieve the value of competitive advantage, which can be reflected in mental image or superiority. 
In such a manner, organisations would be able to enhance their market shares and increase their ability to recruit and to administer a positive situation across their various human resource activities, including training and development, recruiting, compensation and even their appraisal system (Farooq et al., 2014). This in turn will be reflected in a loop of satisfaction in society and may lead to satisfaction among customers, producing positive reactions and support in the form of loyalty (Limpanitgul et al., 2013). This is why Chen and Carey (2009) stated that organisational performance can be developed by changing and renovating social responsibility and through loyal customers. Accordingly, a research conceptual model is suggested in Fig. 1, with the following hypotheses:

H1: There are mutual effects between Social Responsibility and Customer Loyalty.
H2: There are mutual effects between Social Responsibility and organisational Competitiveness and Performance.
H3: There are mutual effects between Customer Loyalty and organisational Competitiveness and Performance.

Methodology

In order to test the research hypotheses, a survey questionnaire was developed and distributed to managers and customers of hotels in Qatar. With more than 300 hotels, mostly four and five stars, a sample of 540 respondents participated in this study. A Structural Equation Model (SEM) was used to analyse the data in AMOS, where Principal Component Analysis (PCA) was performed to minimise the number of measured components and remove cross-influences. In addition, a Confirmatory Factor Analysis (CFA) was adopted to validate the measurement elements; this also helps in recognising and settling the factors of highest effect. Three main variables are adopted in this research, namely Social Responsibility, Consumer Loyalty, and Competitiveness and Performance (Fig. 2). The first variable, Social Responsibility (SR), is measured at three levels: Economic (ECO), Environmental (ENV) and Social (SOC) (Vázquez & Hernandez, 2014). The second variable, Consumer Loyalty (CL), is seen from two perspectives: Customer Reaction (CR) and Support (S) behaviour (Marin et al., 2009; Romani et al., 2013). Competitiveness and Performance, the third variable in this study, is divided into three main components: reputation and popularity, recruitment and employment desirability (Tsai et al., 2015), and also higher market share (Vázquez & Hernandez, 2014).

Fig. 2. Research constructs: Social Responsibility (Economic, Social, Environmental); Customer Loyalty (Customer Reaction, Support); Competitiveness & Performance (Reputation, Market share, Recruitment).
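The paper's analysis was run in AMOS, but the same measurement-and-structural model can be sketched with open-source tools. Below is a minimal, hedged sketch using the Python package semopy; the indicator column names (eco, env, soc, cr, sup, rep, share, recr) and the file name hotel_survey.csv are hypothetical placeholders for the survey items, not the study's actual variables.

```python
# Minimal CFA/SEM sketch with semopy, assuming the survey responses are in a
# pandas DataFrame whose columns hold the (hypothetical) indicator scores.
import pandas as pd
import semopy

model_desc = """
# measurement model (CFA part)
SR   =~ eco + env + soc
CL   =~ cr + sup
COMP =~ rep + share + recr

# structural part: SR drives loyalty and competitiveness,
# and loyalty mediates the effect on competitiveness
CL   ~ SR
COMP ~ SR + CL
"""

data = pd.read_csv("hotel_survey.csv")   # hypothetical file
model = semopy.Model(model_desc)
model.fit(data)

print(model.inspect())            # loadings, path estimates, p-values
print(semopy.calc_stats(model))   # fit indices (CFI, RMSEA, ...)
```

Composite reliability and AVE can then be computed from the standardised loadings that inspect() reports, mirroring the Table 1 checks described next.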
Table 1 shows the validity and reliability of the measurement model. In order to confirm the model, both Composite Reliability (CR) and Average Variance Extracted (AVE) were measured. As seen in Table 1, all research constructs exceeded 0.50 for AVE, which indicates that the model can be confirmed; in conformance, all CR values exceed 0.60. In validating the model, all factor loadings are higher than 0.50, with p less than 0.01; in other words, convergent validity can be assured (Fig. 2).

Table 1. Reliability and validity of research constructs.

Following confirmation of the validity and reliability of the model, the measurement model was used for confirming significance. As shown in Table 2, the model provides a satisfactory fit to the data. Additionally, Fig. 2 shows that all factor loadings for SR, LOY and COMP exceed 0.50, which confirms the significance of the model. As Fig. 2 also shows, there is no problem in the estimation, and the Standardised Estimation Coefficient (β) is significant for all paths under bootstrapping at p = 0.05, so the estimates can be considered fitting. Next, the reliability and validity of the whole model were confirmed: both AVE and CR were used, as shown in Table 3, where both can be confirmed. With the model fit established, the regression links among the research variables were examined along with the Standard Error and Critical Value in order to confirm significance (Table 4). As seen, all relations among the research variables are significant.

Discussion and Conclusion

It can be concluded that the different Social Responsibility dimensions (Economic, Social and Environmental Responsibilities) significantly affect competitiveness and performance, including popularity, market share, recruitment and other human resource activities in the hotel sector. The model not only focused on this direct link but also explored the mutual effects between Customer Loyalty (Customer Reaction and supportive behaviour) and Social Responsibility on one side, and between Customer Loyalty and competitiveness on the other, all of which were proven significant. It is notable that these results agree with others obtained in different sectors and/or with less formal models (e.g. work published in 2014). So, customer loyalty can play a mediating role between Social Responsibility and hotel performance. In other words, the competitiveness and performance of a hotel can be achieved when the hotel becomes reputable and popular in serving its society, leading to loyalty and thus a stronger competitive position. Social Responsibility can act as a cornerstone in achieving a strong image for hotels and for the hospitality industry in general. When hotels adopt a Social Responsibility agenda, consumers are affected indirectly, leading to loyalty: customers engage in favourable behaviour that supports the hotel directly and even indirectly, and once customers become devoted to the hotel, they respond positively, affecting its overall performance.

In conclusion, Social Responsibility efforts can strongly affect hotel performance. Hotels may therefore think of getting closer to their societies and customers by enhancing accountability towards society and hotel guests. In addition, continuous attention to service quality, together with observing ecological rules and looking after the fiscal conditions of society, can help guarantee the steady, sustainable success of a hotel. Realising that customers consider not only services but also how those services affect their social life should lead hotels to incorporate Social Responsibility efforts, achieving communal satisfaction and the success of their hotels. However, there is a clear need for more research in this promising field, especially in hotels, hospitality and services in general. Though Social Responsibility can be of tremendous importance for competitiveness, it has not received enough consideration in hotels and services in general. It seems critical, then, that hotels familiarise themselves with the notions of Social Responsibility.
More in-depth research may also be needed to better understand the notion of Customer Loyalty; even with the numerous studies in the field, the possible links between Social Responsibility and Customer Loyalty need more attention to explore potential mutual effects. More specifically, there is a need to listen more to society so that hotels can incorporate proper Social Responsibility strategic actions into their plans.
Historic air pollution exposure and long-term mortality risks in England and Wales: prospective longitudinal cohort study

Introduction: Long-term air pollution exposure contributes to mortality, but there are few studies examining effects of very long-term (>25 years) exposures.

Methods: This study investigated modelled air pollution concentrations at residence for 1971, 1981, 1991 (black smoke (BS) and SO2) and 2001 (PM10) in relation to mortality up to 2009 in 367 658 members of the longitudinal survey, a 1% sample of the English Census. Outcomes were all-cause (excluding accidents), cardiovascular (CV) and respiratory mortality.

Results: BS and SO2 exposures remained associated with mortality decades after exposure: BS exposure in 1971 was significantly associated with all-cause (OR 1.02 (95% CI 1.01 to 1.04)) and respiratory (OR 1.05 (95% CI 1.01 to 1.09)) mortality in 2002-2009 (ORs expressed per 10 μg/m3). The largest effect sizes were seen for more recent exposures and for respiratory disease. PM10 exposure in 2001 was associated with all outcomes in 2002-2009, with stronger associations for respiratory (OR 1.22 (95% CI 1.04 to 1.44)) than for CV mortality (OR 1.12 (95% CI 1.01 to 1.25)). Adjusting PM10 for past BS and SO2 exposures in 1971, 1981 and 1991 reduced the all-cause OR to 1.16 (95% CI 1.07 to 1.26), while the CV and respiratory associations lost significance, suggesting confounding by past air pollution exposure; there was no evidence for effect modification. Limitations include limited information on confounding by smoking and exposure misclassification of historic exposures.

Conclusions: This large national study suggests that air pollution exposure has long-term effects on mortality that persist decades after exposure, and that historic air pollution exposures influence current estimates of associations between air pollution and mortality.

INTRODUCTION

While the impact of air pollution on mortality in the short term (days) and medium term (<10 years) is now well established, there are relatively few studies assessing the long-term (>10 years) impact of air pollution,1-10 with even fewer assessing the very long term (25+ years).2 3 8-10 Only a small number of these4-6 had exposure data at more than one time point. Like many other developed countries, the UK experienced high levels of air pollution in the past, including the infamous London smog episode of December 1952,11 since when air pollution levels have fallen to much lower levels. Changes in air pollution concentrations in the UK are well documented as, uniquely, the UK had a comprehensive national air quality monitoring network running from the 1950s to the 1990s measuring black smoke (BS) and sulfur dioxide (SO2) arising from domestic and industrial coal and fossil fuel combustion, then the major sources of emissions. Thereafter, networks switched to monitoring nitrogen dioxide (NO2) (from the early 1990s) and particulate matter with a diameter of 10 µm or less (PM10) (from the mid-1990s), as transport emissions became the largest source of air pollution.12 13 The present study uses a very large nationally representative British cohort to consider the impact of air pollution over 38 years of follow-up. Three a priori hypotheses were investigated:
1. Historic air pollution (ie, of several decades previously) is associated with later mortality risk.
2. The mortality risks associated with a given exposure decrease over subsequent decades.
3. Air pollution exposures in previous decades interact with recent exposures to affect mortality risk.

Key messages. What is the key question? What is the impact of very long-term (>30 years) air pollution exposure on mortality? What is the bottom line? Historic air pollution exposure has long-term effects on mortality that persist over 30 years after exposure, and these potentially also influence current estimates of associations between air pollution and mortality. Why read on? This is one of the longest running studies to look at the health effects of air pollution, using air pollution estimates independently assessed at multiple time points from contemporaneous monitoring data in a large cohort followed for 38 years.

METHODS

This investigation used a long-running census-based study, the Office for National Statistics (ONS) Longitudinal Study, which contains linked census and life events data on a representative 1% sample of the population of England and Wales. The initial sample was drawn from the 1971 census.14 15 For this investigation, the study was restricted to members of the cohort of all ages present at the 1971 census who were present at each subsequent census (1981, 1991 and 2001), were either traced up to 2009 or had died, and were not identified through general practice (GP) registration as having left the country. Exclusions (figure 1) were made for data inaccuracies, those who died in 1971, and those not UK born (who may have had different previous air pollution exposures). By constructing a closed cohort, we were able to estimate air pollution exposures for each individual across the entire period 1971-2009.

Air pollution exposures in 1971, 1981, 1991 and 2001

Land use regression techniques were used to model BS and SO2 annual concentrations in 1971, 1981 and 1991 on 1 km grids. The models for BS and SO2 have been described in detail previously12 and were developed with a range of variables including information on land cover, major and minor roads, and the X-Y coordinates of each monitoring site. Models were fitted against concentration data from national monitoring station sites where operational days in the year exceeded 75%, which involved a total of 966 sites for BS and 825 sites for SO2. Model building used 80% of network sites; the remaining independent, randomly stratified 20% sample was retained for model validation. The validation statistics from the independent subset gave r2 values for BS of 0.41, 0.38 and 0.34 for 1971, 1981 and 1991, respectively, and for SO2 of 0.57, 0.26 and 0.31. Values of mean (fractional) bias were low in all years (ie, <−0.1), which suggests that predicted values were within 20% of observed monitored concentrations. Land use regression techniques were also used to model PM10 on 100 m grids in 2001.16 Leave-one-out validation for PM10 in 2001 gave an r2 value of 0.37.

Integration of air pollution and confounder data into the longitudinal survey

Air pollution exposure estimates were produced for UK grids and wards (ONS small-area geographical units), and Centre for Longitudinal Study Information & User Support (CeLSIUS) staff matched these to individuals; precise geolocation of longitudinal survey (LS) participants is not made available to researchers. For 1971, individuals were assigned the annual average BS and SO2 concentration of the 1 km grid in which their residence was located.
For the other years (1981, 1991 and 2001), the pollutant surfaces (ie, regular grids) were intersected with ward boundaries and area-weighting was used to calculate the average value of each pollutant within each ward. Information on smoking was not available at individual level in the cohort, so smoothed district-level lung cancer mortality relative risks for 2002-2009 (International Classification of Diseases (ICD)-10 codes C33-C34) were used as a proxy measure for cumulative smoking over the past 20+ years.17 Lung cancer risks were calculated with Bayesian smoothing methods from ONS data held by the Small Area Health Statistics Unit.

Statistical analysis

Statistical analyses of individual-level data were conducted in person at the ONS offices using Stata V.11 (Stata, College Station, Texas, USA). Descriptive analyses were conducted for all variables. Logistic regression analyses (died/survived) were used to investigate associations between BS and/or SO2 exposure in 1971, 1981 and 1991 and PM10 in 2001 and the risk of death in 1972-1981, 1982-1991, 1992-2001 and 2002-2009, respectively. Mortality outcomes were all-cause excluding accidents, cardiovascular (CV) and respiratory mortality, and the constituent subgroups of coronary heart disease (CHD), stroke, respiratory infections, COPD and lung cancer. For the ICD codes used, see the online supplementary appendix A table A1. We conducted analyses by decade to align with census years, the periodicity of which reflects marked changes in air pollution sources in the UK, from predominantly fossil fuel burning in the 1970s to domination by traffic-based sources with an increasing contribution from diesel engines by 2001. Adjustments were made (i) for age and sex and (ii) additionally for social class of the individual (Registrar General occupation) and area (quintiles of Carstairs deprivation index), population density (not used in any of the exposure models) and geographical region (in 1971). Sensitivity analyses were conducted adjusting for the smoking proxy (lung cancer risks), restricting analyses to non-movers in the 5 years prior to the 1971 census, and adjusting for exposures in other years. All air pollution, population density and age variables were centred prior to regression analyses. To evaluate whether past BS or SO2 exposure modified the effect of PM10 on mortality in 2002-2009, we introduced interaction terms between tertiles of BS/SO2 and PM10 into the statistical models and examined risks by exposure tertiles. As SO2 was highly correlated with same-year BS, we did not conduct two-pollutant analyses. Finally, through a piecewise linear model for the tertiles of BS/SO2 in 1971 and the three main outcomes (all-cause mortality, all respiratory mortality and all CV mortality), we visually assessed the presence of a concentration-response relationship.
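To make the decade-specific regressions concrete, here is a minimal sketch of one such model in Python with statsmodels. The column names (bs71, died_8291, age71, sex, social_class, carstairs_q, pop_density) are hypothetical stand-ins for the LS variables, not the study's actual names; exponentiating the coefficient of the centred exposure, scaled per 10 μg/m3, yields the OR as reported in the paper.

```python
# Hedged sketch: logistic regression of death in 1982-1991 on BS exposure
# in 1971, adjusted roughly as in model (ii). Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("ls_cohort.csv")            # hypothetical extract

# centre the exposure and express it per 10 ug/m3 so that exp(coef)
# is directly the OR per 10 ug/m3
df["bs71_c10"] = (df["bs71"] - df["bs71"].mean()) / 10.0
df["age71_c"] = df["age71"] - df["age71"].mean()

model = smf.logit(
    "died_8291 ~ bs71_c10 + age71_c + C(sex) + C(social_class)"
    " + C(carstairs_q) + pop_density",
    data=df,
).fit()

conf = model.conf_int()
print("OR per 10 ug/m3:", np.exp(model.params["bs71_c10"]))
print("95% CI:", np.exp(conf.loc["bs71_c10"]).values)
```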
RESULTS

Descriptive analyses

The analyses included 367 658 individuals followed from 1971 to 2009 with non-missing data (figure 1), comprising 71.67% of the initial cohort. The main reasons for exclusion were emigration (n=77 265) and missing or incomplete data (n=25 992). Those excluded from the analysis were significantly (p<0.001) younger in 1971 than those included (mean age 25 vs 38 years), more likely to be male (54% vs 48%), more likely to have lived in the most deprived areas in 1971 (23% vs 20%) and more likely to have moved between 1966 and 1971 (59% vs 39%) (see online supplementary appendix A table A2). The mean PM10 exposure in 2001 was 20.7 μg/m3 (10th-90th centile 18-24). The highest exposures were seen in urban metropolitan areas: BS was highest in the northern regions of England and Wales, and the highest SO2 and PM10 exposures were seen in London (see online supplementary appendix A table A3). All exposures decreased with increasing individual-level social class and increased with increasing deprivation of the area of residence (see online supplementary appendix A table A3). PM10 exposure in 2001 was weakly correlated with BS and SO2 in earlier years (all r<0.45) (table 2). Within-year BS and SO2 exposures were highly correlated (r>0.7). Correlations were also moderate to high (r∼0.6-0.7) for BS exposures between years, but there was a greater range for SO2 (r∼0.45-0.7).

BS exposures in 1971-1991

There were statistically significant associations between BS exposure in 1971 and all-cause mortality in all subsequent decades through to 2002-2009 (figure 2 and table 3); CV and respiratory mortality showed patterns similar to all-cause mortality. BS exposures in 1981 and 1991 were also significantly associated with all-cause, CV and respiratory mortality in subsequent decades (see figure 2 and online supplementary appendix B table B1). Figure 3 shows stronger effects for more recent BS exposures. The largest associations were between BS exposure in 1991 and respiratory mortality in 1991-2001, with OR 1.38 (95% CI 1.22 to 1.57) (see online supplementary appendix B table B1). In the subgroup analyses (see online supplementary appendix B table B1), risks were marginally higher for CHD than for stroke mortality, and higher for COPD and lung cancer than for respiratory infections, especially for more recent exposures. The highest risks observed were for COPD mortality, which remained significantly associated with past exposures up to the most recent decade, while associations for respiratory infections generally reduced in magnitude and became non-significant over time. Adjustment for confounders slightly reduced the ORs for exposures in 1971 (table 3).

Additional adjustment for past exposures to BS in 1971, 1981 and 1991 reduced the ORs for each outcome, and the OR for CV mortality lost statistical significance. The impact of adjusting for past air pollution exposures was greater than that produced by adjusting for individual social class and area-level deprivation. Similar effects were seen whether past BS, SO2 or both were adjusted for, as expected from the high correlations between them.

DISCUSSION

This study investigated air pollution exposures in 370 000 individuals in a national census-based cohort followed for 38 years. In line with our prior hypotheses, we found that historic exposures to BS and SO2 were associated with increased risks of all-cause, CV and respiratory mortality in England and Wales over 30 years later, and that the mortality risks associated with a given exposure generally decreased over time. Subgroup analyses showed the highest risks for COPD and lung cancer mortality. Adjusting for past BS or SO2 exposures resulted in slightly lower observed mortality associations with recent PM10 exposure (suggestive of confounding), but there was no clear evidence that higher air pollution exposures in earlier life resulted in greater or lesser susceptibility to PM10 (effect modification). We saw the highest associations with respiratory mortality, consistent with other UK-based studies investigating long-term BS exposures in the 1950s and 1960s,18 1970s,7 1980s and 1990s5 and PM10 in the 2000s,19 and with a population-registry-based study in the Netherlands examining PM10 exposure in 2001.20
In contrast, studies of large American cohorts have found the highest associations of particulates with CV mortality,4 6 as did a recent study in Rome,21 while the large European Study of Cohorts and Air Pollution Effects (ESCAPE) analyses found associations with all-cause9 but not non-malignant respiratory8 or CV mortality.10 The reasons for these differences are unclear, but may include differences in death certification practices between countries.22 23 Our effect sizes for BS were very similar to those of recently reported previous British studies,5 with overlapping CIs. The ESCAPE study was able to adjust for smoking, which we were not able to do for this outcome, as our smoking proxy was based on lung cancer. Some earlier UK studies of air pollution in the 1970s,1 25 including one using the LS,1 did not find associations between mortality and particulate air pollution; this may relate to previous, less accurate air pollution assessment based on the nearest monitoring station. Few studies have examined long-term effects of SO2 exposure. Studies conducted in the UK5 19 25 and the American Cancer Society study in the USA26 have generally found statistically significant associations of mortality with SO2, while studies in other parts of Europe2 3 27 have not. In the present study, BS and SO2 levels were highly correlated (both originated from fossil fuel combustion), so it is difficult to attribute the mortality effects clearly to one pollutant.

Was the apparent persistence of air pollution mortality risk due to highly correlated exposures?

Our results showed continued effects of air pollution from 1971 on mortality in subsequent decades up to 2002-2009, suggesting long-term persistence of risk. Between 1966 (5 years prior to the start of the study in 1971) and 2001, 72% of individuals moved at least once, so the results are not merely a function of living in the same place. Air pollution exposures were assigned to an individual's ward of residence (or 1 km grid of residence) at each census year. BS and SO2 exposures were moderately correlated between decades (r∼0.6-0.7), so an alternative explanation for the apparent persistence of risks to 2002-2009 is that exposure in 1971 may have been acting as a proxy for more recent exposures. However, while adjustment for BS in subsequent years (1981 and 1991) did reduce the effect estimates, those for all-cause and respiratory mortality remained statistically significant (see online supplementary appendix B table B2). It is also possible that 1971 exposure levels correlate highly with, and act as a proxy for, earlier-life exposures when levels were much higher.

Mortality risks associated with a given exposure decrease over time

Mortality risks were generally highest in the decades immediately following exposure. This is consistent with a previous UK study finding that BS and SO2 exposure in the last 4 years gave higher risks of respiratory mortality than exposures 12-16 years prior.5 It is also consistent with the Harvard Six Cities follow-up study,4 which found similar risks for annual air pollution compared with mean air pollution over the study period (1980-1998), suggesting that air pollution during the last year may be important.
The follow-up of the American Cancer Society study6 did not find clear differences in risks for average PM2.5 and SO2 concentrations 1-5, 6-10 and 11-15 years before death, but high correlations between the time periods may have reduced the ability to detect differences. Increased risks per unit pollutant were observed for more recent exposures even though air pollution levels fell markedly (fourfold lower mean BS concentrations between 1971 and 1991). This might imply a steeper concentration-response curve at lower exposures,28 although this was not the case for BS exposure in 1971, where a steeper concentration-response curve was seen at higher exposures (figure 4); or it might be due to more accurate exposure estimates for more recent periods. However, the latter seems unlikely because the exposure models performed better for earlier periods. Alternatively, changes in air pollution sources over time, with reductions in industry and household emissions and increases from road traffic,12 13 may have led to changes in toxicity. Because of qualitative changes in particulate composition over time, we did not adopt a conversion factor between PM10 and BS. However, the two measures are moderately to highly correlated (r=0.5-0.8),29 and a number of previous studies have found associations between BS and CV30 and respiratory mortality,31 with comparable effect sizes expressed per IQR of exposure.29 The larger size of the PM10 associations relative to BS per unit mass in the present study may suggest greater toxicity of recent particles or changes in population susceptibility. A 'harvesting' effect of sensitive individuals in our closed cohort might at least partly account for waning effects of air pollution over time, but this would be inconsistent with the increased risks for more recent exposures, unless sensitivity also increased over time.

Interaction of past with recent air pollution exposures

We did not find evidence for effect modification by past exposures to BS and SO2; put another way, we did not find that higher exposures in earlier life had a multiplicative effect on the mortality risk associated with more recent PM10 exposure. Our results did suggest that the relationship between mortality and PM10 exposure was confounded by past exposure (ie, that past air pollution exposures are independently associated with both PM10 exposure and mortality outcomes, so that not accounting for past exposures will affect observed risk estimates for more recent exposures), although the overall impact of this was small. This is one of the first studies to investigate this question directly. This confounding is unlikely to be solely due to correlation between the exposures, as PM10 exposure in 2001 was not strongly correlated with BS and SO2 in earlier years (all r<0.45).

Exposure assessment

Previous studies investigating very long-term air pollution exposures have had limited historical exposure information. ESCAPE studies8-10 relied on back-extrapolation from modelled exposures in 2008-2011, studies in Stockholm used emissions data3 32 to estimate historic SO2 exposure without independent measured concentrations to evaluate the models, while other studies have used data from the nearest monitoring station.2 5 26 We used land-use regression models validated against contemporaneous monitored concentrations.
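As an illustration of this model-and-validate workflow, the sketch below fits an ordinary least-squares land-use regression on an 80% sample of monitoring sites and evaluates r2 and mean fractional bias on the held-out 20%. The predictor names and input file are hypothetical placeholders, not the study's actual variable set.

```python
# Hedged sketch of a land-use regression (LUR) with 80/20 hold-out
# validation, loosely following the description in the text.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

sites = pd.read_csv("bs_sites_1971.csv")    # hypothetical: one row per monitor

predictors = ["urban_frac", "industry_frac", "major_road_len",
              "minor_road_len", "x_coord", "y_coord"]   # hypothetical names
X, y = sites[predictors], sites["bs_annual_mean"]

# 80% of sites for model building, random 20% held out for validation
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

lur = LinearRegression().fit(X_tr, y_tr)
pred = lur.predict(X_te)

print("hold-out r2:", r2_score(y_te, pred))
# mean fractional bias: 2*(pred - obs)/(pred + obs), averaged over sites
mfb = (2.0 * (pred - y_te) / (pred + y_te)).mean()
print("mean fractional bias:", mfb)
```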
Model performance was moderate (r2 0.3-0.5), with the weakest performance for SO2 in 1981 (r2=0.26) and the best for SO2 in 1971 (r2=0.57),12 but the lower r2 values in later years may be partly related to lower variability in concentrations rather than to model performance. We used air pollution estimates at the highest spatial resolution available, namely 1 km (BS and SO2) and 100 m (PM10) exposure grids. This may have contributed to some exposure misclassification, though this is likely to be non-differential and to result in bias towards the null. Because of limited access to the location of individuals in the LS cohort, we were unable to adjust for spatial autocorrelation, but other studies suggest that the impact of this is small.6 19

Other study limitations

While the cohort was originally a population-based sample, losses to follow-up occurred and were higher among those living in more deprived areas. We consider that this is more likely to have led to an underestimate than an overestimate of the associations between air pollution and mortality. We cannot rule out an effect of residual confounding on the observed associations, but previous work suggests that it is unlikely to have a large impact on our conclusions. The LS does not have information on individual-level smoking, so, as in some other studies,20 we used area-level lung cancer risk as a proxy; adjustment for this did not affect the observed effect estimates (see online supplementary appendix B table B2). While some comparable studies with individual-level information on smoking have found larger effects in non-smokers,26 other studies have found no or only small confounding effects of smoking,7 9 19 or that smoking was not related to the exposure,21 with larger impacts from adjustments for socioeconomic status.19 The association of smoking with air pollution exposures is likely to operate through deprivation, with which it is highly correlated,33 and in the present study we adjusted for socioeconomic status at both the individual and the area level.

Conclusions

This study suggests that air pollution exposure may have persistent, long-lasting impacts on mortality risk, but that more recent air pollution exposure is associated with higher relative risks than past exposure. Concentration-response function estimates for recent long-term exposures may be slightly overestimated if previous exposures are not taken into account. The findings may be particularly relevant to countries such as China that are experiencing high but declining levels of particulate concentrations, with a transition from coal to cleaner fuels and increases in emissions from traffic.

Open Access: This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/
Viscoelastic Models of Tidally Heated Exomoons

Tidal heating of exomoons may play a key role in their habitability, since the elevated temperature can melt the ice on the body even without significant stellar radiation. The possibility of life is intensely studied on Solar System moons such as Europa or Enceladus, where the surface ice layer covers a tidally heated water ocean. Tidal forces may be even stronger in extrasolar systems, depending on the properties of the moon and its orbit. For studying the tidally heated surface temperature of exomoons, we used a viscoelastic model for the first time. This model is more realistic than the widely used, so-called fixed Q models, because it takes into account the temperature dependence of the tidal heat flux and the melting of the inner material. With the use of this model we introduce the circumplanetary Tidal Temperate Zone (TTZ), which strongly depends on the orbital period of the moon and less on its radius. We compared the results with the fixed Q model and investigated the statistical volume of the TTZ using both models. We found that the viscoelastic model predicts 2.8 times more exomoons in the TTZ with orbital periods between 0.1 and 3.5 days than the fixed Q model for plausible distributions of physical and orbital parameters. The viscoelastic model gives more promising results in terms of habitability, because the inner melting of the body moderates the surface temperature, acting like a thermostat.

Introduction

No exomoons have been discovered yet, but such measurements are expected in the next decade. Bennett et al. (2014) present a candidate detected via the MOA-2011-BLG-262 microlensing event: the best-fit solution for the data implies the presence of an exoplanet hosting a sub-Earth-mass moon. This measurement, however, needs confirmation, since an alternative solution is also presented. Nevertheless, it indicates that the era of exomoon detections is about to begin. The most favourable method for exomoon discoveries is photometry: an exoplanetary transit may reveal the presence of a moon in the light curve. Details of this method are thoroughly discussed in the literature (Simon et al. 2007; Kipping 2009a,b; Simon et al. 2010; Kipping et al. 2012; Simon et al. 2012). In addition, the habitability of exomoons is under examination as well (see e.g. Kaltenegger 2010; Heller & Barnes 2013; Heller et al. 2014). Hinkel & Kane (2013) investigated the influence of eccentric planetary orbits on moons and concluded that a moon with sufficient atmospheric heat redistribution may sustain a temperature suitable for life on its surface even if it orbits a planet that temporarily moves outside the habitable zone (HZ) during each orbital period.

Solar System analogues may serve as useful examples for different exomoon types. The satellites in the Solar System are diverse, and life on them is a puzzling question. The icy surfaces of Europa and Enceladus probably cover water oceans, which may provide a suitable environment for life (Carr et al. 1998; Kargel et al. 2000; Collins & Goodman 2007; Iess et al. 2014). Tidal and radiogenic heat keeps the interior of these bodies warm and hence maintains the water in a liquid state. In fact, these internal heat sources drive the eruption of plumes on Enceladus, and a similar phenomenon was discovered on Europa as well (Porco et al. 2006; Roth et al. 2014). The idea of a circumplanetary, tidally heated habitable zone has emerged and was investigated by several authors (e.g. Reynolds et al. 1987; Scharf 2006; Heller & Barnes 2013).
For the first time, we apply a viscoelastic model to study tidal heat in exomoons. This work aims to give a detailed study of the circumplanetary Tidal Temperate Zone and to discuss the differences from other models.

Advantages

The tidal heat rate of a moon is usually calculated by the following expression (e.g. Reynolds et al. 1987; Meyer & Wisdom 2007):

E_tidal_dot = (21/2) (k_2/Q) G M_p^2 R_m^5 n e^2 / a^6,

where G is the gravitational constant, M_p is the mass of the planet, and R_m, n, e and a are the radius, mean motion, eccentricity and semi-major axis of the moon, respectively. Q is the tidal dissipation factor and k_2 is the second-order Love number:

k_2 = (3/2) / (1 + 19µ / (2 ρ g R_m)),

where µ is the rigidity, ρ is the density and g is the surface gravity of the satellite. This calculation method is called the fixed Q model, because Q, µ and k_2 are considered to be constants. The fixed Q model is broadly used in tidal calculations, but highly underestimates the tidal heat of the body (Meyer & Wisdom 2007). Moreover, both Q and µ are very difficult to determine, and they vary over a large range for different bodies: from a few to hundreds for rocky planets, and tens or hundreds of thousands for giants (see e.g. Goldreich & Soter 1966). In addition, these parameters are not constants, since they strongly depend on the temperature (Fischer & Spohn 1990; Moore 2003; Henning et al. 2009; Shoji & Kurita 2014). As a consequence, the tidal heat flux has a temperature dependence as well: it reaches a maximum at a critical temperature (T_c), as can be seen in Fig. 1. Between the solidus and liquidus temperatures (T_s and T_l, respectively) the material partially melts. Above the breakdown temperature (T_b) the mixture behaves as a suspension of particles. The dashed curve represents the convective heat loss of the body. Circles indicate equilibria; for example, the solid circle between T_c and T_b is a stable equilibrium point. If the temperature increases, convective cooling will be stronger than the heat flux, resulting in a cooler temperature; if the temperature decreases, the tidal heat flux will be the stronger, hence the temperature increases, returning the system to the stable point. The stable equilibrium between tidal heat and convection is not necessarily located between T_c and T_b; in fact, there are cases when the two curves have no intersection at all (see Henning et al. 2009, Fig. 6). In these cases tidal heat is not strong enough to induce convection inside the body. In contrast to the fixed Q model, viscoelastic models take into account the temperature dependence of the body and hence are more realistic.

Description

In viscoelastic models, k_2/Q is replaced by the imaginary part of the complex Love number, Im(k_2), which describes the structure and rheology of the satellite (Segatz et al. 1988):

E_tidal_dot = -(21/2) Im(k_2) R_m^5 n^5 e^2 / G.

Note that in this expression the mass of the planet and the semi-major axis of the moon are eliminated by the mean motion (n = sqrt(G M_p / a^3)). Henning et al. (2009) give the value of Im(k_2) for four different models (see Table 1 in their paper). In this work we use the Maxwell model:

-Im(k_2) = 57 η ω / ( 4µ [ 1 + (1 + 19µ/(2 ρ g R_m))^2 (η ω / µ)^2 ] ),

where η is the viscosity, ω is the orbital frequency and µ is the shear modulus of the satellite. The viscosity and the shear modulus of the body strongly depend on the temperature.
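As a concrete illustration, the Maxwell-model expression above can be wrapped in a small Python helper. This is a minimal sketch; the Io-like density and 1-day orbit in the usage example are assumed inputs, not values from the paper.

```python
# Hedged sketch: -Im(k2) for the Maxwell rheology as written above.
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def neg_im_k2(eta, mu, omega, rho, radius):
    """-Im(k2) for a homogeneous Maxwell body.

    eta    : viscosity [Pa s]
    mu     : shear modulus [Pa]
    omega  : orbital (forcing) frequency [rad/s]
    rho    : bulk density [kg/m^3]
    radius : moon radius [m]
    """
    g = 4.0 * np.pi * G * rho * radius / 3.0        # surface gravity of a homogeneous sphere
    a = 1.0 + 19.0 * mu / (2.0 * rho * g * radius)  # rigidity factor
    return 57.0 * eta * omega / (4.0 * mu * (1.0 + a**2 * (eta * omega / mu)**2))

# usage: an Io-density moon on a 1-day orbit (illustrative values)
omega = 2.0 * np.pi / 86400.0
print(neg_im_k2(eta=1e15, mu=5e10, omega=omega, rho=3528.0, radius=1.8e6))
```

The tidal heating rate then follows from the Segatz et al. expression by multiplying the result by (21/2) R_m^5 n^5 e^2 / G.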
Below T_s the shear modulus is constant, µ = 50 GPa, and the viscosity follows an exponential function:

η(T) = η_0 exp(E / (R T)),

where η_0 = 1.6·10^5 Pa s is the reference viscosity, E is the activation energy, R is the universal gas constant and T is the temperature of the material (Fischer & Spohn 1990). Between T_s and T_b the body starts to melt. The shear modulus changes as

µ(T) = 10^(µ_1/T + µ_2) Pa,

where µ_1 = 8.2·10^4 K and µ_2 = −40.6 (Fischer & Spohn 1990). The viscosity can be expressed as

η(T) = η_0 exp(E / (R T)) exp(−B φ),

where φ is the melt fraction, which increases linearly with the temperature between T_s and T_l (0 ≤ φ ≤ 1), and B is the melt fraction coefficient (10 ≤ B ≤ 40) (Moore 2003). At T_b the grains disaggregate, leading to a sudden drop in both the shear modulus and the viscosity. Above this temperature the shear modulus is set to a constant value, µ = 10^−7 Pa. The viscosity follows the Roscoe-Einstein relationship until it reaches the liquidus temperature (where φ = 1) (Moore 2003):

η(T) = 10^−7 exp(40 000 K / T) (1.35 φ − 0.35)^(−5/2) Pa s.

Above T_l the shear modulus stays at 10^−7 Pa, and the viscosity is described by (Moore 2003)

η(T) = 10^−7 exp(40 000 K / T) Pa s.

In our calculations rocky bodies are considered as satellites, and for this reason we follow the melting temperatures of Henning et al. (2009), namely T_s = 1600 K and T_l = 2000 K. We assume that disaggregation occurs at 50% melt fraction; hence the breakdown temperature is T_b = 1800 K.

Internal structure and convection

The structure of the moon in the model is the following: the body consists of an inner, homogeneous, convective part and an outer, conductive layer. If the tidal forces are weak, the induced temperature will be low, resulting in a smaller convective region and a deeper conductive layer. In the case of strong tidal forces the temperature will be higher, hence the convective zone will be larger, with a thinner conductive layer. For calculating the convective heat loss, we use the iterative method described by Henning et al. (2009). The convective heat flux can be obtained from

q_BL = k_therm (T_mantle − T_surf) / δ(T),

where k_therm is the thermal conductivity (∼2 W/(m K)), T_mantle and T_surf are the temperatures in the mantle and on the surface, respectively, and δ(T) is the thickness of the conductive layer. We use δ(T) = 30 km as a first approximation, and then for the iteration

δ(T) = (d / (2 a_2)) (Ra / Ra_c)^(−1/3)

is used, where d is the mantle thickness (∼3000 km), a_2 is the flow geometry constant (∼1), Ra_c is the critical Rayleigh number (∼1100) and Ra is the Rayleigh number, which can be expressed as

Ra = α g ρ d^4 q_BL / (η(T) κ k_therm).

Here α is the thermal expansivity (∼10^−4) and κ is the thermal diffusivity, κ = k_therm / (ρ C_p), with C_p = 1260 J/(kg K). For a detailed description see the clear explanation of Henning et al. (2009). Because of the viscosity of the material, the thickness of the boundary layer and the convection in the underlying zone change strongly with temperature; the weaker temperature dependencies of density and thermal expansivity are neglected in the calculations. The iteration of the convective heat flux continues until the difference between the last two values drops below 10^−10 W/m². Calculations of the tidal heat flux and convection are made for a fixed radius, density, eccentricity and orbital period of the moon. We assume that with time the moon reaches the equilibrium state. Henning et al. (2009) showed that planets with significant tidal heating reach equilibrium with convection in a few million years. However, changes in the eccentricity can shift or destroy stable equilibria.
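A minimal sketch of this equilibrium search is given below: the conductive-layer thickness is iterated to convergence at each trial mantle temperature, and a stable equilibrium is identified where the tidal heating curve crosses the convective cooling curve from above. The functions tidal_flux and eta_of_T are assumed to implement the expressions in the text, and the default parameters follow the nominal values quoted above, with an Io-like surface gravity as an extra assumption.

```python
# Hedged sketch: find the stable equilibrium mantle temperature where
# convective cooling balances tidal heating (both as fluxes in W/m^2).
import numpy as np

def conv_flux(T_mantle, eta_of_T, T_surf=273.0, k_therm=2.0, d=3.0e6,
              a2=1.0, Ra_c=1100.0, alpha=1e-4, rho=3528.0, Cp=1260.0,
              g=1.8, tol=1e-10, max_iter=200):
    """Iterate the boundary-layer thickness delta until the flux converges."""
    kappa = k_therm / (rho * Cp)
    delta = 30e3                                  # first approximation [m]
    q = k_therm * (T_mantle - T_surf) / delta
    for _ in range(max_iter):
        Ra = alpha * g * rho * d**4 * q / (eta_of_T(T_mantle) * kappa * k_therm)
        delta = d / (2.0 * a2) * (Ra / Ra_c) ** (-1.0 / 3.0)
        q_new = k_therm * (T_mantle - T_surf) / delta
        if abs(q_new - q) < tol:
            break
        q = q_new
    return q

def stable_equilibrium(tidal_flux, eta_of_T, T_grid):
    """Return T where heating minus cooling crosses zero from + to - (stable)."""
    diff = np.array([tidal_flux(T) - conv_flux(T, eta_of_T) for T in T_grid])
    for i in range(len(T_grid) - 1):
        if diff[i] > 0.0 > diff[i + 1]:
            return 0.5 * (T_grid[i] + T_grid[i + 1])
    return None   # no equilibrium: tidal heat too weak to drive convection
```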
After finding the stable equilibrium temperature, the tidal heat flux is calculated, from which the surface temperature can be obtained using the Stefan-Boltzmann law:

T_surf = ( E_tidal_dot / (4π R_m^2 σ) )^(1/4),

where σ is the Stefan-Boltzmann constant. This is the first time a viscoelastic model has been used to obtain the tidally induced surface temperature of exomoons.

Results

The satellite's surface temperature was calculated for different orbital periods and radii, at fixed density and eccentricity; stellar radiation and other heat sources were neglected. The orbital period and the radius of the moon vary between 2 and 20 days, and between 250 km and 6550 km, respectively. It is common to consider Earth-mass moons in extrasolar systems when speaking of habitability; however, their existence is not proven. In the Solar System the largest moon, Ganymede, has only 0.025 Earth masses. But the mass of a satellite system is proportional to the mass of its host planet, and Canup & Ward (2006) showed that this might be the case for extrasolar satellite systems as well, giving an upper limit for the mass ratio at around 10^−4. This means that 10 Jupiter-mass planets may have Earth-mass satellites. Besides accretion, large moons can also form in collisions, as in the case of the Earth's Moon. Another possibility is the capture of terrestrial-sized bodies through a close planetary encounter, as described by Williams (2013). For these reasons, we also take Earth-like moons into account.

The results can be seen in Fig. 2, where the density of the moon is that of Io and its eccentricity is set to 0.1. Different colours indicate different surface temperatures. In the white region there is no stable equilibrium between tidal heating and convective cooling; in other words, tidal heat is not strong enough to induce convection. For comparison, a few Solar System moons with densities similar to Io's are plotted. Yellow contour curves denote 0 and 100 °C, and the green area between these curves indicates where water may be liquid on the surface of the moon (atmospheric considerations were not applied). We define this territory as the Tidal Temperate Zone (TTZ). Interestingly, the location of the TTZ strongly depends on the orbital period and less on the radius of the moon. Low radii are less relevant, since smaller bodies are less capable of maintaining significant atmospheres.

The dependence on the eccentricity can be seen by comparing Fig. 2 and Fig. 3; in the latter the moon's eccentricity is 0.01. For most orbital period-radius pairs there is no solution (white area). Due to this drastic difference, Europa analogues fall out of equilibrium at smaller eccentricities, and the TTZ becomes narrower and shifts to shorter orbital periods. Note that radiogenic heat is not considered in the model; it could push the moon back into an equilibrium state and would result in a higher surface temperature. Similar calculations were made for the density of the Earth and of Titan (left and right panels of Fig. 4, respectively). The density does not have a strong influence on the tidally induced surface temperature; however, the TTZ shifts slightly to lower orbital parameters for higher densities. (The densities of Earth, Io and Titan are 5515 kg/m³, 3528 kg/m³ and 1880 kg/m³, respectively.) In the left panel of Fig. 4 an example Earth-like moon is plotted inside the TTZ.
This hypothetical body has the same mean surface temperature (288 K), radius (6370 km) and density as the Earth; hence its orbital period is 2.06 days. In the right panel a few Solar System satellites with densities similar to that of Titan are plotted. The stellar flux for moons with ambient temperatures of ∼100 K (which is similar to …

Comparison to the fixed Q model

Method

It is clear from the results that the viscoelastic model gives no solution in the case of small tidal forces: the amount of heat produced by tidal interactions is insufficient to induce convective motions inside the body, and for this reason there is no equilibrium between them. In contrast, the fixed Q model provides a solution in these cases as well. However, the viscoelastic model describes the tidal heating of the body more realistically than the fixed Q model, owing to the temperature dependence of the Q and µ parameters. How are the results of the two models related to each other? For comparing the results of the two kinds of models, we use the expression of Eq. (7) from Peters & Turner (2013) for the fixed Q calculation:

T_surf = [ (21/38) G^(5/2) ρ^2 R_m^5 e^2 M_p^(5/2) / (σ Qµ (β a_R)^(15/2)) ]^(1/4),    (14)

where T_surf is the surface temperature of the moon induced by tidal heating, G is the gravitational constant, σ is the Stefan-Boltzmann constant, R_m is the radius, ρ is the density, µ is the elastic rigidity and Q is the dissipation function of the moon, e is the eccentricity of the moon's orbit, and β is expressed with the semi-major axis (a) and the mass of the planet (M_p):

β = a / a_R,  with  a_R = 2.456 (3 M_p / (4π ρ))^(1/3),

where a_R is the Roche radius of the host planet. These equations can be used for calculating the surface temperature of a moon heated solely by tidal forces. The viscoelastic model is described in detail in Sections 2.2 and 2.3. The satellite's mean motion can be expressed from β as

n = ( G M_p / (β a_R)^3 )^(1/2),

which makes the comparison of the two models easier.

It is noticeable that in Fig. 8 the red curve (viscoelastic model) is less steep than the green curve (fixed Q model), and a larger portion of it is located inside the TTZ. It can be clearly seen that the red and green curves have a peak, which means that the probability of a suitable surface temperature has a maximum at a certain orbital period. The viscoelastic model predicts much more efficient heating than the fixed Q model, i.e. a much larger fraction of the hypothetical moons have surface temperatures between 0 and 100 °C. The ratio of the integral under the red curve to that under the green curve is 2.8, meaning that 2.8 times more exomoons are predicted in the TTZ by the viscoelastic model. For the viscoelastic model the maximum percentage appears around a 1-day orbital period, where the probability of the moon being inside the TTZ is almost 80%. For longer orbital periods this probability falls off rapidly, in contrast with the fixed Q model, which has its peak around 1.5 days and gives less than a 20% chance of satellites being in the TTZ. Despite the high probabilities achieved by the viscoelastic model at small orbital periods, the fixed Q model gives more promising results for moons with orbital periods of 2 days or more.
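The statistical comparison described above can be reproduced in outline with a simple Monte Carlo over moon parameters. In the sketch below, surf_temp stands for either model's surface-temperature routine (returning None where no equilibrium exists); the sampling ranges are illustrative assumptions, not the paper's exact distributions.

```python
# Hedged Monte Carlo sketch of the TTZ occupancy comparison. The
# surface-temperature function is assumed to implement one of the two
# models described in the text; parameter ranges are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def ttz_fraction(surf_temp, period_days, n_draws=20000,
                 t_low=273.15, t_high=373.15):
    """Fraction of random moons whose surface temperature lies in the TTZ."""
    radius = rng.uniform(250e3, 6550e3, n_draws)        # m
    ecc = rng.uniform(0.001, 0.1, n_draws)
    rho = 3528.0                                        # Io-like density
    n_hits = 0
    for R, e in zip(radius, ecc):
        T = surf_temp(R, rho, e, period_days)           # may return None
        if T is not None and t_low <= T <= t_high:
            n_hits += 1
    return n_hits / n_draws

# usage: probability curve over orbital period for one model, e.g.
# for P in np.arange(0.5, 3.6, 0.5):
#     print(P, ttz_fraction(surf_temp_visco, P))
```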
For a detailed study, the 0 and 100 °C temperature contours were plotted in the radius-eccentricity plane for a few specific orbital periods, namely P = 0.5 day (top panel), P = 1 day (middle panel) and P = 1.5 days (bottom panel) (see Fig. 9). Again, red and green colours represent the viscoelastic and the fixed Q model, respectively; between the contour curves the region of the TTZ is filled with light red and light green. The result shows that, relative to the fixed Q model, the viscoelastic model mostly favours small moons, especially at high eccentricities, but also some large moons at small eccentricities. This suggests that the viscoelastic model is less sensitive to the parameters of the moon and holds the temperature steadier than the fixed Q model. This is due to the melting of the inner material of the moon, which leads to a less elevated temperature, as discussed by Peters & Turner (2013). On the other hand, the lower temperature implies that the total irradiated flux of the moon will also be lower, making the detection of the moon more difficult.

We were also interested in 'Earth-twins' as satellites and in the probability of their 'habitability'. For this reason we made similar calculations, but the radius and density of the hypothetical moons were set close to those of the Earth: R_m = 6378 km (±5%) and ρ = 5515 kg/m³ (±5%). The radius and density values were chosen randomly from these intervals, and the eccentricity was varied as in the previous case (uniformly between 0.001 and 0.1). Altogether 200 000 cases were considered for each orbital period. The temperature limits were set to −20 and 60 °C, which are the probable limits for life on Earth. Fig. 10 shows the results of this calculation. Note that the peaks of the red solid (viscoelastic model) and green dashed (fixed Q model) curves are shifted to higher orbital periods compared with Fig. 8; this is caused partly by the changed temperature limits and partly by the much narrower radius range. The maximum probabilities are also higher, which is especially visible for the fixed Q model, reaching more than 40% at the curve's peak (in the previous case it was less than 20%). As one would expect, this suggests that larger moons are more likely to maintain warm surfaces. The ratio of the areas under the red and green curves is 2.3.

In general, it can be concluded that the viscoelastic model is not just more realistic than the fixed Q model, but also gives more promising results for exomoons, since a much larger fraction of the hypothetical satellites were found in the TTZ. In those cases where the viscoelastic model gives no solution for the equilibrium temperature, one can use the fixed Q model instead; however, the values of Q and µ are highly uncertain.

The value of Qµ

With the product of Q and µ, one can easily calculate the tidally induced surface temperature of a moon without using a complex viscoelastic model. Using Eq. 14 is a fast way to obtain T_surf, but a good approximation is needed for the Qµ value. Note that the model used here applies to rocky bodies; for icy satellites, such as Enceladus or Europa, the results may be misleading because of the more complex structure and different behaviour of icy material. From the surface temperature of the moon calculated with the viscoelastic model, the Qµ value was determined using Eq. 14 for six orbital period-eccentricity pairs.
The eccentricities are set to 0.01 and 0.1, and the orbital periods to 1, 2 and 3 days. 'N.a.' indicates that there was no solution (weak tidal forces). Scaling the Galilean satellite system Since no satellite has been discovered so far outside the Solar System, we used the where a is the semimajor axis of the moon, M p and M s are the masses of the planet and the moon, respectively. The fixed orbital periods guarantee that the satellites approximately stay in resonances. This calculation resulted in constant β values for all scale parameters. Using both the fixed Q and the viscoelastic models, the warmth of tidal heat was investigated in each case. The tidal heat induced surface temperature can be seen in For Io, in the scale = 1 (Solar System) case the fixed Q and the viscoelastic models give 60 K and 160 K, respectively. The observed surface heat flux induced by tidal heat on Io is around 2 W/m 2 , which is a lower limit (Spencer et al. 2000). In other words, tidal forces produce at least 77 K heat on the surface of Io. The fixed Q model resulted in a lower value than this limit, but note that Q and µ are very difficult to estimate. The viscoelastic model gave much higher temperature than the observation, but keep in mind, that the heat is concentrated in hotspots, and is not evenly distributed on the surface of Io. The temperature of the most warm volcano, Loki is higher than 300 K (Spencer et al. 2000). The viscoelastic model gives solution only for Io (orange curves) and Europa (light blue curves), but not for all scale values, as shown in Fig. 11. In those cases when the densities are the twice than those in the Solar System (dashed curves), the surface temperatures are just slightly higher. In fact, in the viscoelastic model, higher densities result in less tidal heat because of the imaginary part of the second order Love number. Doubling the eccentricity instead of the density (dotted curves) makes the surface temperature higher in each case. Conclusions We have used, for the first time, a viscoelastic model for calculating the surface temperature of tidally heated exomoons. The viscoelastic model gives more reliable results than the widely used fixed Q model, because it takes into account that the tidal dissipation factor (Q) and rigidity (µ) strongly depend on the temperature. Besides, these values are poorly known even for planets and satellites in the Solar System. Using the viscoelastic model for exomoons helps to get a more realistic estimation of their surface temperature, and to determine a circumplanetary region, where liquid water may exist on them. It may help future missions in selecting targets for exomoon detections. We have defined the Tidal Temperate Zone, which is the region around a planet where the surface temperature of the satellite is between 0 • C and 100 • C. No sources of heat were considered other than tidal forces. Assuming, that the planet-moon system orbits the star at a far distance, or the stellar radiation is low due to the spectral type, tidal heat can be the dominant heat source affecting the satellite. We have investigated such systems, and found that the TTZ strongly depends on the orbital period, and less on the radius of the moon. For higher densities or eccentricities of the moon, the location of the TTZ is slightly closer to the planet. Comparing this model to the traditionally used fixed Q model revealed that there are huge differences in the results. 
Generally, the viscoelastic model is less sensitive to the moon's radius than the uniform Q model, keeping the surface temperature of the body steadier. The reason is that stronger tidal forces induce a higher melt fraction, which results in a lower temperature than in the fixed Q model. The viscoelastic model demonstrates how the partial melting of a moon can act as a thermostat, tending to fix its temperature near the melting point over a wide range of physical and orbital parameters. As a consequence, the statistical volume of the TTZ is much larger in the viscoelastic case, which is favourable for life; but the lower temperature also means that the detectability of such moons in the infrared is lower. In addition, for low tidal forces there is no equilibrium with convective cooling, hence only the fixed Q model provides a solution; in these cases the challenge is to determine the values of Q and µ. For a few characteristic cases, the product of the tidal dissipation factor and the rigidity was calculated from the viscoelastic model, in order to help quick estimations of tidally heated exomoon surface temperatures. Since the viscoelastic model is more realistic because of the inner melting and the temperature dependence of the parameters, while the fixed Q model is easier to use, these Qµ values (along with the surface temperatures) are provided in Table 1. By inserting Qµ into Eq. 14, one can obtain an estimate of the tidally induced surface temperature of a moon. A connection between the quality factor (Q) and the viscoelastic parameters (viscosity and shear modulus) was also given for the Maxwell model by Remus et al. (2012).
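As a usage illustration of Eq. 14, the helper below turns an assumed Qµ product into a tidally induced surface temperature. It is a hedged sketch written with the substitution a = β a_R already applied, and the numbers in the example call are illustrative, not values from Table 1.

```python
# Hedged sketch of the fixed Q estimate (Eq. 14): tidally induced surface
# temperature from an assumed Q*mu product. SI units throughout.
G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
SIGMA = 5.670e-8     # Stefan-Boltzmann constant [W m^-2 K^-4]

def t_surf_fixed_q(Qmu, rho, R_m, e, M_p, a):
    """Surface temperature [K] of a moon heated solely by tides (Eq. 14)."""
    num = (21.0 / 38.0) * G**2.5 * rho**2 * R_m**5 * M_p**2.5 * e**2
    return (num / (SIGMA * Qmu * a**7.5)) ** 0.25

# usage with illustrative, Io-like inputs and log10(Q*mu) ~ 13
print(t_surf_fixed_q(Qmu=1e13, rho=3528.0, R_m=1.82e6, e=0.01,
                     M_p=1.9e27, a=4.2e8))
```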
The applied model ignores structure, pressure and other effects, and applies melting to the whole body. It nevertheless provides a global picture of a tidally heated moon. Even with a more detailed viscoelastic model, describing Enceladus as a three-layered body (rocky core, ocean and ice shell), Barr (2008) found that the tidal heat is ∼10 times lower than that observed by the Cassini Composite Infrared Spectrometer. Similarly, Moore (2003) concluded that the observed heat flux on Io is about an order of magnitude higher than can be explained with a multilayered, viscoelastic model. These results suggest that tidal heat can be much more significant than models predict. We thank Amy Barr
A Diffusion-Based Approach to Geminate Recombination of Heme Proteins with Small Ligands

A model of postphotodissociative monomolecular (geminate) recombination of heme proteins with small ligands (NO, O2 or CO) is presented. The non-exponential decay in time of the probability to find a heme in the unbound state is interpreted in terms of diffusion-like migration of the ligand within and between protein cavities. The temporal behavior of the probability is obtained from numerical simulation and specified by two parameters: the time \tau_{reb} of heme-ligand rebinding for a ligand localized inside the heme pocket and the time \tau_{esc} of ligand escape from the pocket. The model is applied to the analysis of available experimental data on geminate reoxygenation of human hemoglobin HbA. Our simulation is in good agreement with the measurements. The analysis shows that variation in the pH of the solution (6.0 < pH < 9.4) results in considerable changes of \tau_{reb}, from 0.36 ns (at pH = 8.5) up to 0.5 ns (at pH = 6.0), but affects the time \tau_{esc} only slightly (\tau_{esc} ~ 0.88 ns).

I. INTRODUCTION

The binding reactions between myoglobin (Mb) or hemoglobin (Hb) and small ligands (NO, O2 or CO) have been objects of extensive investigation for many decades because of the great functional importance of the heme proteins for living systems [1-3]. In these investigations, special attention is paid to the heme-ligand recombination process that follows fast photodissociative bond breaking between a ligand molecule and the Fe2+ ion located in the center of a heme (Fe-protoporphyrin IX complex). The kinetic study of the postphotodissociative recombination allows one to obtain detailed information on the protein-ligand interaction mechanism, the protein structure, the allosteric effect and the influence of the medium on the recombination efficiency (see, for example, references [4-9]). The heme is well wrapped in protein helices, which shield the iron from the solvent and hinder ligand migration through the protein matrix. On a sufficiently short time scale after dissociation (t ≤ 100 ns in the case of Hb), while the ligand has not managed to leave the protein and move significantly away from the parent heme, the recombination is a monomolecular reaction, usually designated geminate recombination (GR) [5]. Schematically, the GR can be written as [10]

A ←(k_reb)− B ⇄(k_esc) {C_1, ..., C_n},   (1)

where A is the bound heme-ligand state. The substates B and {C_1, ..., C_n} form the unbound state. Each of these substates corresponds to ligand localization in an individual cavity of the protein. The substate B corresponds to the residence of the ligand inside the heme pocket (the cavity nearest to the iron on the distal side of the heme). The rate constants k_reb and k_esc specify two competing processes: the irreversible heme-ligand rebinding for the ligand localized inside the heme pocket (that is, the transition from the substate B to the state A) and the migration of the unbound ligand between the heme pocket and other protein cavities (the transitions between B and the substates {C_1, ..., C_n}). Immediately after photodissociation the unbound ligand is in the substate B. Therefore the quantity k_esc can be associated with ligand escape from the pocket.
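As a point of contrast with the diffusion treatment developed below, the sketch that follows (Python) integrates the rate equations of scheme (1) truncated to a single cavity substate. The return rate from the cavity back to the pocket is an illustrative assumption, since the scheme as quoted here names only k_reb and k_esc. Any such few-state kinetic scheme yields a sum of a few exponentials for P(t), which is hard to reconcile with the strongly non-exponential decays observed in experiment; this is part of the motivation for treating the migration as diffusion.

```python
import numpy as np

# Scheme (1) truncated to one cavity substate C1:
#   A <-(k_reb)- B <-> C1,   P(t) = p_B(t) + p_C1(t).
# k_reb and k_esc use the fitted values quoted in Section 3 (0.40 and 0.88 ns);
# the return rate k_ret is an illustrative assumption, not a value from the text.
k_reb, k_esc, k_ret = 1 / 0.40, 1 / 0.88, 1 / 0.88   # ns^-1

K = np.array([[-(k_reb + k_esc), k_ret],
              [k_esc,            -k_ret]])
w, V = np.linalg.eig(K)                      # two negative decay rates
c = np.linalg.solve(V, [1.0, 0.0])           # ligand starts in the pocket (B)
t = np.linspace(0.0, 3.0, 7)                 # ns
P = (V @ (c[:, None] * np.exp(np.outer(w, t)))).sum(axis=0).real
print(np.round(P, 3))   # strictly biexponential decay of the unbound state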
In general, the GR is essentially determined by the specifics of the heme-ligand interaction (including the spin restriction effect and the position and orientation of the ligand with respect to the heme plane) [5,11-13] and by the effect of the residues surrounding the heme [14-19]. Important factors for the heme-ligand rebinding are also the state of the tertiary [2,20-22] and quaternary [2,23-25] structures of the protein, the conformational transitions in the protein [2,10,26-28] and the influence of the solvent [23,29-31]. As a consequence, the kinetic curve of the GR (that is, the probability P(t) to find the heme in the unbound state) is a non-exponentially decaying function of time [32-35]. After the geminate stage is completed, a portion P_s of the hemes remains in the unbound state: P_s ≤ 0.01 for NO, P_s ∼ 0.1 ÷ 0.2 for O2 and P_s ∼ 0.5 ÷ 1.0 for CO. The quantity P_s characterizes the efficiency of ligand escape from the protein to the solvent. Molecular dynamics simulations [11,36-40] show that the movement of unbound NO, O2 or CO ligands in a heme protein can be associated both with ligand trapping for a significant time in individual cavities and with rare jump-like transitions between adjacent cavities. This implies a fast establishment of equilibrium for the probability distribution of the ligand within individual cavities. The equilibration occurs on a time scale comparable to the mean time interval τ_w between collisions of the ligand with the cavity walls. At room temperature the time τ_w lies in the subpicosecond range (τ_w ∼ 0.1 ps for NO in the heme pocket of Mb [41]). The ligand redistribution between protein cavities is observed on a longer time scale, ranging from several tens of picoseconds (∼40 ps for NO in Mb [41]) up to several tens of nanoseconds (∼50 ns for CO in Hb [42]). Unfortunately, in practice detailed molecular dynamics simulation cannot be applied to the GR because of the enormous computational effort involved. In this study we apply an alternative approach based on the diffusion approximation to ligand migration in the protein. Such an approximation is valid for times t ≫ τ_w, when the deterministic nature of the ligand motion can be ignored. Here the interval τ_w can be recognized as a correlation time. The diffusion-like character of ligand migration in heme proteins can be a reason for the non-exponential temporal dependence of the probability P(t) [43-45]. For instance, a two-dimensional diffusion has been demonstrated for CO in Mb [45] to explain the power-law kinetics observed in experiment. Generally, reaction (1) can be represented in a three-dimensional diffusion approximation by the equation

∂n/∂t = ∇ · (D ∇n) − R_reb n,   (2)

with the diffusion coefficient D = D(x, y, z). The quantity n = n(x, y, z, t) is the probability density of the unbound ligand in the protein. The stepwise function R_reb = R_reb(x, y, z) specifies the heme-ligand rebinding and equals k_reb inside the heme pocket and zero otherwise. In order to solve diffusion equation (2) and follow the evolution of the GR we use a simple model proposed recently in [46]. The model reproduces the random-walk dynamics of a particle in porous media (such as glass-like matrices [47-49]) and takes into account an initial retention of the ligand inside the heme pocket (that is, in the substate B). In the absence of heme-ligand rebinding the substate B is realized at times t < τ_esc (τ_esc = 1/k_esc is the time of ligand escape from the heme pocket to other cavities).
Only on a longer time scale (t > τ_esc) does the ligand succeed in leaving the pocket and migrating over the protein cavities. Due to the diffusive nature of the migration, the time τ_esc can be specified in terms of the diffusion coefficient D. The approach is implemented with the help of a numerical simulation in which the unbound ligand is represented by a structureless particle. For simplicity, in the simulation we make some assumptions. The ligand migration is assumed to be restricted to the distal side of the heme. The ligand motion inside the heme pocket (realized on a short time scale t ≤ τ_w) is represented by an unforced displacement of the particle within a restricted hemispheric region of space. At τ_w ≪ t ≪ τ_esc the ligand trajectories are effectively mixed in configurational space, resulting in a homogeneous distribution of the ligand inside the cavities. Hence, the probability of irreversible heme-ligand rebinding is taken to be uniform over the whole heme pocket. We also take into account that on the time scale t ≫ τ_w the fast intracavity displacements of the ligand in the substates {C_1, ..., C_n} do not essentially influence the GR kinetics and can be ignored in the simulation. Therefore the ligand displacement exterior to the heme pocket is simulated as a random walk (that is, a Brownian-like motion) of the particle outside the hemispheric region. This walk is a spatially homogeneous diffusion with the diffusion coefficient D. We also neglect the structural transformations (such as a shift of the iron with respect to the porphyrin ring plane) at the conformational transition of the protein between the unliganded and liganded states. According to the model, the temporal behavior of the probability P(t) can be specified in terms of two parameters: the time τ_esc and the time τ_reb = 1/k_reb of heme-ligand rebinding. The description of the model is presented in Section 2. In order to demonstrate the usefulness of such an approach to the GR of heme proteins we apply the model to the analysis of available experimental data. We analyze the measured recombination kinetics and the efficiency of a postphotodissociative GR of human hemoglobin HbA [50,51]. These measurements were carried out at various pH values of the solution. Here we determine the times τ_reb and τ_esc as functions of pH and estimate the influence of the solution properties on the heme-oxygen rebinding, the migration of the oxygen molecule in hemoglobin and the efficiency of oxygen escape from the protein. The association of the times τ_reb and τ_esc with the time of a bimolecular recombination process for hemoglobin is analyzed. The results of the simulation and their analysis are presented in Section 3.

II. DIFFUSION-BASED MODEL OF GEMINATE RECOMBINATION

The movement of the unbound ligand is considered in a Cartesian coordinate system xyz attached rigidly to the heme group of atoms. The system origin is superposed on the iron atom located in the middle of the heme porphyrin ring. The x and y axes are aligned with the heme plane. The positive direction of the z axis corresponds to the distal side of the heme. The ligand migration in the protein is simulated as a probability redistribution of an ensemble of structureless particles over the three-dimensional hemispheric space with z > 0. As in [52], in our simulation the heme pocket is represented by a hemispheric region (designated here as a cage) of radius ρ. At the initial time instant the particle is uniformly distributed inside the cage.
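As a minimal illustration of this initialization step, the sketch below (Python; the function name and the use of rejection sampling are ours, not taken from [46]) draws positions uniformly distributed inside the hemispheric cage:

```python
import numpy as np

rng = np.random.default_rng(0)

def initial_positions(rho, n):
    """Uniform initial positions inside the hemispheric cage (|r| < rho, z > 0),
    drawn by simple rejection sampling from the bounding box."""
    out = np.empty((0, 3))
    while len(out) < n:
        cand = rng.uniform([-rho, -rho, 0.0], [rho, rho, rho], size=(2 * n, 3))
        cand = cand[(cand ** 2).sum(axis=1) < rho ** 2]   # keep points in the hemisphere
        out = np.vstack([out, cand])
    return out[:n]
```

About half of the candidate points survive the rejection step (the hemisphere fills ~52% of its bounding box), so the loop terminates quickly.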
The individual particle, exposed to a sequence of δ-shaped uncorrelated kicks, executes a random walk in space. As for a Brownian particle, each kick results in an abrupt change of the particle velocity. Between the kicks the particle is in unforced motion. On a time interval Δt_k = t_{k+1} − t_k (t_k is the time instant of action of the k-th kick) between adjacent kicks the particle is specified by the velocity v_k and the free-path length L_k (note that Δt_k = L_k/|v_k|). The radius vector r(t_{k+1}) of the particle at the time point of the (k+1)-th kick is then obtained from the iteration procedure

r(t_{k+1}) = r(t_k) + v_k Δt_k,   (3)

where the radius vector r(t_k) is given at the time instant of the k-th kick. The projections v_{j,k} (j = x, y, z) of the velocity v_k onto the coordinate axes and the length L_k are taken to be independent random quantities, new values of which are generated at each kick. The quantities v_{j,k} are obtained from the Maxwell distribution

f(v_{j,k}) = (m/2πk_B T)^{1/2} exp(−m v_{j,k}²/2k_B T).   (4)

Here m is the particle mass and T is the protein temperature. On reaching the z = 0 plane bounding the space, a new particle velocity with v_z > 0 is regenerated in accordance with distribution (4). The choice of the free-path length is dictated by the particle location in space. Within the hemispheric cage the particle displacement is unforced and the particle undergoes no kicks. The length L_k is then determined from the ballistic trajectory of the particle between the cage boundaries. In this case the length is comparable to the cage size ρ. We accept here that the mean time τ_h = ⟨Δt_{h,k}⟩, during which the particle crosses the cage, can be associated with the time interval τ_w between collisions of the ligand with the heme pocket walls: τ_h ∼ τ_w. Exterior to the cage, the particle is exposed to uncorrelated kicks. The absence of correlation between the kicks implies that the quantity L_k is distributed according to the exponential law

f(L_k) = (1/λ) exp(−L_k/λ),   (5)

where λ = ⟨L_k⟩ is the mean free-path length for the particle displacement outside the cage. The mean time τ_c between adjacent kicks and the length λ are related to the diffusion coefficient D = ⟨L_k²⟩/6τ_c by the equations

λ = 3D/⟨|v_k|⟩,   (6)

τ_c = 3D/⟨|v_k|⟩².   (7)

Thus, the spatial displacement of the particle is obtained from the iterative equation of motion (3) and depends on the random sampling of the variables v_{x,k}, v_{y,k}, v_{z,k} and L_k, the statistical distributions of which are specified by the parameters m/T, D and ρ. As mentioned above, under conditions typical for heme proteins (that is, the temperature, the ligand mass and the characteristic size of the heme pocket) the times τ_c and τ_h, which are accepted here as correlation times, are negligibly short compared to the characteristic times of the GR. The length λ is essentially small compared to the size ρ of the hemispheric cage. Hence, the temporal behavior of the probability redistribution of the ligand in the heme protein can be described in terms of the diffusion-based approach. Our model reproduces the dynamics of ligand migration over the protein cavities. Initially, the ligand is retained inside the heme pocket and the root-mean-square displacement S(t) = ⟨|r(t) − r(0)|²⟩^{1/2} of the ligand from its initial position does not exceed the characteristic size of the pocket. In a sense such a retention is analogous to the so-called cage effect observed for single atoms or small molecules in porous glass-like matrices [47-49]. The time scale on which the retention is realized is limited by the time τ_esc.
This time is the lifetime of the ligand inside the heme pocket in the absence of rebinding and thereby specifies ligand escape from the pocket. Only on a longer time scale (when the ligand succeeds in leaving the pocket and migrating over the protein) does the ligand displacement S(t) start to increase significantly. According to the model, we associate the time τ_esc with the time of particle localization in the hemispheric cage. In the simulation the particle displacement S(t) does not exceed the cage radius ρ on the short time scale t < τ_esc. At longer times the quantity S(t) increases with time. Due to the diffusive nature of the particle displacement, S(t)² grows linearly in time, with S(t)² ≈ 6Dt at t ≫ τ_esc. The relation between the time τ_esc and the diffusion coefficient D can then be determined from the requirement ρ² ∼ S(τ_esc)² = 6Dτ_esc:

τ_esc = ρ²/6D.   (8)

Fig. 1 demonstrates a typical temporal dependence of the relative particle displacement S(t)²/ρ² simulated within the framework of our model. The displacement S(t) is shown in the figure to be constant (S(t) ∼ ρ) at τ_h ≪ t ≪ τ_esc. On a longer time scale (t ≫ τ_esc) the quantity S(t) asymptotically approaches the diffusion law S(t)² ≈ 6Dt = tρ²/τ_esc. Notice that for the time scale τ_h ≪ t ≪ τ_esc the temporal behavior of the relative displacement S(t)/ρ is specified by the single parameter τ_esc. In the following, we will adjust the parameter τ_esc in the simulation. For definiteness, this adjustment will be carried out by varying the diffusion coefficient D. The particle mass m, the temperature T and the cage radius ρ take fixed values typical for the ligand and the protein. The heme-ligand rebinding is taken to be an irreversible process occurring when the ligand is localized inside the heme pocket. Therefore this process is simulated as a random 'death' of the particle within the hemispheric cage. The particle with |r(t_k)| < ρ is 'obliterated' if ξ_k ≤ Δt_{h,k}/τ_reb. Here ξ_k is a random quantity generated for each period Δt_{h,k} during which the particle crosses the cage. The quantity ξ_k is distributed uniformly in the interval [0, 1]. The 'obliterated' particle is excluded from further consideration. The probability P(t) to find the heme in the unbound state is found as the ensemble-averaged relative number of 'non-obliterated' particles at the time instant t. In contrast to the relative displacement S(t)/ρ, the temporal behavior of the probability P(t) depends not only on the diffusion properties but on the rate of heme-ligand rebinding as well. Hence, the behavior of P(t) can be specified in terms of the times τ_esc and τ_reb. In general, the probability P(t) is a monotonically decreasing function of time, which asymptotically approaches a steady value P_s at t → ∞. This value gives the portion of hemes remaining in the unbound state after the GR is completed. As in diffusion equation (2), in our model the quantity P_s is a function dependent merely on the ratio between τ_esc and τ_reb. The analysis of the simulated data shows that for a wide range of values of τ_esc and τ_reb satisfying the requirement τ_reb/τ_esc < 20 (that is, under conditions typical for the GR) the dependence is best approximated by the relation

P_s = (C_s τ_reb/τ_esc) / (1 + C_s τ_reb/τ_esc),   (9)

where the coefficient C_s is obtained from mean-square fitting. At m = 32 amu, T = 300 K and ρ = 4 Å the fitting gives the value C_s = 0.43. The inset of Fig. 1 demonstrates good agreement between approximation (9) and the simulated data.
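The following sketch (Python) assembles the pieces of the model described in this section into a runnable Monte Carlo estimate of P(t) and P_s. The parameter values follow the text (m = 32 amu, T = 300 K, ρ = 4 Å), with τ_reb and τ_esc set to values representative of the fits in Section 3. One deliberate simplification is hedged in the comments: because the physical mean free path outside the cage is tiny, the outside walk is coarse-grained into Gaussian Brownian steps reproducing the same diffusion coefficient D, rather than simulating every kick, and the finite cutoff time truncates the geminate stage, so the printed P_s is only indicative.

```python
import numpy as np

KB, AMU = 1.380649e-23, 1.66053907e-27
m, T = 32.0 * AMU, 300.0             # O2 mass and temperature (values from the text)
rho = 4.0e-10                        # cage (heme pocket) radius, m
tau_esc, tau_reb = 0.88e-9, 0.40e-9  # s; representative fitted values from Section 3
D = rho ** 2 / (6.0 * tau_esc)       # diffusion coefficient, Eq. (8)
dt_out = tau_esc / 50.0              # coarse Brownian step outside the cage
rng = np.random.default_rng(1)

def rebinding_time(t_max=3e-9):
    """Time of 'obliteration' (rebinding) of one ligand, or np.inf if it survives."""
    while True:                       # uniform start inside the hemispheric cage
        r = rng.uniform([-rho, -rho, 0.0], [rho, rho, rho])
        if r @ r < rho ** 2:
            break
    t = 0.0
    while t < t_max:
        if r @ r < rho ** 2:
            # unforced flight across the cage with a Maxwell velocity, Eq. (4)
            v = rng.normal(0.0, np.sqrt(KB * T / m), 3)
            if r[2] <= 0.0:
                v[2] = abs(v[2])      # velocity regeneration at the z = 0 plane
            u = v / np.linalg.norm(v)
            b = r @ u
            L = -b + np.sqrt(b * b - (r @ r - rho ** 2))   # chord to the cage wall
            dt = L / np.linalg.norm(v)
            if rng.uniform() < dt / tau_reb:               # random 'death' in the cage
                return t + dt
            r = r + u * L
        else:
            # coarse-grained Brownian propagator outside the cage (same D)
            dt = dt_out
            r = r + rng.normal(0.0, np.sqrt(2.0 * D * dt), 3)
            r[2] = abs(r[2])          # reflect at the heme plane
        t += dt
    return np.inf

times = np.array([rebinding_time() for _ in range(2000)])
print("P_s estimate:", np.mean(np.isinf(times)))   # portion left unbound
print("P(1 ns):", np.mean(times > 1e-9))           # one point of the decay curve
```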
III. GEMINATE RECOMBINATION OF HUMAN HEMOGLOBIN WITH OXYGEN

We use the described model to analyze available experimental data on the postphotodissociative reoxygenation of human hemoglobin HbA. The data include the measured recombination kinetics and the efficiency of oxygen escape from the protein to the solvent for the monomolecular (geminate) and bimolecular stages of the recombination reaction

Hb + O2 → HbO2.   (10)

According to the model, in the analysis of reoxygenation reaction (10) the temporal decay of the probability P(t) is interpreted as the result of two competing processes: the heme-oxygen rebinding for the oxygen molecule localized inside the heme pocket and the diffusion-like migration of the oxygen between hemoglobin cavities. Here we determine the times τ_reb and τ_esc, which specify these processes. We determine the times as functions of pH and analyze the effect of the solution properties on the processes considered. Due to the tetramer arrangement of hemoglobin (the Hb molecule consists of heme-containing α- and β-chains) the observed kinetic curve represents a reoxygenation kinetics summed over the chains. Here we make no distinction for reaction (10) between the α- and β-chains and thereby determine chain-averaged times.

A. Reoxygenation kinetics for hemoglobin

The analysis of the reoxygenation kinetics of hemoglobin is based on the estimation of the times τ_reb and τ_esc. The times are found with the help of a numerical simulation, the iterative procedure for which is described above (see Section 2). In the simulation the mass of the walking particle is set equal to the mass of the oxygen molecule. The temperature T is 300 K. The size ρ of the hemispheric cage is 4 Å, which corresponds to a time τ_h < 1 ps. The times τ_reb and τ_esc are chosen from an interval of values from 0.1 up to 5 ns. The correlation time τ_c and the mean free-path length λ are determined by relations (6) and (7). They are negligibly small in comparison with τ_reb, τ_esc or ρ. The simulated dependences of the probability P(t) are obtained from ensemble averaging over more than 10^6 particles. In the simulation the parameters τ_reb and τ_esc are adjusted so that the ensemble-averaged temporal dependence of the simulated probability P(t) is in best agreement with the measured kinetic curve. The agreement is specified by the relative root-mean-square deviation R between the simulated and experimental curves. The simulated dependence of P(t) is shown in Fig. 2 to reproduce the kinetic measurements well on the considered time scale. The minimal deviation R achieved in our calculations for each of the fixed pH values does not exceed the measurement error (R ≤ 0.01). Such agreement testifies that the non-exponential dependence of P(t) on time can be explained by a diffusion-like migration of the ligand over the protein matrix. Hence, the parameters τ_reb and τ_esc can be used for the analysis of the processes responsible for the GR. The influence of the solution properties on the migration and rebinding of the oxygen molecule in hemoglobin is assessed from the pH dependence of the obtained times τ_reb and τ_esc. Our simulation demonstrates a significant variation in the rate of heme-oxygen rebinding with pH (see Fig. 3). An increase of pH from 6.0 to 8.5 results in a shortening of the time τ_reb by a factor of 1.4 (from 0.5 down to 0.36 ns). With a further rise of pH to 9.4 the parameter τ_reb increases again, up to 0.4 ns. The minimum magnitude of τ_reb is observed at pH = 8.5.
Despite the considerable pH effect on the heme-oxygen rebinding, the variation in pH has only a slight influence on the oxygen escape from the heme pocket. The time τ_esc is shown in Fig. 3 to lie within a range of values from 0.82 to 0.92 ns and to depend weakly on pH. The pH-averaged magnitude of τ_esc is approximately 0.88 ns. Notice that this magnitude is larger than τ_reb by a factor of 2 ÷ 3. The obtained values of the times τ_reb and τ_esc are in good agreement with the experimental study of the alkaline Bohr effect (the variation of the recombination rate 1/τ_s with pH for the bimolecular stage of reaction (10)) [50,51]. The pH dependence of the time τ_reb is demonstrated in Fig. 3 to be similar to that of the time τ_s of bimolecular rebinding. Such agreement testifies that for the monomolecular GR the variation of the rebinding rate with pH can be associated with the same structural transformation as for the bimolecular stage of reaction (10). Histidine imidazoles of C-terminal sites and α-amides of N-terminal sites seem to be the amino acid residues responsible for this transformation [2,8,53]. Specifically, in the alkaline Bohr effect the interaction between the solvent and the β146His residue (a C-terminal histidine of the β-chain) is one of the most probable reasons for the heme structure modification and the rearrangement of neighboring amino acid residues [2]. Our simulation confirms that the variation in pH can result in essential structural transformations in the immediate proximity of the iron atom. The strong pH dependence of the times τ_reb and τ_s is a consequence of these transformations. Ligand penetration from the solvent into the heme pocket has been shown for O2 and NO in Hb [5] to be the process limiting the rate of bimolecular recombination (10). Therefore, the similarity between the pH dependences of τ_reb and τ_s testifies that the change in pH has only a slight effect on the oxygen migration in hemoglobin at the mono- and bimolecular stages of recombination (10). The weak pH dependence of the obtained times of oxygen escape from the pocket confirms this assumption. Such pH-invariant behavior of the oxygen migration can be interpreted as the mobility of the hemoglobin side chains (which seem to be responsible for the ligand transitions between cavities of the heme protein [38]) being independent of the pH of the solution.

B. Efficiency of oxygen escaping from hemoglobin

The obtained times τ_reb and τ_esc are then used to estimate the efficiency of oxygen escape from hemoglobin as a function of pH. The efficiency is proportional to the quantum yield of photodissociation and can be associated with the portion P_s of the hemes remaining in the unbound state after the geminate reoxygenation stage is completed [54]. In our simulation the ratio τ_reb/τ_esc falls within a range of values from 0.4 up to 0.6. This implies that the portion P_s can be determined from approximation (9). The pH dependence of the obtained quantity P_s agrees well with that of the measured quantum yield of photodissociation [50,51]. The quantity P_s is shown in Fig. 4 to be proportional to the apparent quantum yield γ over the whole investigated pH range. Notice that for the studied range of pH values the quantity C_s τ_reb/τ_esc is considerably smaller than 1 (C_s τ_reb/τ_esc ∼ 0.15 ÷ 0.25) and the time τ_esc is practically constant. Therefore the relation of the portion P_s to the time τ_reb is close to the linear law

P_s ≈ C_s τ_reb/τ_esc.   (11)
In our study (see Figs. 3 and 4) the pH dependences obtained for P_s and τ_reb are similar, which again testifies that the transport properties of the oxygen molecule in hemoglobin do not depend on pH.

C. Diffusion properties of oxygen migration in hemoglobin

The analysis of X-ray diffraction data [55,56] for oxygenated and deoxygenated species of human hemoglobin (PDB ID 1HHO and 2HHB, respectively) shows that the cage radius ρ associated with the heme pocket size ranges from 1 to 5 Å (taking into account the van der Waals radii). Hence, the diffusion coefficient D for the oxygen migration in hemoglobin can be estimated from relation (8): D = ρ²/6τ_esc ∼ (0.2 ÷ 5) · 10⁻¹¹ m²/s. This coefficient is intermediate between diffusion coefficients for small molecules in water (∼10⁻⁹ m²/s) and in solids (10⁻¹⁸ m²/s at T < 400 K) [57]. According to the diffusion law, at a time instant t_m the root-mean-square displacement ⟨|r(t_m)|²⟩^{1/2} of the ligand from the iron is approximately equal to ρ(t_m/τ_esc)^{1/2}. This implies that on the completion of the kinetic measurements (t_m = 1.5 ns [50]) the oxygen remains inside the protein and is localized in the immediate proximity of the heme pocket: ⟨|r(t_m)|²⟩^{1/2} ≈ 1.3ρ < 7 Å. This conclusion is consistent with the results of a spectroscopic investigation of the motional dynamics of CO in Hb [42].

IV. CONCLUSION

We have presented a simple model of the geminate recombination of heme proteins with small ligands. The model takes into account the dynamic properties of ligand displacement in the protein matrix. In the model the recombination is due both to the heme-ligand rebinding and to the diffusion-like migration of the ligand between protein cavities. The temporal behavior of the probability P(t) to find the heme in the unbound state is specified in terms of two parameters. They are the time τ_reb of heme-ligand rebinding for the ligand inside the heme pocket and the time τ_esc of ligand escape from the pocket. We have applied our model to analyze the postphotodissociative geminate reoxygenation of human hemoglobin at various pH values of the solution. The measured kinetic curves and the efficiency of oxygen escape from the hemoglobin are well reproduced in our simulation. This testifies that the non-exponential behavior of the probability P(t) can be explained by a diffusion-like migration of the ligand over the protein cavities. This conclusion is consistent with recent kinetic measurements [58]. We believe that the theory-experiment agreement may be considered an additional validation of the glass-like model of proteins. Our study also demonstrates that the variation in pH can result in considerable changes in the rate of heme-ligand rebinding. At the same time, the oxygen migration in hemoglobin depends only slightly on pH. We have interpreted this effect as the result of essential structural transformations in the immediate proximity of the iron atom. Certainly, this conclusion demands a more detailed and thorough examination. In any case we suppose that the pH-induced modification of the initial stage of the GR (if such modifications are observed) can be explained by a change in the rate of heme-ligand rebinding.
Disruption of Slc4a10 augments neuronal excitability and modulates synaptic short-term plasticity

Slc4a10 is a Na⁺-coupled Cl⁻-HCO₃⁻ exchanger, which is expressed in principal and inhibitory neurons as well as in choroid plexus epithelial cells of the brain. Slc4a10 knockout (KO) mice have collapsed brain ventricles and display an increased seizure threshold, while heterozygous deletions in man have been associated with idiopathic epilepsy and other neurological symptoms. To further characterize the role of Slc4a10 in network excitability, we compared input-output relations as well as short- and long-term changes of evoked field potentials in Slc4a10 KO and wildtype (WT) mice. While responses of CA1 pyramidal neurons to stimulation of Schaffer collaterals were increased in Slc4a10 KO mice, evoked field potentials did not differ between genotypes in the stratum radiatum or in the neocortical areas analyzed. Paired pulse facilitation was diminished in the hippocampus upon disruption of Slc4a10. In the neocortex paired pulse depression was increased. Though short-term plasticity is modulated via Slc4a10, long-term potentiation appears independent of Slc4a10. Our data support the view that Slc4a10 dampens neuronal excitability and thus shed light on the pathophysiology of SLC4A10-associated pathologies.

Introduction

Proper brain function depends on a well-balanced interplay between excitation and inhibition. Disturbing this balance can cause severe neurological disorders like epilepsy. GABA is the main inhibitory neurotransmitter in the central nervous system, acting mainly via GABA_A and GABA_B receptors. While the latter are metabotropic receptors that are linked to potassium channels via G-proteins, GABA_A receptors are ligand-gated anion channels that mainly conduct chloride and bicarbonate under physiological conditions. In the developing brain the Na⁺-K⁺-Cl⁻ co-transporter NKCC1 and AE3/Slc4a3 (Hentschke et al., 2006; Pfeffer et al., 2009) accumulate chloride in neurons. Thus, opening of GABA_A receptors causes a depolarizing efflux of chloride, which is deemed important for the development of the neuronal circuitry (Ben-Ari, 2002). With the onset of expression of the neuronal K⁺/Cl⁻ co-transporter KCC2, i.e., in the second postnatal week in rodents, the chloride gradient reverses (Rivera et al., 1999; Stein et al., 2004). As a consequence the opening of GABA_A receptors typically results in a hyperpolarizing influx of chloride, which is the correlate of fast synaptic inhibition. The bicarbonate gradient always results in a depolarizing efflux of bicarbonate upon opening of GABA_A receptors (Rivera et al., 2005; Farrant and Kaila, 2007; Hübner and Holthoff, 2013). In neurons with a rather hyperpolarized resting membrane potential and a low intracellular chloride concentration this current can exceed the chloride current and thus result in bicarbonate-dependent depolarization (Kaila and Voipio, 1987; Hübner and Holthoff, 2013). However, the roles of bicarbonate, and hence of the neuronal mechanisms that control intracellular bicarbonate levels, are often neglected. In neurons, bicarbonate transport is mainly mediated by members of the SLC4A family of proteins. While the Na⁺-independent anion exchanger SLC4A3 lowers the intraneuronal bicarbonate concentration, the Na⁺-dependent anion exchangers SLC4A8 (NDCBE) and SLC4A10 (NCBE) use the sodium gradient to accumulate bicarbonate in exchange for chloride (Chesler, 2003; Romero et al., 2013).
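The competing contributions of chloride and bicarbonate to the GABA_A reversal potential sketched above can be made concrete with the Goldman-Hodgkin-Katz voltage equation. In the sketch below (Python), the permeability ratio and all ion concentrations are generic textbook-style assumptions rather than values measured in this study:

```python
import math

RT_F = 26.7        # mV, RT/F near 37 C
P_RATIO = 0.3      # assumed HCO3-/Cl- permeability ratio of the GABA_A channel
CL_IN, CL_OUT = 7.0, 130.0       # mM, assumed mature-neuron chloride levels
HCO3_IN, HCO3_OUT = 16.0, 26.0   # mM, assumed bicarbonate levels

# GHK voltage equation for a channel permeable to two anions; for anions the
# intracellular concentrations enter the numerator.
e_gaba = RT_F * math.log((CL_IN + P_RATIO * HCO3_IN) /
                         (CL_OUT + P_RATIO * HCO3_OUT))
e_cl = RT_F * math.log(CL_IN / CL_OUT)
print(f"E_Cl = {e_cl:.1f} mV, E_GABA = {e_gaba:.1f} mV")
```

With these numbers the bicarbonate term shifts E_GABA roughly 10-12 mV positive of the pure chloride reversal potential, which is why transporters such as Slc4a10 that set intracellular Cl⁻ and HCO₃⁻ levels can tune the strength, and even the sign, of GABAergic responses.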
The rise in the intracellular bicarbonate concentration may augment the depolarizing efflux of bicarbonate upon activation of GABA_A receptors; however, both transporters also extrude chloride and thereby increase the gradient for a hyperpolarizing chloride current. Moreover, the transport of bicarbonate is inseparably linked to changes in pH, with consequences for both neuronal excitability and synaptic transmission (Chesler and Kaila, 1992; Sinning and Hübner, 2013). Thus, it is quite difficult to predict the consequences of the disruption of either SLC4A8 or SLC4A10 for neuronal excitability. Although in mice both transporters are broadly expressed within the brain (Damkier et al., 2007; Chen et al., 2008; Sinning et al., 2011), there are some notable differences: while Slc4a8 is restricted to excitatory principal neurons, Slc4a10 also localizes to inhibitory neurons (Jacobs et al., 2008). Slc4a8 is enriched in presynaptic nerve terminals (Sinning et al., 2011; Burette et al., 2012) and supports glutamate release in a pH-dependent manner. As a consequence, Slc4a8-deficient mice display an increased seizure threshold (Sinning et al., 2011). Slc4a10-knockout mice also have an increased seizure threshold; however, the neurological phenotype is more complex and includes visual impairment (Hilgen et al., 2012) and collapsed brain ventricles (Jacobs et al., 2008). The latter is most likely due to compromised production of the cerebrospinal fluid, because Slc4a10 is prominently expressed in choroid plexus epithelial cells (Praetorius et al., 2004). Surprisingly, different neurological disorders including idiopathic epilepsy have been associated with heterozygous deletions of large genomic regions spanning the human SLC4A10 (Gurnett et al., 2008; Krepischi et al., 2010; Belengeanu et al., 2014). The aim of the present study was to better characterize the role of Slc4a10 in neuronal excitability and in the plasticity of synaptic connections in different brain regions. Extracellular field recordings from acute brain slice preparations revealed an increase of somatic field responses and a lower paired pulse ratio in the hippocampal CA1 region of Slc4a10 KO mice, while long-term potentiation (LTP) in response to tetanic stimulation was not changed. In the visual and auditory cortex, synaptic short-term plasticity was modulated, but amplitudes of evoked field responses were unchanged. No genotype-dependent differences in LTP induced by tetanic stimulation were noted in the cortex. Taken together, these data suggest that Slc4a10 plays an important, so far unknown role as a modulator of synaptic short-term plasticity in different neocortical areas. Slc4a10 dampens the excitability of CA1 pyramidal neurons and may thus act as a regulator of the excitation/inhibition balance in the brain.

Animals

All experiments were approved by the responsible local institutions and complied with the regulations of the National Institutes of Health and those of the Society for Neuroscience (Washington, DC, USA). Constitutive knockout (KO) mice were generated as described earlier (Jacobs et al., 2008). In brief, deletion of exon 12 of the Slc4a10 gene, which encodes the first of the predicted transmembrane spans of Slc4a10, leads to a frameshift and a premature stop codon in exon 13. Total KO and Slc4a10 wild-type (WT) mice were generated from heterozygous matings in a pure C57/Bl6 background. Mice were group-housed on a 12 h light-dark cycle and fed with food and water ad libitum.
For all experiments we used littermates with a 50/50 gender ratio from heterozygous matings.

Cortical and Hippocampal Field Potential Recordings: Paired Pulse Paradigm

Evoked field potentials were investigated in coronal brain sections by placing bipolar stimulating electrodes with a tip separation of 100 µm (SNE-200X, Science Products, Germany) onto layer VI of the cortex or onto the Schaffer collaterals of the hippocampal CA3 area. Upon stimulation (pulse duration 50 µs), field potentials were recorded using glass microelectrodes (impedance 2-5 MΩ, filled with aCSF) inserted into cortical layer II/III. In the hippocampus, evoked field potentials were recorded simultaneously from the stratum radiatum and the stratum pyramidale of the CA1 area as described previously (Sinning et al., 2011). For all experiments the stimulus intensity was gradually increased until responses saturated (0-70 V) and the half-maximal stimulus intensity was determined (inter-stimulus interval 30 s). After determination of the half-maximal stimulus intensity, paired-pulse stimuli were applied with inter-stimulus intervals of 15, 20, 30, 50, 80, 120, 180, 280, 430, 650 and 1000 ms. Field potential recordings were collected with an extracellular amplifier (EXT-02, NPI, Germany), low-pass filtered at 4 kHz and digitally stored with a sampling frequency of 10 kHz. Analysis of field potential recordings was performed using the software ''Signal'' (Cambridge Electronic Design, UK). Absolute amplitudes and slopes were analyzed for population spikes (PS) and population synaptic responses (fEPSP), respectively. To assess fEPSP-PS coupling, slopes of fEPSPs recorded in the stratum radiatum and the amplitudes of the simultaneously recorded respective PS in the stratum pyramidale were correlated. For the comparison between genotypes, mean PS amplitudes within fEPSP slope bins of 0.5 mV/ms were calculated.

Cortical and Hippocampal Field Potential Recordings: LTP

For hippocampal LTP recordings, Schaffer collaterals were stimulated in the CA3 region of the hippocampus by a bipolar stimulation electrode (PI2ST30.1A3, tip separation 75 µm, Science Products, Germany). Recordings were performed from the stratum radiatum and the stratum pyramidale of the hippocampal CA1 region, and fEPSP amplitudes and PS slopes were analyzed, respectively. For cortical LTP, stimulation electrodes were placed in layer VI of the cortex and recordings were performed in cortical layer II/III. After determination of the half-maximal stimulus intensity, a stable baseline of responses was recorded for 20 min with a stimulation frequency of 0.05 Hz. LTP was induced by repeated high-frequency stimulation with half-maximal stimulus intensity in the hippocampus (2 × 100 pulses at 100 Hz, inter-stimulus interval 1 s) and in the cortex (5 × 100 pulses at 100 Hz). Evoked responses were subsequently recorded for 60 min with a stimulation frequency of 0.05 Hz. Slopes of hippocampal fEPSPs and PS and cortical fEPSP amplitudes were normalized to their means during baseline recording.

Statistical Analysis

Data are presented as mean ± SEM. Statistical analysis of two experimental groups was performed with the parametric two-tailed Student's t-test. In experiments that included repeated measurements, differences between groups were tested by repeated-measures ANOVA. Where applicable, subsequent Bonferroni post hoc tests were applied. Significance was considered at p values <0.05.
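For illustration, the core quantities extracted from these recordings (the initial fEPSP slope and the paired pulse ratio, plus the baseline normalization for LTP) can be computed from digitized traces as in the sketch below (Python with NumPy). The analysis in this study was actually performed in the commercial 'Signal' software; the function names and the slope-fitting window here are hypothetical:

```python
import numpy as np

FS = 10_000  # Hz, sampling rate used for digitization (see above)

def fepsp_slope(trace_mv, t0_ms, t1_ms, fs=FS):
    """Initial slope (mV/ms) of a field response, from a straight-line fit
    over a window [t0_ms, t1_ms] on the rising phase."""
    i0, i1 = int(t0_ms * fs / 1000), int(t1_ms * fs / 1000)
    t_ms = np.arange(i0, i1) * 1000.0 / fs
    return np.polyfit(t_ms, trace_mv[i0:i1], 1)[0]

def paired_pulse_ratio(resp1, resp2, t0_ms, t1_ms, fs=FS):
    """Slope ratio fEPSP2/fEPSP1 for a paired-pulse trial (stratum radiatum);
    for population spikes the analogous ratio would use peak amplitudes."""
    return fepsp_slope(resp2, t0_ms, t1_ms, fs) / fepsp_slope(resp1, t0_ms, t1_ms, fs)

def normalized_ltp(responses, n_baseline):
    """Responses expressed as % of the mean baseline response, as in the LTP analysis above."""
    responses = np.asarray(responses, dtype=float)
    return 100.0 * responses / responses[:n_baseline].mean()
```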
Disruption of Slc4a10 Increases Somatic Field Potentials in the Hippocampus

To assess whether the disruption of Slc4a10 affects network excitability, we recorded evoked field potentials in different regions of acute brain slices of Slc4a10 KO and WT mice and analyzed input-output relationships. Firstly, evoked extracellular field responses of CA1 pyramidal neurons to a single stimulation of the Schaffer collaterals were analyzed. Whereas slopes of fEPSPs recorded in the stratum radiatum did not differ between genotypes (Figures 1A,C; repeated-measures ANOVA, F = 0.47; p = 0.50), PS amplitudes recorded in the stratum pyramidale were increased in Slc4a10 KO mice (Figures 1B,D; repeated-measures ANOVA, F = 4.16; p = 0.04). Half-maximal stimulation intensities did not differ between genotypes (KO 39.4 ± 6.8 V; WT 32.8 ± 5.1 V; n = 27/22; Student's t-test p = 0.50). Next, we correlated the slopes of evoked fEPSPs with the respective population spike amplitudes. This analysis revealed that the larger population spike amplitude in KO mice manifests as a more efficient coupling between the synaptically driven fEPSP slope and the action-potential-derived PS amplitude in the hippocampus (Figure 1E; repeated-measures ANOVA, F = 8.52; p < 0.0001). These results suggest an increased excitability of CA1 pyramidal neurons and a positive shift in fEPSP/PS coupling efficiency in Slc4a10 KO mice.

Slc4a10 Modulates Synaptic Short Term Plasticity in the Hippocampus

For the analysis of short-term plasticity, we applied paired stimuli with varying inter-stimulus intervals (15-1000 ms) to the Schaffer collaterals and compared paired pulse ratios between genotypes. In recordings from the stratum radiatum of slices from Slc4a10 KO mouse brains we did not observe significant changes in the paired pulse ratios of the slopes of fEPSP2 and fEPSP1 (Figures 1F,G; repeated-measures ANOVA, F = 1.38; p = 0.25). In agreement with the unaltered input-output relationship, there was also no change in the slope of the response to the first half-maximal stimulus (Figure 1H; KO: 1.25 ± 0.15 mV; WT: 1.00 ± 0.12 mV; n = 18/17; Student's t-test p = 0.21). In the stratum pyramidale, however, the PS amplitude at half-maximal stimulus intensity was increased (Figure 1K; KO: 5.75 ± 0.65 mV; WT: 3.39 ± 0.30 mV; n = 18/22; Student's t-test p = 0.001), while the paired pulse ratio was decreased (Figures 1I,J; repeated-measures ANOVA, F = 5.12; p = 0.03) upon disruption of Slc4a10. The Bonferroni post hoc analysis revealed that the genotype-dependent difference in paired pulse ratio applies only to inter-stimulus intervals of 80 and 120 ms (80 ms: KO 1.60 ± 0.09, WT 2.08 ± 0.11; n = 18/22; Bonferroni post hoc test p < 0.05; 120 ms: KO 1.47 ± 0.07, WT 1.95 ± 0.11; n = 18/22; Bonferroni post hoc test p < 0.05). Thus, paired Schaffer collateral stimulations showed that the increase in somatic excitability of CA1 hippocampal neurons is accompanied by a decreased paired pulse ratio in the stratum pyramidale.

Slc4a10 Deletion does not Affect Long-Term Potentiation in the Hippocampus

We next assessed whether disruption of Slc4a10 also affects hippocampal long-term plasticity. For this purpose we applied tetanic stimulation to the Schaffer collaterals and analyzed evoked postsynaptic potentials of CA1 pyramidal neurons. We compared early (averaged change from baseline over 0-5 min) and late (averaged change from baseline over 55-60 min) potentiation of postsynaptic excitability after two trains of 100 pulses at 100 Hz.
Normalized field responses in the stratum radiatum were increased after high-frequency stimulation in both genotypes, with no difference between them (Figures 2A,C; early potentiation: KO 193.0 ± 7.2%, WT 178.6 ± 19.0%, n = 11/11, Student's t-test p = 0.81; late potentiation: KO 149.0 ± 5.8%, WT 140.6 ± 12.7%, n = 11/11, Student's t-test p = 0.53). Likewise, LTP recorded from the stratum pyramidale did not differ between genotypes in either the early or the late phase (Figures 2B,D).

Slc4a10 is Expressed in Cortical GABAergic Synapses and Modulates Synaptic Short Term Plasticity in the Cortex

Slc4a10 is abundantly expressed in the somatodendritic compartment of hippocampal as well as cortical principal neurons (Jacobs et al., 2008; Song et al., 2014). To specify whether Slc4a10 localizes to synapses we performed double immunostainings of WT brain slices for Slc4a10 and different synaptic markers. In the visual cortex we found substantial co-localization of Slc4a10 with VGAT, a marker of GABAergic presynapses (Figure 3A), and with the GABA_A receptor subunit α5, a marker of GABAergic postsynapses (Figure 3B). To assess the role of Slc4a10 in excitability and plasticity of cortical neurons, we recorded evoked field responses of layer 2/3 neurons in response to stimulation of cortical layer 6 in both the visual and the auditory cortex. No differences were observed in field potential amplitudes in response to stimulation with increasing stimulus intensity (Figures 4A,C), whereas paired pulse ratios were reduced in Slc4a10 KO slices, most prominently at inter-stimulus intervals of 15 and 120 ms (p = 0.015; n = 40/50). These results suggest that Slc4a10 co-localizes with pre- and postsynaptic markers of GABAergic synapses in the cortex and modulates synaptic short-term but not long-term plasticity in the cortex.

Slc4a10 does not Affect Long-Term Potentiation in the Cortex

Finally, we addressed the functional role of Slc4a10 in long-term plasticity in the cortex and compared absolute amplitudes of synaptic population responses evoked in cortical layers 2/3 upon tetanic stimulation within cortical layer 6 of WT and KO slices. Normalized field responses were increased upon high-frequency stimulation in both groups (Figure 4F), but comparison of potentiation in the early and late phases of LTP did not reveal differences between genotypes (Figures 4F,G; early potentiation: KO 135.4 ± 5.5%, WT 139.5 ± 7.9%, n = 6/5, Student's t-test p = 0.84; late potentiation: KO 123.1 ± 7.9%, WT 124.5 ± 4.9%, n = 6/5, Student's t-test p = 0.89). Thus, these results suggest that in the cortex, too, Slc4a10 has no important functional role in either the induction or the maintenance of LTP.

Discussion

Here, we show that disruption of Slc4a10 increases the excitability of CA1 pyramidal neurons in the somatic but not the dendritic compartment. Paired pulse facilitation of PS was decreased in the stratum pyramidale, while it was not changed in the stratum radiatum. Short-term plasticity was also altered in different cortical areas, while amplitudes of evoked field potentials did not differ between genotypes. Hippocampal and cortical LTP were not changed in Slc4a10 KO mice.

Sub-Regional Differences in Slc4a10-Dependent Alterations of Single Evoked Field Potentials in the Hippocampus

Field potentials recorded in different compartments of the hippocampal CA1 region have different electrophysiological origins: while there is a graded dendritic potential in the stratum radiatum, the action-potential-based population spike recorded from the stratum pyramidale originates at the axon hillock.
Increased amplitudes of field potentials in the stratum pyramidale of Slc4a10 KO mice indicate that the expression of Slc4a10 decreases the likelihood of action potential generation for a defined excitatory postsynaptic potential, a phenomenon commonly described as EPSP-spike coupling. Such a potentiation of the CA1 response, which is also supported by a positive shift in fEPSP/PS coupling efficiency in KO slices, can be caused either by a change in the intrinsic excitability of pyramidal neurons or by impaired GABAergic inhibition (Daoudal et al., 2002; Staff and Spruston, 2003). While there is considerable GABAergic input to the stratum pyramidale, it is sparse in the distal stratum radiatum. Consequently, changes in GABAergic inhibition are more likely to affect somatic excitability (Megías et al., 2001). Because of the strong expression of Slc4a10 in hippocampal interneurons (Jacobs et al., 2008; Song et al., 2014), the changed PS, the unaltered fEPSP slopes in the stratum radiatum and the positive shift in fEPSP/PS coupling efficiency may be indicative of compromised GABAergic inhibition upon disruption of Slc4a10.

Slc4a10 Affects Short-Term but not Long-Term Plasticity at Hippocampal Synapses

Multiple forms of short-term plasticity, including facilitation and depression, co-occur at synapses (Raimondo et al., 2012). Nevertheless, short-term plasticity of hippocampal synaptic connections is dominated by facilitation. Facilitation is the consequence of residual presynaptic calcium after the conditioning pulse, which transiently increases transmitter release probability (Katz and Miledi, 1968; Zucker and Regehr, 2002). At short intervals (<50 ms) the response to the second stimulus is limited via GABA_A-dependent feed-forward inhibition, while at intervals between 100 and 125 ms presynaptic GABA_B autoreceptor activation dominates (Davies et al., 1990; Steffensen and Henriksen, 1991). The decrease in paired-pulse facilitation at 80 and 120 ms inter-stimulus intervals is thus compatible with a modulation of GABA_B receptor-mediated inhibition in the hippocampus of Slc4a10 KO mice. In contrast to the hippocampus, short-term plasticity at neocortical synapses is dominated by depression (Deisz and Prince, 1989; Markram and Tsodyks, 1996), which is of multimodal origin (Zucker and Regehr, 2002). Besides classical depletion of vesicle pools, a reduced paired pulse ratio in the cortex is mainly attributed to activation of GABA_B receptors (Takesian et al., 2010). The reduction of paired pulse ratios in the cortex of Slc4a10 KO mice at either 15 or 120 ms is consistent with a modulation of both GABA_A and GABA_B receptor-mediated inhibition (Davies et al., 1990; Wehr and Zador, 2005). [Figure 4 caption fragment: the reduction was most obvious at inter-stimulus intervals of 15 and 120 ms (p = 0.015; n = 40/50). (E) No difference was observed for the mean first fEPSP amplitude upon half-maximal stimulation (p > 0.05; n = 40/50). (F) Exemplary traces of fEPSPs recorded in cortical layers 2/3 of WT and KO slices before and after columnar high-frequency stimulation. (G) LTP recordings from WT and KO slices did not reveal genotype-dependent differences in the potentiation of cortical fEPSP amplitudes after high-frequency stimulation. *p < 0.05.] Based on the alterations in short-term plasticity upon disruption of Slc4a10, we expected enhanced LTP in Slc4a10 KO slices.
However, Slc4a10 deletion did not alter the level of LTP in the hippocampus, whether recorded in the stratum radiatum or the stratum pyramidale, nor in the cortex. Long-lasting changes in postsynaptic excitability such as LTP in principal CA1 hippocampal neurons require AMPA-receptor-mediated activation of postsynaptic NMDA receptors (Bliss and Collingridge, 1993) and structural changes at synapses (Engert and Bonhoeffer, 1999; Malenka and Bear, 2004). GABAergic inhibition has a profound effect on the depolarization of the postsynaptic neuron, and hence on NMDA-receptor activation, in response to tetanic stimulation (Wigström and Gustafsson, 1983; Lu et al., 2000). Nevertheless, the influence of GABAergic inhibition on LTP of excitatory responses appears to be stimulation-dependent (Chapman et al., 1998), and LTP can occur in the absence of both inhibitory and excitatory GABAergic signaling (Debray et al., 1997). Under conditions of high-frequency stimulation, GABA_B autoreceptor-mediated suppression of GABA release is known to promote the induction of LTP (Davies et al., 1991). However, these effects may be blunted in Slc4a10 KO mice.

Outlook

While K⁺/Cl⁻ co-transporters have been extensively studied for their role in regulating neuronal excitability under physiological and pathophysiological conditions (Blaesse et al., 2009), the role of Slc4 bicarbonate transporters in the regulation of neuronal excitability and synaptic activity has mostly been neglected. In light of the evidence that bicarbonate transporters of the SLC4A family, including SLC4A10, are involved in seizure disorders (Gurnett et al., 2008; Krepischi et al., 2010; Belengeanu et al., 2014), a better understanding of their role in synaptic transmission is desirable to gain a more comprehensive view of neuronal excitability and the pathophysiology of epilepsy (Chesler, 2003; Leniger et al., 2004).
Validation of the multivariable In-hospital Mortality for PulmonAry embolism using Claims daTa (IMPACT) prediction rule within an all-payer inpatient administrative claims database

Objective: To validate the In-hospital Mortality for PulmonAry embolism using Claims daTa (IMPACT) prediction rule in a database consisting only of inpatient claims.
Design: Retrospective claims database analysis.
Setting: The 2012 Healthcare Cost and Utilization Project National Inpatient Sample.
Participants: Pulmonary embolism (PE) admissions were identified by an International Classification of Diseases, ninth edition (ICD-9) code either in the primary position or in a secondary position when accompanied by a primary code for a PE complication. The multivariable IMPACT rule, which includes age and 11 comorbidities, was used to estimate patients' probability of in-hospital mortality and classify them as low or higher risk (≤1.5% deemed low risk).
Primary and secondary outcome measures: The rule's sensitivity, specificity, positive and negative predictive values (PPV and NPV) and area under the receiver operating characteristic curve statistic for predicting in-hospital mortality, with accompanying 95% CIs.
Results: A total of 34 108 admissions for PE were included, with a 3.4% in-hospital case-fatality rate. IMPACT classified 11 025 (32.3%) patients as low risk, and low-risk patients had lower in-hospital mortality (OR, 0.17, 95% CI 0.13 to 0.21), shorter length of stay (−1.2 days, p<0.001) and lower total treatment costs (−$3074, p<0.001) than patients classified as higher risk. IMPACT had a sensitivity of 92.4%, 95% CI 90.7 to 93.8 and a specificity of 33.2%, 95% CI 32.7 to 33.7 for classifying mortality risk. It had a high NPV (>99%), a low PPV (4.6%) and an AUC of 0.74, 95% CI 0.73 to 0.76.
Conclusions: The IMPACT rule appeared valid when used in this all-payer, inpatient-only administrative claims database. Its high sensitivity and NPV suggest the probability of in-hospital death in those classified as low risk by IMPACT was minimal.

The incidence of pulmonary embolism (PE) in the USA has increased substantially over the past decade, with incidence estimates surpassing 112 PEs per 100 000 Americans. 1 This increased PE incidence has been attributed to improved diagnostic modalities and is associated with a decreased overall case-fatality rate. Some have used these data to suggest that there is a substantial fraction of patients with PE who could potentially be discharged directly from the emergency department, observational unit or hospital following an abbreviated stay. [1-3] However, to do so would require a method for estimating the risk of complications in patients with PE, in particular early mortality.

Strengths and limitations of this study
▪ Many of the 11 comorbidities of the In-hospital Mortality for PulmonAry embolism using Claims daTa (IMPACT) rule were coded for within the claims data using the validated Agency for Healthcare Research and Quality 29-comorbidity software/schema.
▪ Owing to the lack of out-of-hospital mortality data in the National Inpatient Sample (NIS), we could not evaluate the longer term (30-day) mortality of these patients.
▪ As with all claims databases, the NIS may contain inaccuracies or omissions in coded diagnoses/procedures, leading to the potential for misclassification bias.
▪ The 1.5% cut-point for defining low risk for in-hospital mortality can be considered arbitrary, but was chosen (in the original derivation study) on the basis of a review of area under the receiver operating characteristic curve analysis and previous clinical prediction rules.

Numerous clinical prediction rules for prospectively estimating short-term mortality of patients with PE have been developed. 4 The PE Severity Index (PESI), 5 simplified PESI (sPESI) 6 and Hestia 7 scores are among the most sensitive for classifying early mortality risk, and suggest that at least one-third of all patients with PE could be treated at home or following an abbreviated admission. 4 A common theme of these prediction rules is their use of vital signs and laboratory values in addition to comorbidity status. 4 For this reason, these rules cannot be implemented in most administrative claims databases. In the current era of cost-conscious healthcare, there is a growing need for a benchmarking rule that payers and hospitals can use to assess whether they are providing optimal and efficient acute care for patients presenting with PE. Coleman et al 8 derived such a multivariable benchmarking rule for in-hospital PE mortality using a large US commercial claims database. This prediction rule, dubbed the In-hospital Mortality for PulmonAry embolism using Claims daTa (IMPACT) rule, consists of 11 comorbidities identified using inpatient or outpatient claims data during the 12 months prior to the index PE event (plus age as a continuous variable) and was demonstrated to have a sensitivity and specificity similar to PESI and sPESI. 4 However, since many hospital-specific and commercial claims databases contain only data from inpatient admissions, 9 they hold insufficient claims data to identify the relevant comorbidities needed to populate the aforementioned rule. The aim of this study was to determine whether the validity of the IMPACT prediction rule developed in a commercial claims database remained acceptable when utilised in an inpatient-only claims database.

METHODS

We utilised the 2012 Agency for Healthcare Research and Quality (AHRQ) Healthcare Cost and Utilization Project National Inpatient Sample (NIS) for this study. 10 The NIS contains data on hospital inpatient stays and covers all patients, including those with Medicare, Medicaid, private insurance and the uninsured. The 2012 inpatient core file contained data on 7 296 968 hospitalisations occurring between 1 January 2012 and 31 December 2012 and was drawn from 4378 hospitals within 44 states. Since only analysis of de-identified data was performed, our study was exempt from institutional review board oversight.

Patients

The IMPACT prediction rule (a claims-based in-hospital mortality logistic regression prediction rule initially derived in a large US MarketScan commercial and Medicare claims database) was then evaluated in an all-payer, inpatient-claims-only database: 8

predicted probability of in-hospital mortality = e^x / (1 + e^x),

where x = −5.833 + (0.026 × age) + (0.402 × myocardial infarction, MI) + (0.368 × chronic lung disease) + (0.464 × stroke) + (0.638 × prior major bleeding) + (0.298 × atrial fibrillation) + (1.061 × cognitive impairment) + (0.554 × heart failure) + (0.364 × renal failure) + (0.484 × liver disease) + (0.523 × coagulopathy) + (1.068 × cancer).
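A direct transcription of this rule into code is straightforward. In the sketch below (Python; the function and key names are ours), the logistic link follows from the rule being a logistic regression, and the 1.5% threshold is the low-risk cut-point defined below:

```python
import math

# Coefficients of the IMPACT rule as stated above; comorbidity flags are 0/1
# and must be derived from claims coding as described in the Methods.
COEF = {
    "mi": 0.402, "chronic_lung_disease": 0.368, "stroke": 0.464,
    "prior_major_bleeding": 0.638, "atrial_fibrillation": 0.298,
    "cognitive_impairment": 1.061, "heart_failure": 0.554,
    "renal_failure": 0.364, "liver_disease": 0.484,
    "coagulopathy": 0.523, "cancer": 1.068,
}

def impact_mortality_risk(age, comorbidities):
    """Predicted in-hospital mortality probability from the logistic rule."""
    x = -5.833 + 0.026 * age + sum(COEF[k] for k in comorbidities)
    return 1.0 / (1.0 + math.exp(-x))

def is_low_risk(age, comorbidities, threshold=0.015):
    """Classify as low risk when predicted mortality is <= 1.5%."""
    return impact_mortality_risk(age, comorbidities) <= threshold

# e.g. a 45-year-old with chronic lung disease only:
print(impact_mortality_risk(45, ["chronic_lung_disease"]))  # ~0.013 -> low risk
```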
The 11 comorbidities in the above equation, which were originally calculated using inpatient and outpatient claims data occurring anytime within 12 months before an index PE event, were calculated only on the basis of the maximum of 25 ICD-9-CM diagnosis codes and procedural codes reported for each discharge in the NIS. When possible, the presence or absence of comorbidities was determined using AHRQ's 29-comorbidity index coding software. 10 11 A key aspect of the AHRQ 29-comorbidity coding is the use of a diagnosis-related group (DRG) screen in addition to the traditional ICD-9-CM coding. This DRG screen allows comorbidities to be considered as coexisting medical conditions not directly related to the principal diagnosis or the main reason for admission (likely existing prior to the index hospital stay). Since the comorbidities of prior major bleeding, cognitive dysfunction, stroke, MI and atrial fibrillation are not part of the AHRQ 29-comorbidity schema, we coded these variables using ICD-9-CM diagnosis and procedural codes and implemented a similar DRG screen methodology (see online supplementary appendix 1).
We performed a calibration analysis 11 by plotting the observed outcome (in-hospital mortality) by decile of the predictions of the IMPACT multivariable prediction rule. The calibration plot was characterised by an intercept, which indicates the extent to which predictions are systematically too low or too high ('calibration-in-the-large') (a value of 0 is ideal), and a calibration slope, which would be equal to 1.0 in the case of a perfect model. Patients were classified as being low risk for in-hospital mortality if their predicted in-hospital mortality risk using the above equation was ≤1.5% (a threshold defined in the original derivation study on the basis of the area under the receiver operating characteristic (ROC) curve (AUC) analysis and a review of prior clinical PE in-hospital mortality rules). 4 8
To quantify the accuracy of IMPACT for predicting in-hospital mortality in patients with low-risk and higher-risk PE, we calculated sensitivity (the percentage of patients at high risk for in-hospital mortality who are correctly identified as being high risk, as evidenced by in-hospital death occurring), specificity (the percentage of patients at low risk of in-hospital mortality who are correctly identified as being low risk, as evidenced by survival to discharge), positive predictive value (PPV; the probability that, in the case of being classified as high risk for in-hospital mortality, the patient dies prior to discharge) and negative predictive value (NPV; the probability that, in the case of being classified as low risk for in-hospital death, the patient survives to discharge), along with 95% CIs. The AUC was calculated to assess the rule's discriminative power to correctly predict inpatient mortality. We defined an abbreviated hospital stay as ≤1, ≤2 or ≤3 days based on values utilised in previous studies, 12 and determined the proportion of patients in this category. In order to estimate the potential cost savings from an early discharge, we calculated the difference in total hospital costs between low-risk patients having and not having an abbreviated hospital stay. Total hospital costs were estimated from total hospital charges reported in the NIS using supplied cost-to-charge ratios. 10 All data management and statistical analyses were performed using SAS V.9.2 (SAS Institute Inc, Cary, North Carolina, USA) or IBM SPSS Statistics V.22.0 (IBM Corp, Armonk, New York, USA).
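The four accuracy measures defined above follow directly from the 2×2 table of risk classification against observed death. A brief Python sketch (ours) illustrates the computation, using approximate cell counts back-calculated from the percentages reported below; the CIs and the AUC, which require the patient-level predicted probabilities, are omitted.

```python
# Minimal sketch of the accuracy measures defined above. The 2x2 counts are
# approximate values back-calculated from the reported totals (34 108 admissions,
# 3.4% in-hospital mortality, 11 025 classified low risk); illustrative only.
def accuracy_measures(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # deaths correctly classified higher risk
        "specificity": tn / (tn + fp),  # survivors correctly classified low risk
        "ppv": tp / (tp + fp),          # classified higher risk who actually died
        "npv": tn / (tn + fn),          # classified low risk who survived
    }

# tp: higher risk & died, fp: higher risk & survived,
# fn: low risk & died,    tn: low risk & survived
for name, value in accuracy_measures(tp=1072, fp=22011, fn=88, tn=10937).items():
    print(f"{name}: {value:.3f}")
# -> sensitivity ~0.924, specificity ~0.332, ppv ~0.046, npv ~0.992
```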
Categorical comparisons were made using Pearson's χ² tests and continuous comparisons were made using either independent samples t tests or Mann-Whitney U tests (where appropriate). A p value <0.05 was considered statistically significant in all situations. The preparation of this report was in accordance with the Transparent Reporting of a multivariable prediction model for Individual Prognosis Or Diagnosis (TRIPOD) statement. 13
RESULTS
A total of 34 108 PE admissions were included in this analysis; 97.7% had a primary ICD-9-CM code for PE. Characteristics of patients at baseline are reported in table 1. The overall in-hospital PE case-fatality rate was 3.4%. Our calibration analysis demonstrated increasing observed in-hospital mortality risk across the progressively increasing deciles of IMPACT predicted risk, a slope of 0.82 and an intercept of 0.0046 (figure 1). The IMPACT prediction rule classified 11 025 (32.3%) patient admissions as low risk, and low-risk patients had lower in-hospital mortality (odds reduction of 83%; OR, 0.17, 95% CI 0.13 to 0.21), shorter length of stay (LOS) (−1.2 days, p<0.001) and lower total treatment costs (−$3074, p<0.001) than patients classified as higher risk (table 2). Of low-risk patients, 13.1%, 31.1% and 47.7% were discharged within 1, 2 and 3 days of admission, respectively. Low-risk patients discharged within 1 day accrued $5465, 95% CI $5018 to $5911 less in treatment costs than those staying longer. Discharge within 2 or 3 days in low-risk patients was also associated with a reduced cost of hospital treatment ($5820, 95% CI $5506 to $6133 and $6314, 95% CI $6031 to $6597, respectively) when compared to those staying longer. The sensitivity and specificity of IMPACT for prognosticating in-hospital mortality were 92.4%, 95% CI 90.7 to 93.8 and 33.2%, 95% CI 32.7 to 33.7, respectively (table 3). IMPACT's high NPV (>99%) suggests that the probability of in-hospital death in those whom it classifies as low risk is low, but its low PPV (4.6%) suggests that it will also classify many patients who ultimately survive to discharge as high risk (anticipated to die in hospital). The AUC of IMPACT was 0.74, 95% CI 0.73 to 0.76 (figure 2).
DISCUSSION
The IMPACT prediction rule originally developed by Coleman et al 8 in a large US commercial claims database remained valid when adapted for use in the NIS all-payer, inpatient-only claims database. This rule classified in-hospital mortality risk with high sensitivity (and a high NPV) but modest specificity, meaning it classified nearly all patients who died during the index PE admission into the higher risk group, but also classified many patients who survived to discharge as high risk (also supported by the small PPV, indicating that many of the patients classified as high risk were false positives). While any prediction rule would ideally be 100% sensitive and specific, high sensitivity is preferable to high specificity when making the decision to discharge a patient with PE early from the hospital or treat them on an outpatient basis. Moreover, the observed sensitivity, specificity and proportion of patients deemed to be at low risk for early mortality when using the IMPACT prediction rule was on par with that seen with the PESI, sPESI and Hestia rules. 4 Despite IMPACT having similar prognostic accuracy to previously developed clinical rules, we strongly suggest that the claims-based rule not be used to make treatment decisions, as it was not developed in a clinical setting.
The true value of the IMPACT rule is as a tool for payers and hospitals to quickly and inexpensively benchmark population rates of PE treated at home or following an abbreviated hospital admission, as well as to assure that high-risk patients remain in hospital for an adequate period of time. The IMPACT benchmarking rule has significant potential value due to the common and expensive nature of treating PE in hospital. There are approximately 181 000 admissions for PE yearly in the USA, with a mean LOS of >5 days and hospital treatment costs >$10 000/admission. 1 10 Importantly, our analysis found that only 13.1% of patients classified as low risk for in-hospital death were discharged within 1 day of admission, 31.1% within 2 days and <50% within 3 days. Even though IMPACT was not 100% accurate, and there are valid reasons why clinicians might not discharge a patient with PE early (eg, the need for adequate home circumstances and medication-adherent patients 2 3 ), when compared to recent studies performed in Canada where approximately 50% of patients with PE were treated as outpatients, 14-16 our data suggest that many patients with PE treated at US hospitals may be kept in-house longer than is medically necessary. Since data from this study suggest that a low-risk patient discharged within 3 days has less than half the hospital costs of a low-risk patient staying >3 days, we believe there are significant cost savings opportunities for institutions and the healthcare system in assuring that patients with PE are safely discharged as soon as possible.
A strength of the IMPACT rule, and consequently of our analysis, was the use of the validated AHRQ 29-comorbidity software/Elixhauser coding schema whenever possible. 10 11 This ICD-9-CM coding schema for comorbidities has been demonstrated to be the best predictor of in-hospital mortality among common comorbidity indices for administrative data. 17 The AHRQ 29-comorbidity software itself codes for 29 comorbidities, of which 8 were included in IMPACT. A key aspect of AHRQ-29 coding is the use of a DRG screen so that comorbidities can be considered as coexisting medical conditions not directly related to the principal diagnosis or the main reason for admission, and thus most likely existing prior to the index hospital stay.
Our analysis has some limitations. First, owing to the unavailability of out-of-hospital mortality data in the NIS, we could not evaluate 30-day mortality like some previous clinical rules/scores. 4 However, most commercial claims databases and hospitals will also not have broad access to out-of-hospital mortality status. It has long been appreciated that the highest risk of complications or death due to PE is in the first few hours to a week after diagnosis. [18][19][20][21] Despite the fact that the in-hospital mortality rate observed in this study (3.4%) is lower than the 30-day mortality rate (approximately 9%) reported in studies of clinical prediction rules such as PESI, 5 6 the sensitivity of clinical prediction rules does not vary markedly when used to predict in-hospital or 30-day mortality. 5 22 23 For these reasons, in-hospital mortality seems a reasonable end point for assessing whether a patient is a good candidate for early discharge (or outpatient treatment). Second, as with all claims databases, the NIS may contain inaccuracies or omissions in coded diagnoses/procedures, leading to the potential for misclassification bias.
Finally, the use of 1.5% as a cut-point for low risk was somewhat arbitrary. The 1.5% value was chosen on the basis of a review of the ROC curve (to roughly identify a value balancing sensitivity and specificity) and because it approximates the in-hospital mortality rate seen in patients with PE at very low risk and low risk (classes I and II) in the original PESI derivation study. 5 8
CONCLUSION
The IMPACT prediction rule appeared valid when adapted for use in this all-payer, inpatient-only administrative claims database. The rule classified patients' mortality risk with high sensitivity, and consequently may be valuable to those wishing to benchmark rates of PE treated at home or following an abbreviated hospital admission.
Contributors CGK, CIC, CC, JRS and WFP participated in study concept and design and in analysis and interpretation of data. CIC, CC and JRS were responsible for acquisition of data. CGK, CIC and WFP were involved in drafting of the manuscript. CIC, CC, JRS and WFP were responsible for critical revision of the manuscript for important intellectual content. CIC, CC and JRS were responsible for administrative, technical or material support. CIC, CGK and CIC were responsible for study supervision and had full access to all the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis. All the authors read and approved the final manuscript. The authors meet the criteria for authorship as recommended by the International Committee of Medical Journal Editors (ICMJE), were fully responsible for all content and editorial decisions, and were involved in all stages of manuscript development.
2016-05-04T20:20:58.661Z
2015-10-01T00:00:00.000
{ "year": 2015, "sha1": "c0d1344128f68dc6df85675335375316c3cc7f0f", "oa_license": "CCBYNC", "oa_url": "https://bmjopen.bmj.com/content/5/10/e009251.full.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c0d1344128f68dc6df85675335375316c3cc7f0f", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257269439
pes2o/s2orc
v3-fos-license
Molecular and Serological Identification of Pathogenic Leptospira in Local and Imported Cattle from Lebanon
Introduction
Worldwide, leptospirosis is an important emerging zoonosis caused by pathogenic spirochaetes in the genus Leptospira [1]. Tropical and subtropical regions are the most vulnerable to infection [2]. In Lebanon, annual morbidity (i.e., disease incidence) and mortality in humans due to leptospirosis have been estimated at 2.93 (95% CI 0.92-5.37) and 0.15 (95% CI 0.05-0.26) per 100,000 individuals, respectively, based on age- and gender-adjusted demographic attributes of the human population [3]. However, the epidemiology of leptospirosis in this country remains unknown; no case reports have been published, with the sole exception of a 1947 inquiry launched following the discovery of two cases of Weil's disease [4]. Lebanon lacks a formal system of prophylaxis for leptospirosis despite the suitability of its climate (warm and wet) for the bacteria's persistence and the presence of potential maintenance hosts such as rodents [5,6]. Furthermore, several domesticated animals that are known to be susceptible to leptospirosis, primarily local and imported cattle (Bos taurus taurus), are raised in abundance [7], but their Leptospira infection status remains unknown.
Once infected, cattle experience a period of bacteremia that can persist for up to a week [8]. Nonmaintenance pathogenic Leptospira spp., such as the serogroup Grippotyphosa [9,10], cause incidental infections in cattle and are recognized as a leading cause of reproductive failure, icteric abortions, stillbirth, economic loss, and occasionally, meningitis and death [8,11]. In addition, cattle are well known as maintenance hosts for serogroup Sejroe (mainly serovar Hardjo). The acute phase of such infection is mainly subclinical and often goes unobserved, except in lactating cattle, which might develop agalactia [8]. However, chronic infection associated with the Hardjo bovine-adapted serovar can also lead to reproductive failure, stillbirth, and perinatal death [12,13]. Regardless of the infection type, cattle may shed pathogenic bacteria in their urine (for up to 40 weeks in the case of Hardjo infection [14]), which can expose humans to the bacteria either directly during the milking process or indirectly following exposure to urine-contaminated water or soil [8,15,16].
Leptospira infection can be screened through several different routes (blood, serum, urine, and renal tissue) in which Leptospira or leptospiral antibodies can be detected [17]. In cattle, blood and serum are typically the most easily accessible, compared to the sampling of urine on farms or renal tissue in slaughterhouses [18,19], and both blood and serum can be used to perform several leptospiral diagnostic techniques [20]. However, Leptospira and/or leptospiral antibody detection can be hindered by differences between the two phases (septicemic and immune) experienced by the host, which leads to an underestimation of its impact [21,22]. The diagnosis of pathogenic organisms such as Leptospira increasingly relies on molecular methods based on the polymerase chain reaction (PCR) [23], which can accurately detect Leptospira-derived DNA in the early stages of infection [20,24,25]. The serology-based microscopic agglutination test (MAT) is less advantageous for early diagnosis since it only detects antibodies from a past or current infection. The results of the latter test are reliable at the level of the serogroup but cannot be used to identify the infecting serovar [26]. In addition, MAT results are interpreted subjectively, which can lead to repeatability issues and variability in the identification of the putative infecting serogroup [27,28]. Despite these drawbacks, MAT is considered the immunological reference method for the experimental diagnosis of leptospirosis by the World Organization for Animal Health (WOAH) [29] and the World Health Organization (WHO) [30], and is recommended by the former for use in herd Leptospira screening [31]. Overall, a complementary approach that combines both PCR and MAT is the best option for improving early detection of biphasic leptospirosis [32].
Within the larger region of the Middle East, attempts have been made to survey cattle for Leptospira infection [33]. Prevalence ranged from 0% to 43%, and seroprevalence ranged from 0% to 85%, with Sejroe and Grippotyphosa as the predominant pathogenic serogroups in seropositive cattle. With the exception of a single publication from 1947 [4], though, no epidemiological studies have been carried out in Lebanon to describe the occurrence of Leptospira and the risk linked to infection. To fill this gap, this preliminary study aimed to provide an initial characterization of the threat posed by Leptospira infection in cattle in Lebanon. Specifically, our goal was to describe the Leptospira infection status and circulating serogroups within cattle herds in Lebanon, starting with the governorate of Mount Lebanon (ML), which is an important operational area for raising cattle [34]. Furthermore, we investigated the occurrence of Leptospira in imported cattle upon their arrival on Lebanese soil, with the goal of potentially differentiating between the genetic profiles of autochthonous and exotic Leptospira DNA.
Provision of Blood Samples. Cattle blood collected by a commercial veterinary company (which provides services related to animal importation and development) during regular professional consultations for its private annual prophylaxis program was later used in this study for further investigation of Leptospira infection in Lebanon. Nonetheless, all herd owners were contacted by the company and agreed to provide cattle specimens for leptospirosis research purposes.
Description of Cattle of Interest.
Cattle, particularly of the Holstein breed, are raised in abundance in Lebanon for dairy milk production [35]. The veterinary company that provided blood samples is dedicated to the development of cattle production in this country and therefore imports breeding dairy cattle from Europe, mainly France and Germany. It conducts consultations on dairy herds in all of Lebanon's governorates, but the majority of inspections are carried out in ML, where there are 50 dairy herds of interest to the company. The cattle in the ML herds are mainly of the Holstein breed, and their numbers range between 5 and 80 head per farm. The leptospirosis vaccine had not been administered to local cattle.
Sampling Design. This study was carried out in Lebanon, in the ML and Beirut governorates (port of Beirut), for local and imported cattle, respectively. Animals were conveniently selected, regardless of their reproductive performance or clinical picture, from 14 of the 50 farms followed by the veterinary company in the ML governorate; this was carried out at the end of the dry season in 2021, in the months of October and November. This amount of sampling would allow the detection of at least one seropositive herd given a minimum between-herd seroprevalence of 20% and an uncertainty of 5% [36]. In addition, a one-time sampling campaign was performed by the company in the same period on a subset of a group of approximately 400 imported cattle upon their arrival at the port of Beirut. Tests were performed at an individual level; however, the serological results were interpreted at the herd level, as recommended by the WOAH manual for terrestrial animals [29]. In order to obtain relevant information at the herd level, samples were provided from 10% of the imported and local herd populations, with a minimum of 10 head per farm or the whole herd if the total was less than 10, a 10-cow sample being appropriate to reveal the presence or absence of an infection in a herd [29].
Distribution of Blood Samples. A total of 187 blood and 135 serum samples were provided from cattle. Of the 187 blood specimens, 135 were from arbitrarily chosen animals in herds located in the governorate of ML, while the remaining 52 were from imported livestock. Serum specimens (n = 135) were acquired solely from cattle in the ML governorate; they could not be collected from imported livestock due to the stress experienced by the animals, resulting in serum hemolysis following centrifugation. No animal showed clinical signs of illness.
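The herd-level sample-size rationale given under Sampling Design follows the standard detection-of-disease calculation; a minimal Python sketch (ours, ignoring the finite-population correction for the 50 candidate farms, which would only lower the result slightly) reproduces the number of farms sampled.

```python
# Number of herds to sample so that, if at least a fraction `min_prev` of herds
# is seropositive, the chance of sampling zero positive herds stays below `alpha`:
#     (1 - min_prev) ** n <= alpha
# Finite-population correction is deliberately ignored in this sketch.
import math

def herds_needed(min_prev: float, alpha: float = 0.05) -> int:
    return math.ceil(math.log(alpha) / math.log(1.0 - min_prev))

# 20% minimum between-herd seroprevalence, 5% uncertainty -> 14 herds,
# matching the 14 of 50 farms sampled in the study.
print(herds_needed(0.20))
```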
Leptospira Microagglutination Testing. Microscopic agglutination tests were performed based on the standard methodology [29] using a panel of live leptospires. In total, twelve Leptospira serogroups, with related serovars in parentheses, were used: Australis (bratislava, australis, munchen), Autumnalis (autumnalis, bim), Ballum (castellonis), Bataviae (bataviae), Canicola (canicola), Grippotyphosa (grippotyphosa, vanderheiden), Icterohaemorrhagiae (icterohaemorrhagiae, copenhageni), Panama (panama, mangus), Pomona (pomona, mozdok), Pyrogenes (pyrogenes), Sejroe (sejroe, saxkoebing, hardjo, and wolffi), and Tarassovi (tarassovi). Information on the serogroups, serovars, and strains used is available in Table 1. To avoid biases in interpretation, all MAT reactions were analyzed by a single technician. As recommended in the WOAH manual, a 1:100 titer was used as the cut-off point for seropositive samples [29]. MAT results were interpreted at a global level, and the epidemiological unit considered was the herd. A herd was considered currently or recently infected at the herd level when at least one animal showed a positive MAT result. However, given the high specificity of MAT, serum samples were primarily tested using a 1:50 titer as evidence of previous exposure to Leptospira, as suggested by WOAH [29]. Seropositive reactions were analyzed as follows: (1) If a serum specimen demonstrated reactivity to only one serogroup, that serogroup was designated dominant. (2) If a serum specimen reacted to two or more serogroups, but with a difference of threefold or more between the highest and the next highest titer, the former was designated the dominant serogroup. (3) If a serum specimen reacted to two or more serogroups with less than a threefold difference between the highest and the next highest titer, the serogroups were designated equally dominant. This most often occurs as a result of cross reactions [37,38], and the result was considered inconclusive in this case.
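The three interpretation rules above amount to a small decision procedure over reciprocal titers. A hypothetical Python sketch of that procedure (ours, not the authors' code; titers are expressed as reciprocal dilutions) follows, with example inputs echoing titers reported in the Results.

```python
# Sketch of the three MAT interpretation rules described above.
# Titers are reciprocal dilutions, e.g. {"Sejroe": 400, "Canicola": 100};
# cutoff=100 for seropositivity (50 was used for evidence of past exposure).
def dominant_serogroup(titers: dict, cutoff: int = 100) -> str:
    reactive = {sg: t for sg, t in titers.items() if t >= cutoff}
    if not reactive:
        return "seronegative"
    ranked = sorted(reactive.items(), key=lambda kv: kv[1], reverse=True)
    if len(ranked) == 1:
        return ranked[0][0]                   # rule 1: single reactive serogroup
    if ranked[0][1] >= 3 * ranked[1][1]:
        return ranked[0][0]                   # rule 2: >= threefold above next titer
    return "inconclusive (equally dominant)"  # rule 3: cross-reaction suspected

print(dominant_serogroup({"Sejroe": 400, "Canicola": 100}))      # -> Sejroe
print(dominant_serogroup({"Grippotyphosa": 200, "Sejroe": 100})) # -> inconclusive
```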
A OneStep PCR Inhibitor Removal Kit, Cat. No. D6030 (50 spin columns/purifications) (Zymo Research, USA), was used directly on the extracted DNA following the manufacturer's instructions, in order to efficiently remove contaminants that might inhibit downstream PCR reactions.
Real-Time PCR Targeting the 16S rRNA Gene. As an initial step, the efficiency of DNA extraction and the absence of inhibitors were tested for each sample by the amplification of the β-actin endogenous housekeeping gene. The β-actin primers were 5′ CAGCACAATGAAGATCAAGATCATC 3′ (forward) and 5′ CGGACTCATCGTACTCCTGCTT 3′ (reverse), as described in Toussaint et al. [39], and the sequence of the β-actin probe was 5′ FAM TCGCTGTCCACCTTCCAGCAGATGT TAMRA 3′. Real-time PCR reactions were performed on a Stratagene-Agilent Mx3000P qPCR system. β-actin gene expression also served as an internal control for expression of the 16S rRNA target gene. This gene sequence was amplified from all purified DNA using AgPath-ID One-Step Real-Time PCR Reagents (Applied Biosystems). Real-Time PCR reactions contained 12.5 μL of 2X Real-Time PCR buffer, … μL of probe and each primer, 1 μL of 25X Real-Time PCR Enzyme Mix, and 4 μL of DNA in a final volume of 25 μL. Amplification was performed using primers targeting a region of the Leptospira rrs (16S) gene that were designed in a previous study, with a slight modification of the cycling protocol, and a TaqMan probe (partial sequence …GCAATGTGATGATGGTACCTGCCT BHQ1 3′), as described in Waggoner et al. [40]. Real-Time PCR cycling was performed on a Stratagene-Agilent Mx3000P qPCR system using the following parameters: 95 °C for 10 min, followed by 40 cycles of (1) 95 °C for 15 s and (2) 60 °C for 1 min. Fluorescence was provided by TaqMan probes (based on reporter and quencher fluorochromes), which continuously detected and reported DNA amplification; a CT was automatically set for each DNA sample, and any exponential curve that reached a CT prior to cycle 40 was considered a positive result [41]. A no-template mix and a positive control were added in each run of the real-time PCR.
Sanger Sequencing. Samples that were cPCR-amplified and visualized on a 1% agarose gel under ultraviolet light were then Sanger sequenced by a service provider (Genoscreen, Lille, France) using the same primers employed in the cPCR. ChromasPro (version 2.6.6) was used to assemble nucleotide sequences that were at least 330 bp in length and compatible with the genus Leptospira. Each contig was queried using the nucleotide Basic Local Alignment Search Tool (BLAST) in the NCBI database (https://blast.ncbi.nlm.nih.gov/) to determine Leptospira species assignment. A phylogenetic tree was then generated based on the partial 16S rDNA sequences obtained from our blood sample amplicons and reference Leptospira DNA sequences provided by the "Laboratoire des Leptospires" [42]. The tree was constructed using Muscle version 5 [43] with IQ-TREE 2.2.0.3 [44], using the maximum likelihood method (log-likelihood −991.593) and the best-fit model TPM3 + G4, chosen based on values of the Bayesian information criterion (BIC). A bootstrap analysis was performed with 1000 replicates (S1 Fig.).
Agreement between PCR and MAT. Compliance between PCR and MAT results at the herd level was determined using Cohen's Kappa coefficient, calculated using RStudio (version 1.3.1093, "Apricot Nasturtium") with the formula K = (Pr(a) − Pr(e)) / (1 − Pr(e)), where Pr(a) is the observed percentage of agreement and Pr(e) the expected percentage of agreement. This comparison was repeated for two different MAT dilutions: PCR and MAT (titer 1:50) and PCR and MAT (titer 1:100).
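The herd-level agreement statistic defined above can be computed directly from two binary status vectors. A Python sketch (ours) is shown below; the toy herd statuses are hypothetical but chosen to match the marginal counts reported in the Results (5 PCR-positive herds, 5 MAT-positive herds at the 1:100 titer, 8 positive herds overall), which reproduces the reported κ ≈ 0.07.

```python
# Minimal sketch of Cohen's kappa, K = (Pr(a) - Pr(e)) / (1 - Pr(e)),
# for two binary "raters": PCR-positive vs MAT-positive herd status.
def cohens_kappa(pcr_status, mat_status):
    n = len(pcr_status)
    observed = sum(p == m for p, m in zip(pcr_status, mat_status)) / n  # Pr(a)
    p_pcr = sum(pcr_status) / n
    p_mat = sum(mat_status) / n
    # Pr(e): chance agreement implied by each method's marginal positive rate
    expected = p_pcr * p_mat + (1 - p_pcr) * (1 - p_mat)
    return (observed - expected) / (1 - expected)

# 14 herds coded 1 = positive, 0 = negative; assignments are hypothetical,
# but the counts (5 PCR+, 5 MAT+, 2 overlapping) follow the reported results.
pcr = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
mat = [1, 1, 0, 0, 0, 1, 1, 1, 0, 0, 0, 0, 0, 0]
print(round(cohens_kappa(pcr, mat), 2))  # -> 0.07
```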
Molecular Analysis. β-actin gene expression was reported by Real-Time PCR from all 187 DNA samples that were extracted from whole blood. The 16S rRNA gene sequence of Leptospira was detected by Real-Time PCR in 7 of 135 local cattle (representing 5 of the 14 herds) and in 1 of 52 imported cattle. For six of the seven Real-Time PCR-positive local animals (representing the same five herds) and the Real-Time PCR-positive imported animal, cPCR amplification revealed a 330-bp fragment compatible with the genus Leptospira [45]. All seven sequences demonstrated 100% nucleotide identity with a published sequence corresponding to Leptospira kirschneri (GenBank accession number MK726123.1). The five PCR-positive herds were geographically distributed as follows: two were located in the north, one in central ML, and two in the south (Figure 1).
Leptospira Microagglutination Testing in Cattle in Mount Lebanon. Five herds (three in the center and two in the south of ML) contained cattle with MAT titers ≥1:100 (Figure 1). Herds F, J, and M each contained a single serogroup (Sejroe (1:400), Canicola (1:100), and Grippotyphosa (1:200), respectively), even though the herds had multiple seropositive animals. The remaining seropositive herds, herds N and G, contained two serogroups each, with neither appearing to be dominant, as the reported MAT titers (CAN (1:200) and GRI (1:200) in herd G, and GRI (1:200) and SJ (1:100) in herd N) had less than a threefold difference. In herd G, a single seropositive individual demonstrated equal antibody titers against serogroups Canicola and Grippotyphosa, hindering the identification of a single dominant serogroup. In herd N, the two seropositive animals had distinct MAT profiles, with Grippotyphosa as the putative serogroup for the first and Sejroe for the second. For each seropositive animal, the reactive antibody titers for each tested serovar/serogroup in MAT are displayed in Table 2.
Combining Molecular and Serological Results Among Cattle Herds in Mount Lebanon. When we examined both PCR results and MAT results at titers ≥1:100, evidence for Leptospira or antileptospiral antibodies was found in 8 of the 14 tested herds. Cohen's Kappa coefficient was 0.07 with a p value of 0.8. When we examined both PCR results and MAT results at titers ≥1:50, the total number of positive herds remained the same (8 of 14), but the agreement between the two methods changed. The resulting Cohen's Kappa coefficient was 0.43, with a p value of 0.06. The serogroups and species of Leptospira found in each herd are presented in Table 2 and Figure 1.
Discussion
To the best of our knowledge, this is the first study to use molecular and serological methods to characterize bovine Leptospira infection in Lebanon, particularly in Mount Lebanon (ML). It also applied the same molecular approach to evaluate Leptospira infection among imported cattle upon their arrival on Lebanese soil (port of Beirut) prior to their distribution to Lebanese herds. The identification of pathogenic Leptospira species and circulating serogroups in cattle highlights an unaddressed threat to public health in Lebanon, particularly for people working with cattle. In addition, the detection of pathogenic Leptospira in imported cattle suggests that animal importation may be one of the means by which pathogenic bacteria like Leptospira are introduced to this country. Pathogenic L. kirschneri was the only species detected in all positive cattle, local or imported. This species is known worldwide as an agent of human leptospirosis [16,46,47] and can have notable clinical manifestations in some patients, as reported in France [48] and Malaysia [49]. Furthermore, it has been suggested that infection by L. kirschneri can also have an impact on herd production output, but the clinical manifestations of such infection are ambiguous [50]. L. kirschneri has not been described in studies conducted in neighboring countries, which to date have been limited to reports of molecular positivity in cattle and have not identified the infecting bacterial species. However, this species has been described in cattle in countries in North and South America (Brazil, Uruguay, and Mexico) [50][51][52]. Our finding of L.
kirschneri in local herds is consistent with our serological finding of the predominant serogroup Grippotyphosa, since the related serovars Grippotyphosa and Vanderheiden, tested in our study, belong to the L. kirschneri species [53]. In addition, the fact that L. kirschneri was also identified among imported cattle likely destined for introduction into Lebanese herds suggests the potential introduction or maintenance of pathogenic Leptospira through global trade.
The number of Leptospira infections detected in our study is most likely an underestimation of the genuine occurrence in this population, due to the time of sampling as well as the clinical specimen chosen for analysis. The illness has a distinct seasonal pattern and is closely associated with climatic factors [54]. In subtropical countries such as Lebanon, most cases of leptospirosis in both humans and cattle naturally occur following rainfall and flooding [55][56][57][58]. In general, flooding is thought to help Leptospira disseminate in the environment [59], resulting in leptospirosis transmission and infection, as has been described in cattle in other subtropical countries [57,60]. In our study, sampling was performed in October and November, a period with relatively little rainfall, which might have reduced our chances of detecting Leptospira through Real-Time PCR. In addition, Leptospira bacteria are found in relatively low numbers in the bloodstream, which can impede the identification of infected cattle despite the sensitivity of the 16S Real-Time PCR (7.0 to 2.0 log10 copies/μL) [40] and the optimization of primer sequences and annealing temperatures [61]. Moreover, leptospires can only be found in blood during the first week of illness in both humans and animals, mainly from the second to the fourth day of infection [8,15,62]. This likely explains the low number of Leptospira-infected cattle detected here, as well as in two studies conducted in Egypt (a country with similar meteorological circumstances and herd management techniques) that also used blood as a sampling matrix [63,64]; Leptospira DNA was detected in only seven out of 625 cattle blood samples, and in only one of the two studies [64]. The use of sample specimens other than cattle blood, such as urine, where leptospires are retrieved for a longer period of time, could have led to a higher reported occurrence, as obtained in other studies that tested cattle urine and blood samples by Real-Time PCR and only detected Leptospira DNA in urine samples [65]. In the population used in our study, whole blood was the only available sample that remained appropriate for bacteremia detection and Leptospira characterization, although the detection period was limited to the acute phase of infection. Overall, despite its drawbacks, our approach enabled us to identify Leptospira and reveal its presence in domestic cattle within ML, but it was not appropriate for assessing the true prevalence of Leptospira infection.
A low degree of concordance between PCR and MAT was observed using a cut-off titer of ≥1:100. All cattle which tested positive by Real-Time PCR had not yet produced antibodies and were categorized as negative by MAT. The latter finding supports the biphasic nature of leptospirosis, as seroconversion mainly occurs 10 to 14 days following infection [8]. However, some cases of cattle infected with Leptospira serovar Hardjo that survived the bacteremic phase but did not develop agglutinating antibody titers above the 1:100 threshold have also been documented in the literature [66,67]. Still, our work is consistent with previous studies demonstrating that the use of Real-Time PCR in conjunction with MAT increases the sensitivity of Leptospira detection, mainly in the early stages of infection [32,[68][69][70]. Here, the use of both molecular and serological methods within the same herd enabled us to acquire more information about the status of Leptospira infection in cattle herds from ML than either method could have revealed by itself.
Although the methodology used most likely underestimates the occurrence of Leptospira infection in cattle, it provides some initial data, namely, that Leptospira kirschneri and serogroup Grippotyphosa are the predominant species and serogroup, respectively. These results are useful in developing further research studies; therefore, we make the following recommendations. Despite its weakness regarding sensitivity, the methodology used in this paper could be extended in space and time to assess the relative leptospiral risk throughout the remaining governorates of Lebanon and across seasons, assuming that in each case the degree of underestimation would be relatively constant. To improve the precision of prevalence estimates, and thus assessments of the level of risk for people in contact with cattle, the sensitivity of detection should be improved by sampling specimens other than blood, such as kidneys (in abattoirs) or urine (on farms and in abattoirs), where leptospires persist longer [71].
In Lebanon, cattle carriers of pathogenic L. kirschneri can spread the bacteria through their urine and potentially act as a reservoir for humans (particularly farmers in close proximity) and domestic and wild animals [72], as well as a source of water and soil contamination in which the bacteria can remain viable for months under optimal conditions [73]. Because cattle are an interface between wildlife and humans, managing Leptospira infection in cattle herds (e.g., by following an appropriate vaccination plan) can not only reduce the health impact on cattle but also prevent leptospirosis in humans and other animals. The finding of L. kirschneri in cattle in ML raises a One Health concern for leptospirosis control in Lebanon in order to sustainably ensure the health of the ecosystem, including humans and animals [74]. Consequently, there is a need to implement a response according to the quadripartite One Health concept definition [75] that includes intersectoral mobilization and communication related to the presence of L. kirschneri and leptospirosis risk management among veterinarians, farmers, general physicians, and workers in wild mammal associations practicing in Mount Lebanon. In addition, services related to zoonosis management at the ministries of agriculture, public health, and environment should be aware of the leptospirosis risk and be able to support future efforts on intersectoral and collaborative epidemiological surveillance, disease control, and research [76].
One of the findings of this study is the detection of pathogenic Leptospira in imported cattle, which highlights the risk related to importation. WOAH recommends the application of a reference test to 10% of each batch of imported cattle [29]. This step cannot ensure a disease-free herd, but it can minimize the potential risk of infection to herds in the importing country. Another approach to improving the sensitivity of the detection of infected herds could be the use of simultaneous direct and indirect detection, but further studies are necessary to assess the efficiency of such a measure.
Conclusion
Pathogenic Leptospira was detected in both local and imported cattle in Lebanon. As a result, leptospirosis risk should be addressed as a public and animal health concern in this country, raising the need to follow a "One Health" approach. Enhancing public awareness is essential, particularly among veterinarians and general physicians, so they can detect and report clinical forms of leptospirosis and, consequently, maximize the health of humans and animals. Additional studies on potentially Leptospira-infected populations (e.g., rodents and dogs) should be conducted in Lebanon in order to characterize potential maintenance hosts and gain a more thorough understanding of leptospirosis epidemiology to design effective disease control strategies.
Table 1: Serogroups, serovars, and strains of the reference Leptospira strains employed in the MAT antigen panel.
2023-03-02T16:06:49.931Z
2023-02-27T00:00:00.000
{ "year": 2023, "sha1": "3afc040592c8accd03f1962b23048fbe735eeed8", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/tbed/2023/3784416.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0a8c7afd2ffadc5e7823e81b1b634e83818079e2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [] }
251481705
pes2o/s2orc
v3-fos-license
Morality is Supreme: The Roles of Morality, Fairness and Group Identity in the Ultimatum Paradigm
Purpose A large number of decisions need to be made in the context of social interactions. Previous studies have demonstrated the impact of fairness perception, moral judgment, and group identity on decision-making. However, there is no clear conclusion as to how these factors jointly affect decision-making when they are present simultaneously, or the extent to which each factor plays a role. Methods We manipulated the moral quality of proposers to explore whether morality has an impact on fairness perception, and we simultaneously manipulated the moral quality of proposer and responder to form group identity, exploring whether group identity affects the influence of morality on fairness in decision-making. Results Participants displayed higher acceptance rates for positive moral proposers than for negative moral proposers regardless of the fairness of the allocation of money (Experiment 1) and group identity (Experiment 2). However, group identity also had an effect, though it only partially supported the In-group Preference (combined analysis of Experiments 1 and 2). We hold that the group identity effect was influenced by morality. Conclusion When making an economic decision, morality has the supreme influence on individuals.
Introduction
In complex social environments, many of our decisions are made in the context of social interactions, and these decisions will further affect ourselves and others. 1 Therefore, when making decisions, individuals need to consider both their own interests and those of others, which inevitably leads to conflicts. How to resolve conflicts between self-interest and others' interests is an important part of social decision-making. 2 Research on the trade-off mechanism between these two types of interests is also an important foundation for our understanding of social decision-making.
Fairness, Morality and Social Decision-Making
During the social decision-making process, fairness and morality are the basic norms of people's social interaction. A sense of fairness can promote effective cooperation to achieve higher-quality social coordination. 3 However, unfair treatment can trigger resentment and cause people to expend resources to punish those individuals who commit unfair practices. [4][5][6] In studies on the impact of fairness perception on social interaction, the ultimatum game (UG) is a classic paradigm often applied to explore decision-making preferences and strategies. 7,8 The paradigm sets the two players of the game as the proposer and the responder, respectively, and they are required to allocate a sum of money under anonymous conditions. The proposer first proposes a scheme for the allocation of the money, and the responder makes a choice. If the responder accepts the scheme, the money is distributed accordingly; if not, both players earn nothing. Traditional economic decision-making theory emphasizes the egoistic hypothesis, holding that individuals are rational and driven by personal interests, and proposes the concept of "rational people", which holds that people's behavior is based on the principle of pursuing the maximization of benefits, so the responder should accept any non-zero allocation. 9 However, individuals' motivation to make decisions in a complex social environment is more intricate than the egoistic hypothesis assumes. 5
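The payoff rule of the ultimatum game described above is simple enough to state as code. The following minimal Python sketch (ours) also illustrates the "rational people" prediction that any non-zero offer should be accepted.

```python
# Minimal sketch of the ultimatum-game payoff rule described above.
def ultimatum_payoffs(total: float, offer_to_responder: float, accepted: bool):
    """Return (proposer_payoff, responder_payoff) for one round."""
    if accepted:
        return total - offer_to_responder, offer_to_responder
    return 0.0, 0.0  # rejection leaves both players with nothing

# A purely self-interested responder would accept any non-zero offer:
print(ultimatum_payoffs(10, 1, accepted=True))   # -> (9, 1)
print(ultimatum_payoffs(10, 1, accepted=False))  # -> (0.0, 0.0)
```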
Researchers have discovered that, besides pure profit orientation, fairness is a critical factor that influences individual behavior in social decision-making. 10 Numerous experiments have revealed that the proposed allocation is generally around 50% of the total; when it is less than 40%, it is more likely to be rejected. 11 What is more, the rejection rate increases as the allocated amount decreases. 2,12 When the amount of money offered is only about 20 to 30% of the total, responders feel treated unfairly and reject the scheme as punishment to the proposer. 7 When it is less than 20%, the rejection probability reaches up to 50%. 13 Previous studies on the ultimatum paradigm have focused on the outcomes of allocation schemes and have proposed "inequity aversion" 14 to explain the responders' seemingly "irrational" rejection behavior. This account states that there is a fairness preference in human society: people display a strong preference for fairness in asset allocation scenarios 6 and show strong aversion to unfair allocation, 15 indicating that people are influenced by fairness perception when making economic decisions.
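Inequity aversion is commonly formalized, for example in Fehr-Schmidt-style models, as a utility function that penalizes both disadvantageous and advantageous inequality; the text itself does not give a formula, so the sketch below (ours, with illustrative parameter values of our choosing) is only one standard way to show why a 9:1 split can be rejected.

```python
# Illustrative two-player inequity-aversion utility (Fehr-Schmidt form).
# alpha and beta below are our own illustrative choices, not from the text.
def inequity_averse_utility(own, other, alpha=2.0, beta=0.5):
    envy = alpha * max(other - own, 0)   # disadvantageous inequality
    guilt = beta * max(own - other, 0)   # advantageous inequality
    return own - envy - guilt

# Responder's utility for a 9:1 split of 10 units vs the rejection outcome:
print(inequity_averse_utility(1, 9))  # 1 - 2*8 = -15: worse than rejecting
print(inequity_averse_utility(0, 0))  # 0: the rejection outcome
print(inequity_averse_utility(5, 5))  # 5: a fair split is happily accepted
```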
Morality, as an important part of social culture, is also a significant criterion for treating others. Moral judgment can be defined as a good or bad evaluation of individuals' behaviors or characteristics, based on a series of virtues required by a culture or subculture. 16 Saltzstein argues that moral judgment includes three processes: interpreting or characterizing the situation, choosing appropriate moral norms to resolve conflicts in the characterized situation, and making decisions based on the situation and the related moral norms. 17 The mainstream view holds that there are two pathways by which individuals make a moral judgment: teleological reasoning and deontological reasoning. 18 Teleological reasoning is a process of explicit cost-benefit analysis by the individual, while deontological reasoning is a spontaneous process of emotional response. 19 Specifically, teleological reasoning is based on a definite outcome and holds that behavior producing benefit is moral, and behavior that does not is immoral; deontological reasoning is based on social norms and standards and holds that behavior conforming to the standards is moral, and behavior violating them is immoral. Research has found that moral judgment plays a role in people's early cognitive processing. 20 When forming first impressions of strangers, people prioritize moral information (honesty, trustworthiness) over social information (warmth, ability). 21,22 In addition, moral factors further affect an individual's decision-making. 23 Delgado found in the trust game that the perception of morality affects cooperative behavior in subsequent economic decision-making: people are more likely to trust players with positive morality and are more willing to take risks and make investments with them. 24 The reason for the above results may be that people have a trust bias toward morally positive people 24,25 or hold higher moral expectations of them, 20 thus promoting social cooperation between the two parties. This suggests that moral judgments may affect social cognitive processes such as an individual's social trust and expectations of others, thereby further influencing an individual's social decision-making.
The research above shows that moral judgment and fairness preference each have an impact on people's decision-making. However, there is no clear conclusion as to whether, when individuals face unfair behavior from others, the moral quality of those others affects their perception of fairness and results in different decision-making behaviors. From the above, we can conclude that people spontaneously make moral judgments about the behavior of others according to social norms. In the present study, in addition to using the ultimatum paradigm, a series of short sentences describing moral behaviors, including negative moral behaviors and positive moral behaviors, was used to manipulate the moral quality of proposers, which is consistent with deontological reasoning. When participants read these descriptive sentences about proposers, they make corresponding positive or negative moral judgments based on social norms and their life experiences. In addition, decision-making is also related to individual factors, such as gender. We also took into account the influence of the proposer's gender in Experiment 1. Previous studies have found that women and men follow different strategies in economic decision-making. Women are thought to be more empathetic and to follow deontological principles in decision-making, 26,27 while men are more rational and follow interest-based principles despite the risk of harming the benefits of others. 28 There are also differences in risk tolerance, with women being less risk-seeking and more risk-averse than men. 29 However, these studies focused on gender differences between men and women acting as decision-makers while ignoring non-decision-makers. When apologizing to others, women, compared to men, were perceived as warmer and therefore elicited greater forgiveness. 30 In the present study, we investigated the gender differences of non-decision-makers (proposers) in the ultimatum paradigm, specifically, whether the gender of proposers affects the fairness perception and acceptance rates of responders and further affects the effect of moral judgment on fairness perception.
Group Identity and Social Decision Making
In social decision-making, people automatically distinguish interaction partners as in-group or out-group members by using certain labels. This distinction helps people form the perception of whether they belong to the same social group and the recognition of their membership of a group, and thus generates group identity and the experience of a psychological connection between themselves and the group, 31,32 which further affects their psychological processing and decision-making during the interaction. 33,34 For example, Wang et al found that moderately and extremely unfair proposals from in-group members and fair and moderately unfair proposals from out-group members induced more exogenous attention. 35 Early fairness evaluation was also affected by the group identity of the two interacting parties. Unfair proposals provided by in-group members were perceived as a stronger violation of fairness rules, while unfair offers provided by out-group members were perceived as reasonable or expected. 35,36 In recent years, some researchers have also begun to focus on the impact of group identity on the implementation of fairness norms. [34][35][36][37][38][39] McLeish and Oxoby's experiments set up three conditions of priming identity, including common identity, discriminative identity and non-specific identity, and then compared the results of the UG game under these conditions. 33
The results showed that participants in the common identity condition exhibited the most cooperative behaviors, while those in the discriminative identity condition exhibited the least cooperative behaviors. Although scholars generally agree that fairness norms are affected by the group during the implementation process, there are still discrepancies regarding the direction, mechanism and boundary conditions of this effect. Whether group identity hinders or promotes the implementation of fairness norms is still an open question. Wang employed the Minimal Group Paradigm (MGP) to manipulate the group identity of the interacting parties and asked the responders to complete the ultimatum game with in-group and out-group members, respectively, finding that the acceptance rate of in-group unfair proposals was significantly higher than that of out-group ones. 35 Jordan also adopted the MGP to manipulate group identity and observe whether children as third-party bystanders would punish others for their selfish behavior, suggesting that children would punish others more severely when out-group members provided unfair offers to in-group members. 40 Researchers generally use "in-group favoritism" and "inter-group bias" to explain this result. 41 That is to say, when group identity is divided and recognized, it prompts individuals to show the In-group Preference Effect; namely, they become more kind, tolerant, and altruistic towards the in-group, 42 but more suspicious, indifferent, and even hostile toward the out-group. 43 This explanation is consistent with the view of Mere Preference Theory, which holds that people are willing to evaluate in-group members with a positive attitude when group distinctions are formed and recognized, 32 driving people to voluntarily tolerate the unfair behavior of in-group members. In other words, the negative violations from in-group members and the positive evaluation of group identity offset each other, thus reducing the possibility and intensity of punishment for the in-group. 34 Other researchers contended that the Black Sheep Effect (BSE) occurs when group identity conflicts with fairness norms, meaning that people impose more severe punishments and sanctions on the in-group for its unfair behavior. 44,45 Mendoza found that the unfair behavior of the in-group incurs more severe sanctions than that of the out-group. 46 Wu and Gao used the MGP to manipulate in-group and out-group identity with Chinese children aged 3 to 6 and explored how children at different developmental stages dealt with the conflict between group identity and fairness norms. It turned out that both boys and girls aged 5 to 6 punished in-group offenders, indicating the existence of a signal to maintain the fairness norms of the group. 39 For the BSE, researchers mainly employ Norm Focus Theory, proposed from the perspective of in-group norms, to explain it; this theory holds that in-group members comply with similar values and norms. People often anticipate that in-group members are trustworthy. If their behavior deviates from people's expectations, other members in the group will experience strongly negative emotions (such as anger and disgust), which in turn leads to severe sanctions. 47 If in-group members violate core principles of the group, their violations will be regarded as a potential threat to the whole group, and thus people will severely punish the in-group members for their unfair behaviors. 34
For individuals, the simplest manipulation of cues regarding a group, such as a random label, is enough to form a consciousness of belonging to a common group. 48 McLeish and Oxoby primed the social group identity of participants by asking them to recall what they had experienced with the group. 33 Lv also proposed the concept of "group shared experience" to examine the impact of shared experience of unfairness in a group on individuals' behavior and sense of injustice. 49 Therefore, we used "group shared experience" to arouse the group identity of participants in the ultimatum task. Specifically, we primed the moral perception of participants themselves by asking them to read a series of scenarios containing moral information, to which they needed to react as requested. If they reacted positively, they formed the positive moral priming group; if they reacted negatively, they formed the negative moral priming group. Proposers who had the same moral experiences as the participants were in-group members; if not, they were out-group members.
Overview of Study
From the above literature, it can be seen that individuals making social and economic decisions are affected by fairness perception, which is manifested in accepting fair proposals while rejecting unfair ones. However, in social and economic decision-making, people are also affected by moral judgment and group identity besides fairness perception. In the present study, we attempted to examine the impact of these factors on decision-making in an environment that encompasses economic and moral contexts, and the extent to which these three factors play a role. We addressed two issues: whether the moral quality of others influences individuals' fairness perception when making decisions, and whether group identity influences the effect of morality on fairness perception. In Experiment 1, we adopted the ultimatum game paradigm and manipulated the moral quality of proposers by presenting their prior background information, 50 including positive moral and negative moral behaviors, in order to allow participants to make corresponding moral judgments about them. In addition, we set up three allocations in the UG, including fair and unfair allocations, with the latter further divided into moderately unfair and extremely unfair allocations. The purpose of this experiment was to investigate whether moral judgment affects the perception of fairness and whether this influence further has an impact on behavior in a decision-making task involving economic and moral contexts. Based on this question, we proposed two hypotheses:
H 1.1: If morality affects the fairness perception of participants, the acceptance rates for positive moral proposers are higher than those for negative moral proposers in unfair allocations.
H 1.2: If fairness perception is not affected by morality, the acceptance rates for fair allocations are always higher than for unfair allocations.
In addition, we took the gender of proposers into account. Generally, the warmth of women may make us more lenient toward them, so we assumed that:
H 1.3: Individuals are more receptive to women than to men when faced with unfair allocations; even when proposers display negative morality, the acceptance rate for female proposers is higher than that for male proposers in fair and unfair allocations.
The purpose of Experiment 2 was to investigate whether group identity influences the effect of morality on fairness perception.
In addition to manipulating the moral quality of proposers, we also primed the moral quality of participants in order to arouse their group identity with proposers. That is, for the positive (negative) moral priming group, positive (negative) moral proposers were in-group members and negative (positive) moral proposers were out-group members. If group identity influences decision-making, there are two possible directions: In-group Preference and the BSE. Based on this question, we also proposed two hypotheses:
H 2.1: If In-group Preference affects the effect of morality on fairness perception, individuals will show a higher acceptance rate for moderately and extremely unfair proposals from the in-group than from the out-group.
H 2.2: If the BSE affects the effect of morality on fairness perception, individuals will show a lower acceptance rate for moderately and extremely unfair proposals from the in-group than from the out-group.
Pilot Material Ratings
Method
Participants Forty college students (20 females), aged between 18 and 25 years (M age = 20.68 ± 2.07), were recruited locally to take part in the material ratings. All participants were from South China Normal University and had basic reading comprehension ability. The pilot material ratings and the following experiments were carried out in accordance with the recommendations of the Institute Ethics Committee, South China Normal University, with written informed consent obtained from all participants in accordance with the Declaration of Helsinki. The protocol was approved by the Institute Ethics Committee, South China Normal University.
Materials and Task A total of 200 short sentences describing moral behaviors were created, covering four types: 50 short sentences of positive moral behaviors performed by men (eg, "He adopted a large number of stray cats"), 50 short sentences of negative moral behaviors performed by men (eg, "He cheated on his partner for many times"), 50 short sentences of positive moral behaviors performed by women (eg, "She adopted a large number of stray dogs"), and 50 short sentences of negative moral behaviors performed by women (eg, "She had multiple induced abortions due to her debauchery"). All four kinds of short sentences were 10-20 words in length and were randomized to form an online moral judgment questionnaire. Participants were instructed to read and rate these sentences on a 9-point scale at their own pace (1 = morally very negative and 9 = morally very positive). Specifically, participants were first presented with instructions for the questionnaire to clarify their task, after which basic demographic information was collected. Next, the short sentences were presented on the screen one by one, and participants rated them according to their own perception of morality.
Results Paired-sample t-tests were conducted in SPSS 26.0. The results of the pilot material ratings confirmed that there was a significant difference between positive moral (7.69 ± 0.63) and negative moral sentences (2.33 ± 0.71), t (99) = 54.42, p < 0.001. There was also a significant difference between the sentences describing positive moral (7.77 ± 0.07) and negative moral behaviors (2.26 ± 0.10) performed by men, t (49) = 44.59, p < 0.001. Meanwhile, we also found a significant difference between the sentences describing positive moral (7.61 ± 0.10) and negative moral behaviors (2.40 ± 0.10) performed by women, t (49) = 34.35, p < 0.001. However, there was no significant difference between the positive moral sentences performed by men and by women (p = 0.200) or between the negative moral sentences performed by men and by women (p = 0.330).
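The paired comparisons above were run in SPSS; for readers working in Python, an equivalent call is scipy.stats.ttest_rel, shown here on simulated ratings (hypothetical data drawn to match the reported means and SDs, not the study's actual ratings).

```python
# Sketch of the paired comparison reported above, using scipy instead of SPSS.
# The ratings below are simulated (hypothetical), matching the reported
# means/SDs for positive (7.69 +/- 0.63) and negative (2.33 +/- 0.71) sentences.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
positive = rng.normal(7.69, 0.63, size=50)  # one rating per sentence pair
negative = rng.normal(2.33, 0.71, size=50)

t, p = stats.ttest_rel(positive, negative)
print(f"t({len(positive) - 1}) = {t:.2f}, p = {p:.3g}")
```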
According to the ratings, we eliminated some short sentences that were more related to personal economic level (eg, "She likes money and believes that money is supreme") or that had less discriminative power (eg, "He lets self-interest take precedence over the collective interests"). After that, we randomly selected 100 short sentences as experimental stimuli, composed of 25 stimuli of each of the four types mentioned above. Experiment 1 In Experiment 1, we attempted to explore whether the different moral qualities and the gender of proposers have an impact on fairness perception in the ultimatum game. Method Participants Sixty students (M age = … ± 1.38) were recruited locally and paid for their participation. In this and all subsequent experiments, participants provided basic demographic information. Design The ultimatum paradigm was used as the task of Experiment 1. Experiment 1 involved a 2 (moral behavior: positive morality vs negative morality; within-subjects) × 3 (fairness of the allocation of money: moderately unfair 7:3, extremely unfair 9:1, fair 5:5, where the number after the colon is the amount obtained by the participant; within-subjects) × 2 (gender of the virtual proposer: male vs female; between-subjects) mixed factorial design. The dependent variable was the acceptance rate. Participants were randomly divided into two groups: the male proposer group and the female proposer group (30 participants in each group). Materials The materials were 100 short sentences of moral behaviors, including 50 short sentences of positive moral behaviors and 50 short sentences of negative moral behaviors (half performed by males and half by females). There was a significant difference between positive moral (7.53 ± 0.63) and negative moral sentences (2.37 ± 0.61), t (49) = −111.19, p < 0.001. Procedure The stimuli were presented in E-prime 2.0. During Experiment 1, participants were asked to assume the role of the responder and to make the decision (accept or reject) about the allocation of a fixed sum of money proposed by the virtual proposer. If the responder (participant) accepted the allocation proposed by the virtual proposer, the money was split according to the proposal. However, if the responder rejected the proposal, both the responder and the proposer received nothing. In order to increase the ecological validity of the experiment, participants were instructed to read depicted information that included the background of the virtual proposer, the relationship between the virtual proposer and the participant, and the scene of the allocation of money. An example of the depicted information is as follows: Suppose that you have a colleague named Xiao Zhang; you two have received an additional subsidy because of your excellent work. It is known that Xiao Zhang and you contributed equally to this work, and this subsidy will be allocated by Xiao Zhang. You can choose to accept the allocation of money to get the amount offered by Xiao Zhang, or you can reject the proposal, in which case neither of you gets the money. After reading the depicted information, the short sentence describing the moral behavior of the proposer was presented (eg, "He cheated on his partner many times"), followed by the allocation of money.
The testing procedure consisted of 150 testing trials: 25 positive moral and 25 negative moral depictions, each paired with the three allocations of money (fair, moderately unfair, and extremely unfair) and presented randomly. Participants were instructed to decide whether to accept or reject each proposal. After the testing trials, participants needed to answer several questions to ensure that they had fully comprehended the instructions and the proposals for the allocation of money. The questions were as follows: (a) Which of the following proposals is the allocation of money proposed by Xiao Zhang? (b) What is the relationship between Xiao Zhang and you? (c) If you reject the proposal, how much will Xiao Zhang and you get, respectively? Results and Discussion We performed analyses of variance (ANOVAs) using SPSS 26.0 on the acceptance rates of the allocations of money in Experiment 1 (see Table 1). There was no significant main effect of the gender of the virtual proposer, F (1, 58) = 0.60, p = 0.442, η² = 0.01. However, we found a significant main effect of the moral behavior of proposers, F (1, 58) = 34.84, p < 0.001, η² = 0.38, as well as a main effect of the fairness of the allocation of money, where the acceptance rate of the fair allocation was the highest (0.90 ± 0.02), followed by the moderately unfair allocation (0.37 ± 0.03), with the extremely unfair allocation the lowest (0.11 ± 0.03), F (2, 57) = 254.46, p < 0.001, η² = 0.81. We found a significant interaction between the moral behavior and the fairness of the allocation of money, F (2, 57) = 11.73, p < 0.001, η² = 0.17, which revealed that the acceptance rate was significantly higher for positive moral proposers than for negative moral proposers (see Figure 1). However, neither the interaction between the fairness of the allocation of money and the gender of the virtual proposer, F (2, 57) = 0.16, p = 0.800, η² < 0.01, nor the interaction between the moral behavior and the gender of the virtual proposer was significant, F (1, 58) = 0.09, p = 0.772, η² < 0.01. Meanwhile, we found no significant three-way interaction between the moral behavior, the fairness of the allocation of money and the gender of the virtual proposer, F (2, 57) = 2.36, p = 0.113, η² = 0.04. In line with the hypothesis, the results of Experiment 1 showed an interaction between moral behavior and the fairness of the allocation of money, suggesting that positive morality has an effect of inhibiting the perception of unfairness and increasing the probability of cooperation. In Experiment 2, we investigated whether group identity has an influence on the effect of morality on fairness perception by inducing participants' positive or negative moral experiences. Since we did not find an effect of gender on decision-making in Experiment 1, we did not take gender into account in this experiment. Experiment 2 Method Participants Using G*Power 3.1.9, with a medium effect size of 0.25 and a power of 0.8, the minimum required sample size was calculated to be 20 participants. A different group of 60 healthy students (30 females) from South China Normal University, who were right-handed, had normal or corrected-to-normal vision, and were aged between 20 and 25 years (M age = 22.92 ± 1.90), were recruited locally and paid for their participation.
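The a priori power analysis can be approximated with open-source tooling as well. The sketch below uses statsmodels' one-way between-subjects ANOVA power routine, which is not the repeated-measures option available in G*Power: that option additionally accounts for the number of within-subject measurements and their correlation, so the result here will be far larger than the reported 20.

```python
# Rough between-subjects approximation of the power analysis;
# G*Power's repeated-measures option (as used in the study) also
# factors in the within-subject measurements and their correlation,
# which yields a much smaller required sample size.
from statsmodels.stats.power import FTestAnovaPower

n_total = FTestAnovaPower().solve_power(
    effect_size=0.25,  # Cohen's f, conventionally "medium"
    alpha=0.05,
    power=0.80,
    k_groups=2,        # positive vs negative moral priming
)
print(f"required total N (between-subjects approximation): {n_total:.0f}")
```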
Design The ultimatum paradigm was identical to that of Experiment 1. Experiment 2 involved a 2 (moral behavior: positive morality vs negative morality; within-subjects) × 3 (fairness of the allocation of money: moderately unfair 7:3, extremely unfair 9:1, fair 5:5, where the number after the colon is the amount obtained by the participant; within-subjects) × 2 (moral priming: positive vs negative; between-subjects) mixed factorial design. The dependent variable was the acceptance rate. Participants were randomly divided into two groups: the positive moral priming group and the negative moral priming group. Materials Short Sentences of Moral Behaviors A total of 40 short sentences of moral behaviors were selected from Experiment 1, of which 20 described positive moral behaviors (7.73 ± 0.50) and 20 described negative moral behaviors (2.15 ± 0.46). A paired-sample t-test in SPSS 26.0 on the morality ratings (9-point scale) showed a significant difference between the morally positive sentences (7.73 ± 0.50) and the morally negative sentences (2.15 ± 0.46), t (19) = 132.55, p < 0.001. Moral Priming Based on a previous study, we compiled 10 scenarios describing situations that are common in everyday life. Each scenario includes both a positive moral and a negative moral behavior; for example, in the "mistakenly receiving an email from a colleague" scenario, the positive moral behavior is "A's mailbox mistakenly receives important material that should have been sent to Colleague B, which is important for B's promotion but will hinder A's future. But A still forwarded the email to B", and the negative moral behavior is "A deletes the mistakenly received mail". A different group of 19 students (M age = 22.74 ± 1.28) were instructed to read and rate these scenarios on a 9-point scale in terms of "the moral value of the behavior" (1 for morally very positive and 9 for morally very negative) and "the rationality of the situation and behavior" (1 for very rational and 9 for very irrational). The stimuli were presented in E-prime 2.0. The 10 scenarios with their corresponding moral or immoral behaviors were presented on the screen in random order. At the same time, instructions appeared below each sentence asking participants to respond. Participants needed to make judgments about the figures in these scenarios, first rating "the moral value of the behavior" and then "the rationality of the situation and behavior". Five scenarios were selected (namely, mistakenly receiving an email from a colleague, picking up a stranger's wallet, hearing a friend's secret, selling a bad cake, and the examination scenario), forming 10 positive moral scenarios and 10 negative moral scenarios. The results of a paired-sample t-test in SPSS 26.0 showed that there was a significant difference between the morally positive (7.43 ± 2.42) and negative behaviors (2.13 ± 2.04) for each scenario, t (94) = 13.87, p < 0.001. Procedure The procedures for Experiment 2 were similar to those of Experiment 1. However, at the beginning of this experiment, the participants were randomly divided into the positive moral priming group (30 participants, 21 females) and the negative moral priming group (30 participants, 21 females). Participants were instructed to read and immerse themselves in the scenarios at their own pace. Then, they pressed a button to enter the next screen, where the behavior corresponding to the scenario was presented.
After reading the corresponding behavior, the participant needed to press the "F" key to indicate that they could imagine actually performing the same behavior (eg, in the "Pick up a Stranger's Wallet" scenario, the negative moral priming group pressed "F" to leave the cash behind and throw the wallet in the trash, while the positive moral priming group pressed "F" to return the wallet to the owner without taking anything; see Figure 2). Participants could not enter the formal experimental trials until they had responded to all five moral scenarios. The five moral scenarios were presented once each, in random order. In Experiment 2, both equitable and inequitable allocations appeared 40 times (the 7:3 allocation and the 9:1 allocation appeared 20 times each). In order to balance the genders, positive moral behavior and negative moral behavior performed by men and by women each appeared 10 times in the fair allocation of money, and 5 times each in the moderately unfair and extremely unfair allocations of money. The process for a formal trial in Experiment 2 is illustrated in Figure 3. The testing procedure consisted of 80 presentation trials. At the beginning of each testing trial, a black fixation point was presented (500 ms), followed by the presentation of information about the proposer (2000 ms), and then a blank black screen was displayed (200 ms). After the black screen offset, the allocation of money put forward by the virtual proposer was presented on the screen, and the participants pressed one of two buttons to accept or refuse this allocation (the "F" key indicated rejection and the "J" key indicated acceptance). Following the decision, feedback on the amount of money obtained was presented on the screen (1000 ms). The interval between two trials was 700 ms. Before the experiment began, the participants practiced eight trials to familiarize themselves with the process. The results of Experiment 2 were similar to those of Experiment 1: participants always gave a higher acceptance rate to positive moral proposers than to negative moral ones, in both the positive moral priming group and the negative moral priming group. In other words, there was no difference between the positive and negative moral priming groups, that is, group identity had no influence on decision-making. The ineffectiveness of group identity may be due to the fact that the short sentences used in the present study contained moral information that itself invited moral judgment. In order to further and explicitly explore the effect of group identity and to rule out, to some extent, the influence of moral factors, we combined Experiment 1 and Experiment 2 in a joint analysis. Experiment 1 and Experiment 2 Combined Analysis Given that nearly identical experimental designs and procedures were used in Experiments 1 and 2, we combined the two datasets, creating three levels of moral priming (positive moral priming, negative moral priming, and no priming) as an additional between-subjects factor in the analysis, in order to further and explicitly investigate the effect of group identity on decision-making.
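As an illustration of such a joint analysis, the sketch below runs a reduced fairness (within) × priming (between) mixed ANOVA with the pingouin package on synthetic acceptance rates; pingouin's mixed_anova handles one within- and one between-subjects factor, so the full 2 × 3 × 3 model reported below would require more general software.

```python
# Reduced mixed-ANOVA sketch (one within- and one between-subjects
# factor) on synthetic data; not the study's full 2 x 3 x 3 model,
# and not the actual acceptance rates.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(42)
rows = []
for s in range(120):  # 40 synthetic subjects per priming group
    priming = ("positive", "negative", "none")[s % 3]
    for fairness, base in (("fair", 0.9), ("moderate", 0.5), ("extreme", 0.2)):
        rows.append({"subject": s, "priming": priming, "fairness": fairness,
                     "acceptance": float(np.clip(base + rng.normal(0, 0.1), 0, 1))})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="acceptance", within="fairness",
                     subject="subject", between="priming")
print(aov.round(3))
```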
Similar to the results of Experiment 2, a 2 (moral behavior: positive moral vs negative moral) × 3 (fairness of the allocation of money: moderately unfair 7:3, extremely unfair 9:1, fair 5:5, where the number after the colon is the amount obtained by the participant) × 3 (moral priming: positive, negative, no priming) three-way ANOVA revealed a significant main effect of moral behavior, F (1, 117) = 174.32, p < 0.001, η² = 0.60; a significant main effect of the fairness of the allocation of money, where the acceptance rate of the fair allocation was the highest (0.87 ± 0.02), followed by the moderately unfair allocation (0.56 ± 0.02), with the extremely unfair allocation the lowest (0.23 ± 0.02), F (2, 116) = 263.44, p < 0.001, η² = 0.69; and a significant main effect of moral priming, F (2, 116) = 23.94, p < 0.001, η² = 0.29, indicating that the priming of group identity was effective. More importantly, we found a significant three-way interaction between the moral priming, the depicted moral behavior and the fairness of the allocation of money, F (4, 234) = 7.06, p < 0.001, η² = 0.11. Under the moderately unfair and extremely unfair allocation conditions, when the moral behavior of the proposer was positive, the acceptance rates in the positive (moderately unfair: 0.94 ± 0.04, extremely unfair: 0.51 ± 0.07) and negative moral priming groups (moderately unfair: 0.93 ± 0.04, extremely unfair: 0.49 ± 0.07) were significantly higher than those in the no priming group (moderately unfair: 0.51 ± 0.03, F (2, 117) = 57.12, p < 0.001, η² = 0.50; extremely unfair: 0.19 ± 0.05, F (2, 117) = 11.98, p < 0.001, η² = 0.17). However, there was no significant difference among the positive moral priming (1.00 ± 0.17), negative moral priming (0.95 ± 0.17) and no priming groups (1.00 ± 0.01) in the fair allocation, F (2, 117) = 3.00, p = 0.054, η² = 0.05. [Figure 4. The mean acceptance rate from the ANOVAs used to test the differences in fairness of the allocation of money for the moral behavior of the virtual proposer in Experiment 2. The acceptance rate of proposals from positive moral proposers was higher than that of proposals from negative moral proposers in the fair, moderately unfair and extremely unfair allocations of money. Error bars represent 95% confidence intervals.] General Discussion People are generally affected by various factors when they make social decisions. In this study, two experiments were employed to explore which of fairness, morality and group identity influences us the most in economic decision-making. In Experiment 1, we found that no matter how fair the allocation of money was, people's acceptance rate of the proposals of an individual with positive morality was higher than that of the proposals of an individual with negative morality, indicating that the perceived fairness of allocations is affected by the morality of proposers. People are more inclined to rely on morality in economic decision-making. Previous studies have also found that moral judgment can affect people's perception of fairness and their decision-making. Zhan explored the impact of moral judgment and the fairness of proposals on responders' decision-making, reporting that responders were more willing to accept unfair proposals put forward by people with positive morality than those put forward by people with negative morality. Moreover, responders rejected unfair proposals from negative moral people more often than those from positive moral people. 51 However, there was no difference in participants' acceptance rates between male and female proposers.
That is to say, when faced with unfair behaviors that undermine our benefits, we treat males and females equally. The reason may be that the lenience aroused by the warm characteristics of females, which would otherwise make us treat them with more mercy, was offset by the disgust elicited by the unfair behaviors. In Experiment 2, we added moral priming to examine whether group identity has an impact on the effect of morality on the perception of fairness in people's decision-making preferences. The results show that responders always displayed a higher acceptance rate of proposals from people with positive morality, whether they were facing the in-group or the out-group. Although previous research has revealed that group values influence our social decision-making, 37,38 the results of Experiment 2 show that group values do not seem to affect the effect of morality on the perception of fairness, which means that morality has a great influence in economic decision-making. This is consistent with previous research. Zhan, for instance, found that unfair behaviors that violate the fairness norm can be quickly detected, and that this early evaluation process was modulated by social factors such as moral judgment. 51 In other words, people rely more on moral judgment to make decisions, rather than being driven by the perception of fairness or group identity. From Experiment 1 and Experiment 2, we can draw the conclusion that people are more likely to reject fair allocations from people with negative morality than those from people with positive morality. Conversely, they tend to accept moderately and extremely unfair allocations from positive moral individuals more than from negative moral ones, which is in line with the social intuition model. The social intuition model indicates that individuals are driven by emotions when making moral judgments, 52 and moral emotion is even more important than other types of emotion. 53 Immoral behaviors induce negative moral emotions, such as disgust, 54 and moral behaviors generate positive moral emotions, such as gratitude. 55 Negative emotions elicited by immoral behaviors can decrease an individual's acceptance rate of unfair allocations of money. 56,57 Morett also pointed out that the emotion of disgust caused by immoral behavior increases the rejection rate of unfair allocations of money in the UG. 58 On the contrary, individuals hold a trust bias towards positive moral people, which can promote mutual social cooperation. 25 What is more, moral behavior can inspire positive emotions like gratitude. In that case, people will choose economic cooperation even at the expense of sacrificing greater personal economic interests. 55 This behavior actually reflects the mutual compensation between moral reward and economic reward. Studies have shown that people are willing to sacrifice economic interests for moral value in the tradeoff between the morality and the fairness of economic decisions. 24,59 People sacrifice their own interests in order to increase the well-being of others when they accept unfair proposals from positive moral people. 60 Conversely, people tend to sacrifice their own interests to punish evildoers when they reject fair proposals from people with negative morality. 61 Therefore, the moral reward induced by accepting the unfair proposal of a positive moral person compensates for the cost of the unfair allocation of money, while the economic loss caused by rejecting the fair proposal of a negative moral person is compensated by the moral reward gained from punishing that person.
We also found that there was no significant difference between the positive and negative moral priming groups in the acceptance rates of the allocations of positive moral proposers and negative moral proposers. This may be due to the assimilation effect that occurs when people are faced with a positive moral person and develop a sense of moral identity with the positive group. This sense of identity makes people temporarily stand in the same moral position as the target group, so they actively avoid conflict with the group and support it in subsequent situations. 62 Therefore, the effect of moral priming is perhaps concealed by the assimilation effect, and both positively and negatively primed participants are willing to sacrifice economic interests for moral value; that is, participants show a higher acceptance rate of unfair proposals from positive moral proposers. In contrast, when facing a group or individual with negative morality, the historical experience and cognition associated with a negative moral identity make people feel disgust and make it difficult for them to generate moral identity, and thus urge people to reject non-zero allocations, which violates the traditional rational Economic Man hypothesis. Meanwhile, an unfair economic proposal also accelerates this process, 63 producing a low acceptance rate even for the fair proposals of negative moral proposers. In short, the assimilation to positive individuals and the difficulty in generating moral identity with negative individuals may simultaneously override the effect of group identity, leaving no difference between the positive and negative moral priming groups. In order to further explore the influence of group identity on social decision-making, we conducted a joint analysis of Experiment 1 and Experiment 2. Our results, to some extent, support the In-group Preference. Specifically, for the positive moral priming group, the acceptance rates of the proposals of proposers with positive morality were significantly higher than those of the no priming group in the moderately unfair and extremely unfair allocations, which reveals that the acceptance rates of unfair in-group proposals were significantly higher than those of out-group proposals. For the negative moral priming group, there was no significant difference between this group and the no priming group in the acceptance rates of the proposals of proposers with negative morality in the moderately unfair and extremely unfair allocations. The acceptance rates of the negative moral priming group for negative moral proposers were not significantly lower than those of the no priming group; that is, the punishment of in-group members was not strengthened, which is inconsistent with the BSE. However, it can negatively support the In-group Preference, which can be explained from the perspective of the Mere Preference Theory. The reason why there is no significant difference in the acceptance rates of negative moral proposers between the negative moral priming group and the no priming group may also be the difficulty in generating moral identity with negative individuals. People are generally more resistant to immoral behavior, so a psychological defense mechanism may offset the effect of negative moral priming. Although the results seem to support the Mere Preference Theory to some extent, we must emphasize that our results only partially support the In-group Preference.
The In-group Preference Effect includes two components: being kinder, more tolerant and more altruistic towards the in-group, 42 but more suspicious, indifferent and even hostile towards the out-group. 43 That is, the positive and negative moral priming groups should not only show a higher acceptance rate for the in-group than for the out-group, but also a lower acceptance rate for the out-group than a no priming baseline. However, our results showed that the acceptance rates of the positive moral priming group for negative moral proposers and of the negative moral priming group for positive moral proposers were higher than those of the no priming group in the moderately and extremely unfair allocations. We hold that this is because the moral sensitivity of participants in Experiment 2 was increased via the moral priming. In other words, the sensitivity caused by moral priming may have a crucial impact on an individual's decision-making and may even influence the effect of group identity. Although this finding to some extent demonstrates the great influence of morality on decision-making, it may also compromise the accuracy of the experimental manipulation by affecting the group identity of participants. In the present study, we focused on priming group identity within a moral context; in this way, morality cannot be excluded from group identity. Therefore, the impact of morality on group identity should be examined further in future research. In the present study, we used the two-player UG, manipulating the moral quality of the proposer to explore whether morality has an impact on fairness perception, and manipulating the moral quality of the proposer and the responder simultaneously to form group identity and explore whether group identity has an impact on the effect of morality on fairness in decision-making. However, we did not find a comprehensive theory covering all the constructs of our study. We used the social intuition model to explain the impact of morality and the Mere Preference Theory to explain the impact of group identity, but we could not find a general theory explaining the impact of morality and group identity simultaneously. A reasonable explanation covering fairness, morality and group identity could be developed in future studies. Our research has drawn the preliminary conclusion that morality has an impact on people's perception of fairness and group identity. However, the mechanism behind these effects is not yet clear. Therefore, future research could employ precise techniques like event-related potentials or functional magnetic resonance imaging to address this problem. Furthermore, the present study only investigated the effect of morality in the inferior unfair condition (in other words, when the participants were allocated less money); it did not consider the effect in the superior unfair condition (when the participants were allocated more money). Therefore, moral situations with higher ecological validity could be applied to further explore the impact of moral judgment on superior unfair decision-making and on individual differences. Conclusion In the present study, economic and moral contexts were combined in the ultimatum paradigm to explore which of fairness, morality and group identity has the greatest influence on social decision-making. The results revealed that people are inclined to be affected by moral judgment when making decisions.
Moreover, the effect of group identity was present, though it only partially supported the In-group Preference. Group identity did not fully operate because the primed shared experiences included moral information, which in turn influenced the efficacy of the group identity manipulation. To sum up, we can conclude that when making an economic decision, morality has the strongest influence on individuals.
2022-08-11T15:20:34.572Z
2022-08-01T00:00:00.000
{ "year": 2022, "sha1": "c340bca21b3ce214c13a77d883e220d3c36bae07", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=82839", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2a6ca6fd34bd3e3d87c344ce04649bda511ac64c", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Medicine" ] }
16782260
pes2o/s2orc
v3-fos-license
Damage evaluation in graphene underlying atomic layer deposition dielectrics Based on micro-Raman spectroscopy (μRS) and X-ray photoelectron spectroscopy (XPS), we study the structural damage incurred in monolayer (1L) and few-layer (FL) graphene subjected to atomic-layer deposition of HfO2 and Al2O3 at different oxygen plasma power levels. We evaluate the damage level and the influence of the HfO2 thickness on graphene. The results indicate that in the case of Al2O3/graphene, both 1L and FL graphene are strongly damaged under our process conditions. For the case of HfO2/graphene, μRS analysis clearly shows that FL graphene is less disordered than 1L graphene. In addition, the damage level in FL graphene decreases with the number of layers. Moreover, the FL graphene damage decreases with increasing thickness of the HfO2 film. In particular, the bottom layer of twisted bilayer (t-2L) graphene retains the salient features of 1L graphene. Therefore, FL graphene allows for controlling/limiting the degree of defect formation during PE-ALD of HfO2 dielectrics and could be a good starting material for building field effect transistors, sensors, touch screens and solar cells. Besides, the formation of Hf-C bonds may favor the growth of high-quality, uniform-coverage dielectrics. HfO2 could thus be a suitable high-K gate dielectric with a scaling capability down to sub-5 nm for graphene-based transistors. Atomic layer deposition (ALD) 6,7 has attracted much interest for dielectric growth on graphene 8 . ALD allows for controlling the thickness and uniformity of the deposited films with atomic-level precision while avoiding physical damage to the surface by energized atoms. ALD techniques are classified into plasma (oxygen-based) and thermal (water-based) depositions. Very few examples dealing with the former technique have been reported. Only Nayfeh et al. 9 demonstrated a graphene transistor for which the aluminum oxide (Al2O3) gate dielectric was directly deposited on graphene by using a remote plasma-enhanced ALD (PE-ALD) process. Most of the reports are related to the latter method, because plasma is rather aggressive (especially a direct plasma) and generally etches graphene 10 . It is well known that graphene is hydrophobic and inert. More specifically, graphene does not provide reactive nucleation sites for the precursors in thermal ALD 11,12 since it does not display covalent bonds out of the plane. Therefore, growing high-quality, uniform-coverage dielectrics by thermal ALD requires a graphene pretreatment. Various approaches have been proposed: (i) graphene is chemically modified by fluorine 13 , ozone 14 , nitrogen plasma 15,16 , organic molecules 17 or perylene tetracarboxylic acid 18,19 ; (ii) metal particles are deposited on graphene as appropriate nucleation layers 20 ; (iii) self-assembled monolayers are used to template the direct growth of dielectrics 21 ; (iv) graphene islands, serving as a seed layer, are generated by low-power plasma 22 . Some of these approaches are complicated and incompatible with the existing mainstream integrated circuit technology. More importantly, these approaches may cause undesirable side effects, such as inducing defects, doping graphene, leaving seed layers, increasing the dielectric thickness, and degrading the dielectric properties. Two previous articles pointed out that graphene is possibly degraded by the pretreatment 23,24 . Likewise, the ozone pretreatment has proved to be responsible for significant damage to graphene in high-temperature thermal ALD 25 .
In this work, we use mild plasma conditions to directly grow hafnium oxide (HfO2) and Al2O3 dielectrics on graphene by PE-ALD. In our process, the graphene samples are placed away from the plasma source, outside of the glow discharge. The remote oxygen plasma, with low ion bombardment, avoids fast etching of graphene, while the reaction between graphene and the precursor still guarantees physical/chemical modification of graphene. Based on micro-Raman spectroscopy (μRS) and X-ray photoelectron spectroscopy (XPS), we study the structural damage induced in 1L graphene underlying HfO2 and Al2O3 at different oxygen plasma power levels. We evaluate the damage levels in AB-stacked bilayer (AB-2L), twisted bilayer (t-2L) and trilayer (3L) graphene (hereafter collectively referred to as FL) underlying HfO2 and Al2O3 for a fixed oxygen plasma power. We also investigate the influence of the HfO2 thickness on graphene with various layer numbers. The results indicate that in the case of Al2O3/graphene, both 1L and FL graphene are strongly damaged under the present process conditions. In the case of HfO2/graphene, μRS analysis clearly shows that FL graphene presents much less disorder than 1L graphene. Moreover, the FL graphene damage decreases with the number of layers. Our results also reveal an inverse dependence of the FL graphene damage on the thickness of the HfO2 film. FL graphene allows for controlling/limiting the defect formation during the PE-ALD HfO2 process. Therefore, it could be a good starting material for applications such as graphene-based transistors and sensing devices 26,27 , since wafer-scale homogeneous FL graphene can now be synthesized by chemical vapor deposition (CVD) 28 . Results Graphene morphology observations by SEM, AFM, optical microscopy and TEM. Figure 1a-c show scanning electron microscopy images of atmospheric pressure chemical vapor deposition (APCVD) graphene on copper foils (see Methods and reference 29 for more information about the graphene growth and transfer conditions). The different contrasts in the images correspond to different graphene layer numbers. It can be seen that the 1L, 2L, and 3L graphene domains are hexagonal. A hexagon is the typical shape of graphene domains grown by APCVD 30 . In another article, the same group explored the shape variations versus the APCVD growth conditions. They explained the shape modulation by the competition between atom diffusion along graphene domain edges or corners and surface diffusion processes 31 . In addition, Fig. 1d presents an atomic force microscopy image of graphene transferred onto SiO2, in which 1L, 2L and 3L regions can be clearly seen. Figure 1e plots the profile of stacked graphene flakes along the A-B line, confirming that each graphene layer has a thickness of about 0.3 nm. Figure 2a illustrates an optical microscopy image of as-transferred graphene on the SiO2/Si substrate, revealing that the graphene film is composed of isolated and contiguous hexagonal flakes of various layer numbers. Most of the hexagons are 1L. High-angle t-2L, AB-2L and 3L graphene hexagons are found in certain regions. As shown in Fig. 2b, graphene underlying the HfO2 dielectric film is still clearly visible. Figure 2c shows a cross-sectional high-angle annular dark-field scanning transmission electron microscopy (HAADF-STEM/Z-contrast) image of the HfO2/graphene/SiO2/Si stack, indicating the uniform covering of graphene with the HfO2 film.
No clustering or pinholes in the HfO2 film are observed. Graphene structural damage evaluation by μRS. μRS is a nondestructive method and is employed here to assess the structural damage in graphene underlying the dielectric films. It is worth noting that a laser wavelength of 514 nm was used in all the measurements except where otherwise specified. Figure 3 shows the Raman spectra of 1L graphene after PE-ALD of Al2O3 and HfO2, at an oxygen plasma power of 300 W. The spectra are offset for clarity. For the sake of comparison, the Raman spectrum of as-transferred 1L graphene is also shown at the bottom of the figure. According to the statistical data from many measured points, the peak at ~1588 cm−1 originates from the G mode of graphene. The non-perturbed G mode is usually around 1580 cm−1 (see the work of Ferrari et al. 32 ); it is a first-order Raman peak related to the E2g optical phonons at the Brillouin zone center. The slight upshift is most probably due to residual strain originating from the copper substrate or/and unintentional doping. The doping possibly comes from traps in the SiO2 substrate, from insufficient rinsing after copper etching, from PMMA residues left by the transfer step, from moisture in air, and from other similar contamination sources 33,34,35 . The other peak is the 2D mode at ~2687 cm−1, which arises from a two-phonon second-order Raman process. The integrated intensity ratio (I2D/IG) is 1.9, and the full width at half maximum (FWHM) of the 2D peak is 31 cm−1. These figures of merit confirm the presence of as-transferred 1L graphene at the probed locations. At the exact same positions, two new defect-activated peaks appear after PE-ALD of HfO2 and of Al2O3: the D peak located at ~1356 cm−1 and the D' peak (~1620 cm−1) at the right shoulder of the G mode. The D peak in sp2 graphene is activated by a double-resonance Raman process in the presence of disorder and defects and is related to the breathing modes of carbon atoms in the vicinity of the K point of the Brillouin zone (see the work of Ferrari et al. 32 ). The D peak can only be observed when the crystal symmetry is broken by point defects or at the edges of graphene 36 . The D' mode corresponds to an independent defect-assisted intravalley process in graphene. It could be due to the presence of sp3 bonding. For as-transferred 1L graphene, the intensity of the D peak is weak and can be ignored, as shown in the bottom spectrum. Since the examined hexagonal flake is large enough (more than 15 μm from vertex to vertex) to allow the measurement inside the crystalline region (in the center of the hexagon) with a Raman laser spot diameter of about 1 μm, the boundaries of the hexagons do not contribute to the spectrum. After the dielectric depositions, the intensity of the D peaks becomes very strong. This indicates that the dielectric depositions break the symmetry of the graphene lattice and induce structural defects in graphene. Moreover, the positions of the G peaks are slightly shifted, the FWHM of the 2D peaks are broadened and the D' peaks are separated from the G peaks. These characteristics indicate that graphene is disordered, but not completely etched, and is thus still optically visible. We therefore use the area ratio AD/AG between the integrated intensities of the D and G peaks to quantify the amount of disorder (i.e. AD/AG increases with increasing amount of disorder in the low defect concentration range). The area ratio is preferred over the individual intensities since it accounts for variations in peak position, intensity and linewidth 37 .
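As an illustration of how such area ratios can be extracted, the sketch below fits two Lorentzian lineshapes to a synthetic spectrum around the D and G positions; the peak parameters and noise level are placeholders, not fits to the measured data.

```python
# Sketch of extracting AD/AG by fitting area-normalized Lorentzians
# to a (synthetic) Raman spectrum around the D and G peaks.
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, area, center, fwhm):
    # Integrates to `area` over the real line.
    gamma = fwhm / 2.0
    return area * gamma / (np.pi * ((x - center) ** 2 + gamma ** 2))

def two_peaks(x, a_d, c_d, w_d, a_g, c_g, w_g):
    return lorentzian(x, a_d, c_d, w_d) + lorentzian(x, a_g, c_g, w_g)

# Synthetic placeholder spectrum: D near 1356 cm^-1, G near 1588 cm^-1.
shift = np.linspace(1250, 1750, 500)
intensity = two_peaks(shift, 80, 1356, 40, 100, 1588, 20)
intensity += np.random.default_rng(0).normal(0, 0.2, shift.size)

p0 = [50, 1356, 30, 50, 1588, 20]  # initial guesses: areas, centers, FWHMs
popt, _ = curve_fit(two_peaks, shift, intensity, p0=p0)
print(f"AD/AG = {popt[0] / popt[3]:.2f}")
```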
To investigate the damage in graphene subjected to different oxygen plasma power levels, we reduced the oxygen plasma power from 300 W to 200 W and 150 W for PE-ALD of HfO2 and of Al2O3, respectively. The corresponding Raman spectra of 1L graphene underlying the two dielectrics are shown in Fig. 4a,b. The D and D' peaks persist even when the oxygen plasma power is reduced to 150 W. The AD/AG ratios are very similar at the different power levels (300 and 200 W for HfO2, 300 and 150 W for Al2O3), implying that the amount of disorder generated in graphene is not correlated with the plasma power level in this range of powers. For ALD of Al2O3 on graphene, Lim et al. used nitrogen plasma to pretreat graphene 38 . They investigated the dependence of the number of defects on the nitrogen plasma power level (30, 60, and 100 W); their results show that the number of defects increases with the nitrogen plasma power. Our results suggest that for oxygen plasma powers above 150 W, the number of defects in 1L graphene has reached saturation. The layer number is first identified by the color contrast of graphene under optical microscopy, followed by μRS measurements. Figure 5a-d shows the Raman spectra of 3L, AB-2L, t-2L and 1L graphene under HfO2 and Al2O3 with a thickness of about 5 nm, respectively. The Raman spectra of as-transferred graphene with various layer numbers (black curves) are also shown at the bottom of each panel as references. In the case of Al2O3/graphene (red curves), all the AD/AG ratios are large and all the D' peaks clearly separate from the G peaks. These features indicate that all kinds of graphene are damaged during PE-ALD of Al2O3. In the case of HfO2/3L graphene, the intensity of the D peak becomes very weak and the D' peak almost disappears (blue curve in Fig. 5a). The AD/AG ratios of AB-2L and t-2L graphene are much smaller (blue curves in Fig. 5b,c). However, the spectrum of HfO2/1L graphene (blue curve in Fig. 5d) is similar to that of the Al2O3/graphene case. These results point out that in PE-ALD of HfO2, FL graphene is less damaged than 1L graphene and that the damage level decreases with the number of layers. We also investigate the influence of the HfO2 thickness on graphene. Figure 6a-d shows the Raman spectra for 1L, AB-2L, t-2L and 3L graphene under HfO2 of different thicknesses (0, 0.5, 1 and 5 nm), respectively. It can be seen from Fig. 6a that for all HfO2 thicknesses, the AD/AG ratio of 1L graphene dramatically increases compared with that of as-transferred graphene. However, the changes in the Raman spectra of FL graphene are less drastic. Surprisingly, all the AD/AG ratios decrease with increasing thickness of the HfO2 film (see Fig. 6e); the reason is discussed later. Finally, to identify whether the HfO2 film or the hypothesized Hf-C exhibits any peaks in the Raman spectra, μRS analysis is performed using high-resolution (1800 gr/mm) gratings and three excitation laser energies in the visible range, namely 514.5 nm (2.41 eV), 488 nm (2.54 eV) and 633 nm (1.92 eV). We do not find any peak related to HfO2 or Hf-C in the measurement range from 0 to 4000 cm−1. This implies that the HfO2 film is amorphous due to the low growth temperature. X-ray photoelectron spectroscopy measurements.
We carry out ex situ XPS measurements to evaluate the impact of the PE-ALD dielectrics on graphene. Four samples are sputtered with an Ar+ gun to perform depth profiles: 5.5-nm-thick Al2O3 and 5.9-nm-thick HfO2 are deposited either on silicon (as references) or on graphene/SiO2/silicon stacks (hereafter referred to as Al2O3/silicon, HfO2/silicon, Al2O3/graphene, and HfO2/graphene, respectively). Core level spectra are recorded for carbon (C 1s), oxygen (O 1s), hafnium (Hf 4f), aluminum (Al 2p), and silicon (Si 2p). The elemental composition of the dielectrics, obtained from 20-nm-thick reference layers, can be estimated from the ratios of the integrated intensities of the XPS spectra: [O/Hf] = 2.15 ± 0.1 and [O/Al] = 1.47 ± 0.05. These results testify to the good quality of the dielectrics. We now focus on the C 1s atomic concentration profile and spectra of each sample. Figure 7a,b illustrate the depth profiles of the Al2O3/silicon and HfO2/silicon samples, respectively. In both cases, a small amount of carbon (2% on average) is found in the profiles (except for a ~15% concentration corresponding to adventitious carbon on top of the dielectric layers). Moreover, a slight increase of the carbon concentration is observed when approaching the interface between the dielectric and silicon, most likely originating from residual contamination on silicon before PE-ALD. Figure 7c,d display the depth profiles of the Al2O3/graphene and HfO2/graphene samples, respectively. We can clearly identify the presence of graphene between the dielectric and the SiO2/silicon substrate. Figure 7e,f exhibit the C 1s spectra of the Al2O3/graphene and HfO2/graphene samples at the maximum of the carbon profiles, respectively. The main peak at 284.5 eV in both spectra corresponds to graphene. Strikingly, in contrast to the Al2O3/graphene sample, the HfO2/graphene sample displays an additional peak at 281.5 eV. This peak can be attributed to the formation of the metallic carbide Hf-C. Consequently, the HfO2/graphene profile in Fig. 7d can be fitted by its two components: C in graphene and C in Hf-C. At the interface, the Hf-C concentration reaches 2% of the total composition (see the cyan and magenta profiles in Fig. 7d, corresponding to C in graphene and C in Hf-C, respectively). However, it was reported by Engelhard et al. 39 that Ar+ sputtering of ALD HfO2 induces the formation of Hf-C (at an ion-gun energy of 2000 eV), amounting to 1% of the total composition. To ascertain that the observed Hf-C peak is not related to sputtering (the Ar+ gun is deliberately operated at the very low energy of 200 eV in the hope of preventing Hf-C formation), we have performed an additional experiment: 1L graphene is transferred onto an HfO2/SiO2/Si substrate and then subjected to sputtering under the same conditions as before. In the corresponding C 1s spectrum, the carbide-related peak occurs as well, most likely due to intermixing between Hf and C during the erosion, resulting in Hf-C bonding. This means that, from the ex situ XPS analysis alone, we cannot conclude whether Hf-C forms during the PE-ALD process, during the erosion process, or both. Nonetheless, as opposed to Al-C, this illustrates how easily Hf-C is formed, since the Ar+ sputtering is operated at the very low energy of 200 eV. To further elucidate where the Hf-C originates from would require an ALD apparatus fitted with an in situ XPS analyzer.
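For completeness, composition estimates of the kind quoted above (e.g. [O/Hf]) are typically obtained by normalizing integrated core-level areas with relative sensitivity factors (RSFs). The sketch below uses placeholder areas and illustrative RSF values, not the measured ones; in practice the instrument's own RSF library would be used.

```python
# Sketch of an atomic-ratio estimate from XPS peak areas using
# relative sensitivity factors; all numbers are placeholders.
def atomic_ratio(i_a, rsf_a, i_b, rsf_b):
    """Ratio of atomic concentrations A/B from integrated areas."""
    return (i_a / rsf_a) / (i_b / rsf_b)

# Hypothetical integrated areas (counts * eV) and illustrative RSFs;
# substitute the values tabulated for your spectrometer.
i_o1s, rsf_o1s = 120_000.0, 2.93
i_hf4f, rsf_hf4f = 210_000.0, 9.0

print(f"[O/Hf] = {atomic_ratio(i_o1s, rsf_o1s, i_hf4f, rsf_hf4f):.2f}")
```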
Discussion Although the optical image of the Al2O3/graphene/SiO2/Si stack (see Fig. 1 in the Supplementary Information) looks similar to that of the HfO2/graphene/SiO2/Si stack (see Fig. 2), all the graphene samples, regardless of their thickness, are significantly damaged by the PE-ALD Al2O3 process (see the Raman spectra of Fig. 5). This may be linked to the fact that the TMA-Al precursor itself does not react with graphene at temperatures lower than 400 °C 40,14 . Al carbide is not formed during the initial TMA-Al precursor pulse, which is confirmed by the XPS data in Fig. 7e within the detection limits of the technique. This results in delayed Al2O3 nucleation on graphene. Only when graphene is subjected to the first few oxidant precursor pulses (which is analogous to a pretreatment by oxygen plasma) does the Al2O3 film start to nucleate and then grow. Unfortunately, 1L graphene, and even FL graphene, is degraded during this pretreatment. On the contrary, the XPS results have demonstrated that Hf-C bonds are quite easily formed, whatever the origin of the bonding. We hypothesize that, during the initial TDMA-Hf precursor pulse, a chemical reaction occurs between the Hf atoms and the C atoms constituting graphene to form Hf-C (its formation temperature is 25 °C) 39 . Hf in Hf-C bonds may act as a uniform and active template for the subsequent HfO2 growth. Subsequently, the coverage of the first HfO2 layer protects the underlying graphene from damage during the following deposition cycles. This may also explain why HfO2, rather than Al2O3, can be directly grown on graphene without out-of-plane covalent functional groups in a low-temperature thermal ALD process 41,42 . On the other hand, the origin of the inverse dependence of the AD/AG ratio on the number of graphene layers is not clear. It may be due to the interlayer interactions between the HfO2 layer and graphene, a direct consequence of the HfO2 deposition process, or both. Previous works on the impact of the graphene thickness on its physical properties suggest that graphene's rigidity may increase with an increasing number of layers 43 . More precisely, when 1L graphene is deposited on top of the SiO2 substrate, it conforms more easily to the surface morphology of the underlying substrate and is thereby more prone to deformation than FL graphene. Oxygen plasma modification of 2L and FL graphene has been studied in the literature. Calculations 44 predict that oxidized 2L graphene, unlike 1L, still retains its intrinsic properties even if the oxygen density is as high as 50%. Electrical measurements in the work of Felten et al. 45 show that only the top layer of 2L graphene is chemically modified, while the bottom layer maintains its structural integrity. Moreover, it was reported that the chemical modification of FL graphene occurs layer by layer 46,47 . The Raman spectra of HfO2/graphene (blue curves in Fig. 5) show that the AD/AG ratio of 1L graphene is much larger than that of FL graphene and that the D' peak is separated from the G peak in the Raman spectrum of 1L graphene. The formation of Hf-C bonds implies chemical adsorption of Hf atoms on graphene and conversion of the bonding from sp2 to sp3, with a resulting increase of the D peak and separation of the D' peak. In addition, the oxygen plasma pulse is very aggressive toward 1L graphene 48 .
Besides, the poor rigidity of 1L graphene results in fracture and wrinkling 49 . All these effects would induce damage in the graphene lattice, e.g. vacancies, dislocations and dangling bonds. In sharp contrast to 1L, the AD/AG ratios of FL graphene significantly decrease. Hf-C bonds are only present in the top layer due to preferential chemisorption of Hf atoms. Since the plasma pulse is less reactive toward FL graphene than toward 1L 50 and graphene becomes more rigid with the number of layers, this may lead to less damage in FL graphene. The I2D/IG ratio and the FWHM of the 2D peak can be used to easily distinguish 1L and t-2L graphene. More specifically, pristine 1L graphene has an I2D/IG ratio of about 2 and a FWHM of 35 cm−1, while the I2D/IG ratio and the FWHM of t-2L graphene are about 6 and 28 cm−1, respectively 51 . Our results reveal another interesting behavior: after deposition of a 5-nm-thick HfO2 film, t-2L graphene presents "1L-like" features. In other words, when the thickness of the HfO2 film increases from 0 to 5 nm, the I2D/IG ratio decreases from 5.83 to 1.97 and the 2D peak FWHM increases from 26 to 38 cm−1 (see Fig. 6c). We emphasize here that an I2D/IG ratio of about 2 and a 2D peak FWHM of about 35 cm−1 are the features of 1L graphene 32 , while t-2L graphene has an I2D/IG ratio of about 6 and a 2D peak FWHM of about 28 cm−1 (see ref. 51). We suggest that the C atoms in the top layer of t-2L intermix with Hf and O atoms to form a complex amorphous interfacial layer. This is supported by the cross-sectional HAADF-STEM image shown in the inset of Fig. 2c and is consistent with a previous work 52 . In Fig. 6c, we can see that the spectrum of t-2L gradually becomes 1L-like upon increasing the HfO2 thickness. We make the assumption that, below 5 nm, the interfacial layer is not completely formed. In contrast, at 5 nm, the top graphene layer is entirely incorporated into the interfacial layer, leaving a stack made up of HfO2/amorphous interlayer/1L-like graphene. μRS cannot detect this interfacial layer because it is amorphous. On the other hand, Felten et al. 45 observed a similar behavior with AB-2L under different plasma conditions. Indeed, they show by electrical measurements that after a long, mild plasma treatment, 1L becomes an insulator, while AB-2L graphene still retains its ambipolar property with a relatively high charge mobility. They attribute these facts to the chemical modification of the top graphene layer and decoupling between the top and bottom layers. However, we did not observe the same behavior for AB-2L and 3L. As shown in Fig. 6e, the AD/AG ratios of FL graphene decrease with increasing HfO2 thickness, which is also consistent with the above discussion. More specifically, the increasing thickness of the HfO2/graphene stack makes it more difficult for it to conform to the surface morphology of the underlying substrate, and as a consequence graphene is less prone to deformation. It is worth emphasizing that FL graphene may be a prospective material for applications such as transistors and sensors. For instance, it has been reported that the sheet resistance of 2L graphene is smaller than that of 1L graphene and that the low-frequency 1/f noise in a transistor (30-nm gate length) made from 2L graphene is strongly suppressed compared with 1L graphene transistors 53,54 .
Conclusion We have investigated the structural damage in graphene underlying dielectrics (HfO2 and Al2O3) deposited by remote PE-ALD. Our results show that FL graphene is less damaged than 1L graphene; the damage level of FL graphene decreases not only with the number of graphene layers but also with the thickness of HfO2. Interestingly, the Raman spectrum of t-2L graphene underlying HfO2 presents the features of as-transferred 1L graphene. XPS measurements indicate that Hf-C is easily formed. After coverage by the first HfO2 layer, the underlying graphene gains additional protection. The oxygen plasma pulse in PE-ALD is less reactive toward FL graphene than toward 1L, and graphene rigidity increases with the number of graphene layers; moreover, the rigidity of the stack also increases with the thickness of HfO2. These may be the reasons why FL graphene is less damaged in PE-ALD of HfO2. Therefore, FL graphene, and more particularly t-2L graphene, allows for controlling/limiting the defect formation during the PE-ALD HfO2 process and might be a prospective material for applications such as graphene-based transistors and sensing devices. It appears that the thickness of PE-ALD HfO2 can be scaled down to 5 nm. Our results open up direct perspectives for FL graphene and HfO2 gate dielectrics in graphene-based transistor applications. Methods APCVD graphene conditions. Graphene is synthesized by APCVD on copper foils (Alfa Aesar #13382) with dilute methane (5% in argon) as the hydrocarbon precursor. The samples are grown at 1000 °C for 1 h under flows of 500 sccm of argon, 20 sccm of hydrogen, and 0.2 sccm of dilute methane. Graphene is then transferred onto 300-nm-thick SiO2/Si substrates (to easily observe graphene with a conventional white-light microscope) by the usual PMMA-based method, after etching the copper foil in ammonium persulfate. Process conditions for PE-ALD of dielectrics on graphene. HfO2 and Al2O3 films are deposited on graphene/SiO2/Si stacks by PE-ALD (Fiji F200 from Ultratech/Cambridge NanoTech Inc., MA) at 250 °C. The plasma source, inductively coupled at 13.56 MHz, is located far away from the samples. It is very important to note that the distance between the plasma source and the sample location is larger than 40 cm, since the type and concentration of the reactive species (electrons, ions, and radicals) strongly depend on this distance. Outside of the glow discharge, only long-lifetime radicals are present, while ions and electrons recombine quickly. In order to remove the native stress and polymethyl methacrylate (PMMA) residues from growth and transfer, the graphene samples on the SiO2/Si substrates are first annealed at 250 °C for 2 h in the deposition chamber under a pressure of 80 mTorr of argon. During both dielectric film depositions, the pulse duration of the oxygen plasma (oxidant precursor) is 10 s for each cycle. The flows of the oxygen plasma and the argon carrier gas are 20 and 200 sccm, respectively. The metallic and oxidant precursor pulses are separated by a short argon purge of 5 s. To obtain uniform dielectric films and avoid graphene etching, the metallic precursors are pulsed first on the graphene surface. The metallic precursors can adsorb on or react with carbon and form the related metal oxides following the first oxidant precursor pulse. In contrast, if the first pulse were a single cycle of oxygen plasma, the graphene surface would possibly be damaged or non-uniform dielectric films would be formed.
The other parameters related to the metallic precursors and the final thicknesses of both dielectric films are listed in Table 1. The thickness of the dielectric films is measured by in situ ellipsometry on reference films directly deposited on silicon substrates. The composition of the dielectric films is characterized by XPS. In order to investigate the damage level of graphene at different oxygen plasma power levels, HfO2 and Al2O3 films are deposited on graphene/SiO2/Si stacks with a nominal power of 300 W and with reduced oxygen plasma powers of 200 and 150 W, respectively. Raman spectroscopy system. A LabRam HR 800 confocal laser system from Horiba Jobin Yvon was used for the acquisition of the Raman spectra. The measurements are performed at room temperature with a laser wavelength λ = 514 nm in backscattering geometry. The laser beam is focused on the center of the hexagons and a 100× objective (NA = 0.95) is used to collect the signal. The incident power is kept below 1 mW. Low-resolution (150 gr/mm) and high-resolution (1800 gr/mm) gratings are used for the measurements. XPS characterization. A ThermoFisher Scientific K-alpha spectrometer is employed. It is equipped with a monochromatized Al Kα1,2 X-ray source and a hemispherical deflector analyzer. The spectra are recorded at constant pass energy (150 eV for depth profiling and survey scans; 30 eV for high-resolution spectra). A flood gun (low-energy electrons and Ar ions) is used during all the measurements. During the sputtering, the Ar+ ion gun is operated at a low energy (200 eV), with an erosion time of 5 s per cycle, and the analysis is done in snapshot mode. [Table 1. Process conditions in PE-ALD and thicknesses of the two dielectric films; only the final-thickness row survives extraction: HfO2, 5.9 nm; Al2O3, 5.5 nm.] The XPS data are treated with the Avantage software. High-resolution spectra are fitted by Gaussian-Lorentzian lineshapes with an Avantage "smart" background (i.e. a Shirley background in most cases, or a linear background in case the lineshape decreases with increasing BE).
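The Shirley background mentioned above is usually computed iteratively. The following is a minimal sketch of that standard iteration, not the Avantage implementation; it assumes binding energy increases along the array and uses a synthetic peak as placeholder data.

```python
# Iterative Shirley background sketch for XPS peak fitting;
# synthetic placeholder spectrum, not the measured data.
import numpy as np

def shirley_background(energy, intensity, n_iter=20):
    """Shirley background between the spectrum endpoints; assumes the
    binding-energy axis is ascending and the step rises toward high BE."""
    i_lo, i_hi = float(intensity[0]), float(intensity[-1])
    bg = np.linspace(i_lo, i_hi, intensity.size)  # initial guess
    for _ in range(n_iter):
        signal = np.clip(intensity - bg, 0.0, None)
        # Running background-subtracted peak area from the low-BE end
        # (trapezoid rule), which sets the local step height.
        area = np.concatenate(([0.0], np.cumsum(
            0.5 * (signal[1:] + signal[:-1]) * np.diff(energy))))
        bg = i_lo + (i_hi - i_lo) * area / area[-1]
    return bg

# Synthetic C 1s-like peak sitting on a step background.
e = np.linspace(280, 290, 500)
spec = 100 * np.exp(-((e - 284.5) / 0.6) ** 2) + 10 + 5 * (e > 284.5)
corrected = spec - shirley_background(e, spec)
```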
Convolutional neuronal networks combined with X-ray phase-contrast imaging for a fast and observer-independent discrimination of cartilage and liver disease stages

We applied transfer learning using convolutional neuronal networks to high-resolution X-ray phase-contrast computed tomography datasets and tested the potential of the systems to accurately classify computed tomography images of different stages of two diseases, i.e. osteoarthritis and liver fibrosis. The purpose is to identify a time-effective and observer-independent methodology to identify pathological conditions. Propagation-based X-ray phase-contrast imaging was used with polychromatic X-rays to obtain a 3D visualization of 4 human cartilage plugs and 6 rat liver samples with a voxel size of 0.7 × 0.7 × 0.7 µm³ and 2.2 × 2.2 × 2.2 µm³, respectively. Images with a size of 224 × 224 pixels are used to train three pre-trained convolutional neuronal networks for data classification: the VGG16, the Inception V3, and the Xception networks. We evaluated the performance of the three systems in terms of classification accuracy and studied the effect of varying the number of inputs, training images, and iterations. The VGG16 network provides the highest classification accuracy when the training and the validation-test of the network are performed using data from the same samples, for both the cartilage (99.8%) and the liver (95.5%) datasets. The Inception V3 and Xception networks achieve an accuracy of 84.7% (43.1%) and of 72.6% (53.7%), respectively, for the cartilage (liver) images. By using data from different samples for the training and validation-test processes, the Xception network provided the highest test accuracy for the cartilage dataset (75.7%), while for the liver dataset the VGG16 network gave the best results (75.4%). By using convolutional neuronal networks we show that it is possible to classify large datasets of biomedical images in less than 25 min on an 8-CPU machine, providing a precise, robust, fast, and observer-independent method for the discrimination/classification of different stages of osteoarthritis and liver diseases.

Methods

Artificial neural networks. Artificial Neuronal Networks (ANNs) are computing systems inspired by the structure and functioning of biological neuronal networks, which "learn" how to perform tasks from given input examples, without being specifically programmed for the task. ANNs are a sub-category of the more general machine learning algorithms 25 . In this work, the convolutional neuronal network (CNN) was used, which is a special kind of ANN having four different types of layers: an input, a convolutional, a pooling, and a fully connected layer. The input layer takes the image that is given to the network to be analyzed. The convolutional layer has a kernel with trainable weights and with a size that can be varied: usual sizes are 3 × 3, 5 × 5, or 7 × 7 pixels. The input image is convolved with the kernel, which thus acts as a filter. The pooling layer performs downsampling of the input, with parameters that depend on the kernel size and stride length (step of displacement after convolution). In this study, the so-called max-pooling layer is used: it takes the maximum value within the kernel as an input for the next layer. At the end of the network, classification and activation functions are applied in the fully connected layer, where all artificial neurons are connected 26 . In the CNN language, one epoch is one iteration of the network.
An epoch consists of two parts: the forward processing of images to classify them, and the backpropagation to change/train the weights and converge towards an improved classification. In the forward processing, the image data go from the input layer to the classification layer; there, a function calculates the error between the predicted classification and the a priori classification information (error function) by considering the effect of every weight. To minimize the error, an optimizer 27 is used, which adjusts the weights according to the learning rate set by the user (in the range [0,1]). Depending on the number of epochs and the learning rate, the CNN converges and classifies the data; therefore, the learning rate is set to get the fastest convergence (as defined at the end of this section) and the best classification. In this study, we used the so-called transfer learning method, which works with CNN weights pre-trained on large image datasets 28 . In our case, the weights were trained on the ImageNet dataset 29 . A custom-designed network was implemented: this was achieved by removing the classification layer of the pre-trained network and by adding a fully connected layer and another classification layer with two or four outputs, depending on the dataset. The pre-trained CNN acts as a feature extractor for the self-designed part of the network. In the backpropagation, the weights are adjusted in the self-designed network, whereas the weights in the pre-trained network are fixed. We tested three pre-trained CNNs for our study: the VGG16 30 , the Inception V3 31 , and the Xception network 32 . We selected these specific networks because of their high performance in the image classification competition, i.e. the Large Scale Visual Recognition Challenge (LSVRC) organized by the ImageNet project 33 .

The VGG16 CNN is a network with 16 convolutional layers with a kernel size of 3 × 3 pixels, two fully connected layers, and a classification layer with 1000 classification outputs. The size of the input images is 224 × 224 pixels (for RGB images) 30 . The VGG16 is a computationally heavy network, with long training times and a large number of weights (for a total size of 533 MB). The Inception network was introduced by Szegedy et al. 34 . The idea is to construct the network "wider" instead of "deeper". To make a network "deeper", layers are added one behind the other. For the Inception network, layers are instead added and arranged in a parallel configuration; the network thus becomes "wider" and the layers also work in parallel. In this way, the size of the pre-trained weights is reduced to 96 MB 35 . The Xception network architecture builds on depth-wise separable convolutional layers. The weight size of this network is 91 MB 32 . The analysis was performed on a Fujitsu workstation with 8 Intel Xeon CPU processors with 4 cores each at 2.6 GHz. The graphics card on which the calculations were carried out is an NVIDIA Quadro P1000 with 4 GB memory. The entire code is written in Python based on the Keras library 36 , a deep learning library in Python interfacing tensorflow-GPU 37 as a backend.

Transfer learning process. We used the transfer learning of the CNNs, which is achieved through a three-step process. (1) The network is divided into two sections; the first section is trained first on a large dataset with annotated images, which is not related to the later task.
(2) All the parameters in the first section are fixed, and a second training is performed using a dataset related to the classification task to train the second section of the network. In this way, the algorithm learns how to classify the images.

Fine tuning of the networks. For the fine tuning of the networks, we tried to improve them in two ways: (1) by studying the influence of parameters such as the optimizer, the learning rate, or the number of epochs; (2) by adding an additional network element to the pre-trained network, such as a fully connected layer, a drop-out layer, etc. Concerning the first method, we kept the optimizer algorithm (RMSProp 38 ) constant in all cases, as well as the number of epochs. The number of epochs was chosen to assure convergence of the CNN. In this study, we defined a convergence criterion based on a threshold (±0.5% change between two consecutive epochs of the validation data). The learning rate was always adjusted to push the network to its best performance and to avoid overfitting. Using the second method, the best results were obtained by adding one fully connected layer.

Phase contrast imaging and dataset description. In X-ray PCI the image contrast derives from the perturbations of the X-ray wave-front induced by the presence of an object along its propagation path. This contrast mechanism has been proven to lead to superior image contrast 3-5 with respect to standard X-ray attenuation, especially in the case of soft tissues. In this work, we applied X-ray PCI to investigate cartilage and liver biological specimens.

Cartilage samples and dataset. For the cartilage evaluation, human cadaveric patellae were used. According to the regulations for experiments involving cadaveric samples, the need for approval of this study was waived by the ethics committee of the Ludwig-Maximilians-University, Munich, Germany. However, the required informed consent was obtained from the legally authorized representative/next of kin of the deceased prior to the extraction of the patella. Samples were extracted in compliance with the relevant guidelines and regulations by the forensic medicine department of the Ludwig-Maximilians-University, including testing for infectious diseases. Four cartilage samples (plugs), cylinders of 7 mm in diameter, were harvested from a human patella (67-year-old woman) within 24 h of death. The plugs were divided into two groups based on the OARSI assessment system 39 by two experienced pathologists: the control group with healthy cartilage samples and the OA degraded cartilage group. The samples were imaged at the Biomedical beamline (ID17) of the European Synchrotron (ESRF, France) by using X-ray propagation-based PCI micro-CT 40 with a polychromatic and filtered X-ray beam with a peak energy around 40 keV 41 . The detection system consisted of a PCO edge 5.5 sCMOS camera 42 coupled with a 10× optics and a 19 µm thick GGG scintillator screen, leading to a final pixel size of 0.7 × 0.7 µm². From the reconstructed CT volumes, sagittal CT images of the transitional and mid zones of the cartilage are extracted (1024 × 1024 pixels), downscaled to 224 × 224 pixels in order to fit the CNN input requirements, and finally normalized to values in the [0-1] range. The analysis was performed on images presenting a voxel size of 0.7 × 0.7 × 0.7 µm³. Examples are shown in Fig. 1A,B: (1A) is a sagittal PCI CT image of healthy cartilage, whereas (1B) is the sagittal slice of an osteoarthritic cartilage sample with a small crack in the tissue visible on the right side.
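To make the transfer-learning setup above concrete, the following is a minimal Keras sketch of the custom-designed network (a frozen ImageNet-pretrained VGG16 base, one added fully connected layer, and a small classification layer). The head width of 256 units, the loss function, and the replication of grayscale slices into three channels are illustrative assumptions, not details taken from the paper:

import tensorflow as tf
from tensorflow.keras import layers, models, optimizers

num_classes = 2  # two or four outputs, depending on the dataset

# Pre-trained feature extractor without its 1000-class ImageNet head
base = tf.keras.applications.VGG16(weights="imagenet",
                                   include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False  # weights of the pre-trained section stay fixed

# Self-designed part: one fully connected layer plus a classifier
model = models.Sequential([
    base,
    layers.Flatten(),
    layers.Dense(256, activation="relu"),            # added FC layer (width is a guess)
    layers.Dense(num_classes, activation="softmax"), # added classification layer
])

model.compile(optimizer=optimizers.RMSprop(learning_rate=7e-7),  # rate from the cartilage run
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Grayscale CT slices would be replicated across three channels to match the
# VGG16 RGB input, e.g. x_rgb = np.repeat(x[..., None], 3, axis=-1), and then:
# model.fit(train_images, train_labels, epochs=25,
#           validation_data=(val_images, val_labels))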
From every group, 3800 images were extracted and split into three categories: 60% were used as training data, 20% as validation data during training, and 20% as testing data after training. Half of the images corresponded to healthy samples, the other half to pathological ones. In addition, for this step we split the samples as follows: two samples (one from each group) were used for training the network; the images of the remaining two samples were split equally into validation and testing data sets 43,44 .

Liver samples and dataset. For the liver evaluation, rat liver samples were used 45,46 . Organs were stored in cold University of Wisconsin solution (4 °C; DuPont de Nemours, Bad Homburg, Germany). All experiments were carried out in accordance with the German legislation on animal protection and with the "Principles of Laboratory Animal Care" (NIH publication no. 86-23, revised 1985) 15 . All experimental protocols for the rat studies were approved by the local government (Regierung von Oberbayern, Munich, Germany) and were reported to the responsible authorities regularly. The liver samples were divided into three groups: (1) healthy; (2) fibrotic at four weeks (4 weeks perfusion); (3) fatty livers. The samples were all paraffin-embedded and were imaged with X-ray PCI micro-CT with a polychromatic X-ray beam with a mean energy of 24 keV at the ID19 beamline of the ESRF. The detection system was a PCO Edge with a 6.5 µm pixel size connected with a 2.9× optics, thus determining a final effective pixel size of 2.2 × 2.2 µm². The analysis was performed on CT slices presenting a voxel size of 2.2 × 2.2 × 2.2 µm³. Both CT datasets were pre-processed and reconstructed with the PyHST2 software 47 . The 512 × 512 pixel liver images were extracted from the 3D reconstructed volumes, their intensity was normalized to the [0-1] range, and they were likewise reduced to 224 × 224 pixels by binning via linear interpolation. This dataset originates from 6 different liver samples: two healthy (Fig. 1C), two fibrotic at 4 weeks (Fig. 1D), and two fatty livers (Fig. 1E). A total of 3600 images were obtained, with 600 images extracted from each liver sample. By again applying a 60/20/20 split ratio, 2160 images were used for training the network, 720 for validation during the training, and 720 images were used for testing the trained network. In addition, the total number of input images for training and testing was increased by rotating the original images by 90, 180, and 270 degrees and adding them to the respective groups of images. This method is referred to as "data augmentation", and it increased the total number of available images by a factor of four in this case 48 . In this third step, we split the samples as follows: one sample of each group was used for training, and the images of the remaining three samples were used for the validation and testing data sets 43,44 .

The training data were used for training and updating the weights of the network. The validation data were used to evaluate the accuracy of the network after each iteration. The testing data set was used to evaluate the accuracy of a trained model on a new dataset. The accuracy of the performance of the networks was calculated as the ratio of the sum of the true positive and true negative cases over the total number of input images:

Accuracy = (True Positives + True Negatives) / (Total number of images)

Results

All validations were done with respect to the histological data, taken as reference. For the cartilage data, the VGG16 network provided a testing accuracy of 99.8% (validation accuracy 99.8% and training accuracy 99.9%) (Fig. 3, top) after 25 epochs and with a learning rate of 7 × 10⁻⁷.
In this case, none of the 760 images was falsely classified as healthy instead of degenerated (false positives), and 3 out of 760 images were classified as degenerated instead of healthy (false negatives), as shown in the so-called confusion matrix in Fig. 2A. The time to train and validate this network was 34 min and 42 s. The Inception V3 network classified the data with a test accuracy of 84.7% (validation accuracy 86.8% and training accuracy 96.6%) (Fig. 3, mid) after 25 epochs with a learning rate of 1 × 10⁻⁶; in numbers: 224 images were predicted as false negatives and no false positive cases were found (Fig. 2B); 536 healthy images and 760 images with signs of degeneration were classified correctly over the total number of 1520 images (Fig. 2B). The calculation time for this network (training, validation, and testing) was 19 min and 41 s. The Xception network had 308 images predicted as false positives and none as false negatives; 376 healthy and 760 degenerated images were predicted correctly (Fig. 2C). Figure 3 shows the accuracy (training and validation) plots as a function of the number of epochs for the VGG16, the Inception, and the Xception networks, respectively; it shows how they converge toward a stable solution for the same number of iterations. With the Xception network, a validation accuracy of 70.2% and a testing accuracy of 72.6% were achieved after 25 epochs. The training, validation, and testing with this network took 37 min and 25 s. The training accuracy was 88.7% (Fig. 3, bottom). When instead using 42 epochs, the training accuracy reached its plateau (<±0.5% difference between epochs) at 97.2%, the validation accuracy increased to 79.7%, and the testing accuracy was 81.3%. The time required for this calculation was 1 h 14 min and 33 s. Training the network with more epochs increases the testing accuracy of the Xception network, but it does not reach the testing accuracy of the Inception or VGG16 networks.

In the next step, the cartilage data were split by sample: images of one healthy and one degenerated sample were used for training, and the images of the other samples were used for validation and for testing. Therefore, we have a split of 50/25/25 percent: 1900 images were used for training (950 images from the healthy sample, 950 from the degenerated sample) and 950 images each were used for validation and testing. The training with this dataset was 25 epochs long. The testing (validation) accuracy of the VGG16 network was 68.6% (70.0%), whereas the training accuracy was 99.9%. The training/validation and testing took 32 min and 45 s. The Inception network achieved a testing (validation) accuracy of 65.9% (66.6%) and a training accuracy of 99.0%. The calculation time was 29 min and 38 s. The testing (validation) accuracy of the Xception network was 75.7% (79.8%), with a training accuracy of 99.8%. The testing accuracies of the VGG16 and Inception networks declined when splitting the dataset based on samples, whereas the Xception network increased its testing accuracy from 72.6% up to 75.7%.

For the liver data, we report the confusion matrices in Fig. 4. For all three CNNs, convergence was obtained after 50 epochs. The VGG16 network performed with a test accuracy of 96.0% (validation accuracy 96.1%), with a learning rate of 1 × 10⁻⁶. The training accuracy was slightly lower, at 94.6%. Two healthy, 26 fatty liver, and 1 fibrotic (4 weeks) images were mistakenly classified (Fig. 4A).
The computational time was 34 min and 21 s. The Inception V3 network, with a learning rate of 1 × 10⁻⁶, performed on this dataset with a test accuracy of 43.6% in 18 min and 54 s. With this network, 442 out of 720 images were falsely classified (Fig. 4B). The Xception network (learning rate of 1 × 10⁻⁶) performed with a test accuracy score of 53.8% on this dataset in 36 min and 1 s (Fig. 4C). To increase the number of samples as input for the network, we decided to repeat the classification after rotating the images, in order to obtain a more general classifier. The training/validation/testing ratio was set to 60/20/20 again. As a result, 14,400 images (4 × 3,600 original images) were available: 8640 images were used for training the CNNs and 2880 for the validation; finally, 2880 were used for testing the network. The networks converged faster with the augmented dataset, requiring fewer epochs.

In a next step, we tested the network performances by training the system with images of one set of samples and then validating and testing it with another set. For training, 1800 images from three samples of different groups were used (600 images from one healthy liver sample, 600 from one fatty liver sample, and 600 from one four-week perfusion sample). The classification accuracy of the testing dataset by the VGG16 network was 75.4%, whereas the training (validation) accuracy score was 99.8% (73.7%). The training (15 epochs) and testing process of this network took 10 min and 33 s. The Inception network obtained a testing accuracy of 39.8% and a training (validation) accuracy of 99.94% (41.0%). This training (15 epochs) and testing of the network took 6 min and 10 s. The Xception network achieved a testing accuracy of 42.0%. The training and validation accuracies were 98.7% and 46.4%, respectively. The training of the network with 15 epochs and testing lasted 11 min and 17 s.

Discussion and conclusions

In this work, we have investigated the possibility of using convolutional neuronal networks for the classification of healthy and pathological biological tissues, considering two different biomedical cases: osteoarthritic cartilage and liver fibrosis. The evaluation of the samples was carried out by two experienced pathologists on the basis of the histological results, which served as gold standard. We applied three CNNs (VGG16, Inception V3, and Xception networks) and compared their performances in terms of classification accuracy and the time needed for the calculation. The VGG16 network provided the highest accuracy, compared to the Inception V3 and Xception networks, in the analyzed cases. In the VGG16, the entire image is convolved, whereas in the Inception V3 and the Xception networks the image to be analyzed is split into different regions. This process of subdividing the images can lead to overfitting, which causes poor performance of the networks when they are applied to data in the validation and testing phase. This fact determines the discrepancy between the training and the validation/testing accuracy curves for the Inception V3 and Xception networks. Additionally, this explains the discrepancy of our results with respect to the LSVRC competition. We also tested the effect of training the network with images of two cartilage samples (one healthy and one degenerated) and of validating and testing it with images of the other cartilage samples: the testing accuracy decreased for all three networks. However, the Xception network was the one with the highest testing accuracy.
This last fact shows that the Xception network generalizes best when the dataset is split by sample. Several approaches, such as adding fully connected layers or drop-out layers, changing the optimizer algorithm, or adjusting the learning rate, were used to reduce overfitting in the Xception and Inception V3 networks. The results presented here were obtained after this optimization procedure (best accuracy and lowest overfitting); the results of the intermediate optimization steps are not reported. In the case of the cartilage, other computer-aided diagnosis tools are available, such as texture analysis. This kind of analysis on cartilage PCI images for characterizing osteoarthritis gives good results for both 2D images 49 and 3D volumes 50 . The Inception V3 network with its inception modules is much faster for training, validation, and testing than the other two networks. For the cartilage dataset, the Inception was 56.7% faster than the VGG16 network and 47.4% faster than the Xception. For the liver dataset, the Inception was 26.6% and 29.5% faster than the VGG16 and Xception networks, respectively. The reason for its higher speed lies in its unique inception module structure, which reduces the number of trainable weights and therefore speeds up the computation. With the data augmentation of the liver dataset, we could show that the networks converged faster when the number of input images was increased, and that therefore fewer epochs were needed. For the VGG16 network, the testing accuracy stayed approximately the same (95.5%), but the computational time increased by 23.5%, from 34 min and 21 s to 42 min and 11 s, because the number of input images was increased by a factor of 4. We can conclude that more input data lead to a better accuracy and a faster convergence of the VGG network, but this does not come with shorter computational times. When we used data from different liver samples for the training and the testing of the networks, the testing accuracy of all the networks decreased, whereas the training accuracy increased. This result shows that overfitting occurs and the networks do not generalize enough; to overcome this limitation, a larger number of samples should be used. The network presenting the best testing accuracy is the VGG16 with 75.38%, as in the calculation without the split based on samples. The testing accuracies of both the Inception V3 and Xception networks were below 50% for this test; both networks did not perform well on the liver data, in contrast to the cartilage data, where both networks had a testing accuracy above 68%. The testing accuracy strongly depends on the data splitting method that is used. If the slices for training and testing the CNN are extracted from the same sample, the data used in the two processes may look very similar and an overfitting of the networks during training may occur. In this case, the generalization of the CNN to new samples is uncertain and may be severely hindered. This study shows that the combination of advanced, highly sensitive X-ray imaging techniques (PCI) with newly available algorithms for data classification based on the neuronal network concept could significantly support the discrimination between healthy-normal and pathological-abnormal conditions of biological tissues. The proof of concept of this methodology was performed here on small tissue samples (cylindrical bone/cartilage plugs of 7 mm in diameter).
This method could be an important asset toward the automation of diagnostic procedures. The application of CNNs to our datasets showed that these tools (in this specific case, the VGG16 network was identified as the most accurate one) make it possible to analyze and classify sets of 9616 images of 224 × 224 pixels in less than 25 min, providing a robust, fast, and observer-independent method of diagnosis.
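As a side note, the fourfold rotation augmentation and the accuracy metric used throughout this work can be illustrated with a short numpy sketch; the array shapes and variable names are illustrative assumptions, not code from the original study:

import numpy as np

def augment_by_rotation(images, labels):
    # Append copies rotated by 90, 180, and 270 degrees: 4x more images.
    rotated = [np.rot90(images, k=k, axes=(1, 2)) for k in range(4)]
    return np.concatenate(rotated, axis=0), np.concatenate([labels] * 4, axis=0)

def accuracy(y_true, y_pred):
    # Accuracy = (true positives + true negatives) / total number of images,
    # i.e. the fraction of correctly classified images.
    return np.mean(y_true == y_pred)

images = np.zeros((3600, 224, 224), dtype=np.float32)  # placeholder stack of slices
labels = np.zeros(3600, dtype=np.int64)
aug_images, aug_labels = augment_by_rotation(images, labels)
print(aug_images.shape)  # (14400, 224, 224)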
Healing of bone defects by induced pluripotent stem cell-derived bone marrow mesenchymal stem cells seeded on hydroxyapatite-zirconia

Background: Induced pluripotent stem cells (iPSCs) can generate bone marrow mesenchymal stem cells (BMSCs) as seed cells for tissue-engineered bone to repair bone defects. In this study, we investigated the effects of hydroxyapatite-zirconia (HA/ZrO2) composites combined with iPSC-derived BMSCs as a bone substitute for repairing skull defects in rats.

Methods: Human urinary cells isolated from a healthy donor were reprogrammed into iPSCs and then induced into BMSCs. Immunocytochemistry (IHC) and reverse transcription-polymerase chain reaction (RT-PCR) were used to examine the characteristics of the induced MSCs. The iPSC-derived BMSCs were cultured on HA/ZrO2 composites, and the cytocompatibility of the composite was analyzed by cell counting kit-8 (CCK-8) assays, RT-PCR, and scanning electron microscopy. Then, HA/ZrO2 combined with iPSC-derived expanded potential stem cells (EpSCs) was transplanted onto skull defects of rats. The effects of this composite on bone repair were evaluated by IHC.

Results: The results showed that the MSCs induced from iPSCs displayed the phenotypes and properties of normal BMSCs. After seeding on the HA/ZrO2, iPSC-derived BMSCs had the ability to proliferate and differentiate into osteoblasts. After transplantation, iPSC-derived BMSCs on HA/ZrO2 promoted the construction of bone on rat skulls.

Conclusions: These results indicate that transplantation of HA/ZrO2 combined with iPSC-derived BMSCs is feasible for reconstructing bone and may be a substantial reference for iPSC-based therapy for bone defects.

No tumors were found in tumorigenicity testing of iPS-MSCs, which further demonstrated the safety of iPS-MSCs. In recent years, iPS-MSCs have provided breakthrough progress in the treatment of osteonecrosis and bone defects (7,8). However, there are few reports on the transplantation of iPS-MSCs seeded on HA/ZrO2 as tissue-engineered bone materials. In our previous studies, gradient composites of zirconia and hydroxyapatite (HA/ZrO2) have been shown to have a three-dimensional (3D) porous structure with good biocompatibility. Moreover, when combined with BMSCs, HA/ZrO2 showed a good therapeutic effect on Beagle canine bone defects. Here, we differentiated urine cell-derived iPSCs into BMSCs and cultured these cells on a HA/ZrO2 scaffold to form tissue-engineered bone that was then transplanted into the skull defects of rats. The effects of iPSC-derived MSCs on bone regeneration were then analyzed, and the feasibility of using iPSC-derived MSCs as seed cells to construct bone substitutes to repair bone defects was also assessed. We present the following article in accordance with the ARRIVE reporting checklist (available at https://dx.doi.org/10.21037/atm-21-5402).

Generation and analysis of iPS-MSCs

Urine cells (UCs) were collected from a 24-year-old healthy male volunteer (after he had provided informed consent) and reprogrammed into iPSCs by introducing 4 exogenous transcription factors (OCT4, SOX2, C-MYC, KLF4) into the human UCs using a retrovirus-mediated infection system. Briefly, we used calcium phosphate transfection to introduce the 4 plasmids (OCT4, SOX2, C-MYC, and KLF4) into 293T cells and collected the viral supernatants at 48 h and 72 h, respectively. The UCs were infected twice with the addition of polybrene at a final concentration of 8 μg/mL, and the medium was changed to UC culture medium 12 h later.
After 4-5 days of infection, when the nucleocytoplasmic ratio had become larger, the cells were plated onto mouse embryonic fibroblast (MEF) feeder cells (2×10⁵ per 10 cm dish) and the medium was replaced with human ESC (hESC) medium (DFBs + 20% knockout serum replacement (KSR) medium). All cultures were supplemented with 50 μg/mL vitamin C and a 1 mM final concentration of valproic acid (VPA) until clonogenic cells appeared. The clonal iPSCs were picked onto 6-well plates coated with an MEF feeder layer or Matrigel and cultured in mTeSR1 medium. The culture medium was changed daily.

The BMSC samples serving as controls were collected from patients with hip fracture at the Xiaoshan Hospital of Traditional Chinese Medicine. All procedures performed in this study involving human participants were in accordance with the Declaration of Helsinki (as revised in 2013). The study was approved by the Ethics Committee of Xiaoshan Hospital of Traditional Chinese Medicine (No. 2014396), and informed consent was obtained from all the patients. Briefly, 5 mL of bone marrow was diluted 1:1 with phosphate-buffered saline (PBS). The bone marrow was layered into a 50 mL centrifuge tube containing 10 mL Ficoll using a Pasteur pipette and centrifuged at 1,800 rpm and 25 ℃ for 18 min. We then aspirated the middle flocculent layer into a new centrifuge tube, diluted it with PBS, washed it twice, and centrifuged for 10 min at 1,000 rpm and 4 ℃. After resuspension in MSC medium, the cells were seeded into 10 cm dishes and the medium was replenished to 10 mL. After 48 h, half of the medium was changed every 2 days, and the cells were passaged 1:3 after reaching confluency using a pancreatin substitute for digestion.

The expression of OCT4, NANOG, and vimentin proteins after the iPSCs had differentiated into iPS-MSCs was evaluated by immunofluorescence, with BMSCs used as controls. Glass coverslips were first placed into 24-well plates, and the cells were then seeded onto the coverslips. When the cells had reached an appropriate density, they were washed once with PBS. Cells were first fixed in 4% paraformaldehyde (PFA) for 30 min at room temperature, then permeabilized in 0.2% Triton for 30 min at room temperature, after which they were blocked in 3% bovine serum albumin (BSA) for 2 h at room temperature. The liquid was aspirated and discarded, and the cells were then incubated overnight at 4 ℃ in the dark with primary antibodies diluted in 1% BSA; finally, secondary antibodies were added and incubated for 1-2 h in the dark at room temperature. At the end of each of the above steps, the cells were washed 3 times with PBS for 5 min while shaken at 30 rpm. Subsequently, 4',6-diamidino-2-phenylindole (DAPI; Sigma-Aldrich) was added at a final concentration of 1 μg/mL, 0.5 mL per well, and the cells were kept in the dark at room temperature for 5 min. The sections were sealed and observed under a fluorescence microscope (LSM710, ZEISS, Oberkochen, Germany).

The expression of the pluripotency genes OCT4 and NANOG and of the mesoderm marker gene vimentin in iPS-MSCs was examined by RT-PCR, with iPSCs and BMSCs as controls. Total RNA was extracted from cells at different differentiation stages according to the RNA kit instructions. The RNA was reverse transcribed into complementary DNA (cDNA) according to the reverse transcription kit instructions, and the relative quantification of messenger RNA (mRNA) expression of each gene was then performed on a quantitative real-time polymerase chain reaction (qPCR) instrument according to the manufacturer's instructions.
The reverse transcription (RT)-PCR results were confirmed by at least 3 independent analyses. Relative gene expression was calculated using the 2^-ΔΔCT method relative to the glyceraldehyde 3-phosphate dehydrogenase (GAPDH) control. The primer sequences are given in Table 1.

Flow cytometry analysis was conducted as follows: a pancreatin substitute was used to digest and collect the cells, which were washed twice with prechilled PBS and then incubated in the dark for 30 min with FITC-conjugated CD14, CD45, CD90, and HLA-DR antibodies, PE-conjugated CD34, CD44, and CD73 antibodies, and APC-conjugated CD19 and CD105 antibodies (all mouse anti-human antibodies; Becton, Dickinson, and Co. (BD), Franklin Lakes, NJ, USA). Tubes without antibodies were set up as blank controls. After washing with PBS, cells were resuspended in buffer (no less than 1×10⁴ cells per tube) and then processed for analysis on a Fortessa Flow Cytometer (BD, USA).

Multipotent differentiation capacity was analyzed as follows: cells were seeded into 6-well plates and cultured until they reached about 70% confluence. Then, the medium was replaced with osteogenic or adipogenic induction medium, respectively, which was changed every 3 days. The differentiation lasted for 21 days. The cells were then stained with Alizarin red and oil red O, respectively. For chondrogenic differentiation, 5×10⁵ cells were collected by centrifugation into 15 mL centrifuge tubes (pellet) and cultured in chondrogenic induction medium. The cell pellets were gently pipetted when the medium was changed every 3 days. The samples were embedded in paraffin and sectioned for identification by Alcian blue staining after 21 days.

The teratoma formation test was conducted as follows: the iPS-MSCs (P6) and iPSCs were trypsinized into single-cell suspensions and resuspended in PBS at a density of 1×10⁷ cells/mL. A total of 100 μL of the iPS-MSC suspension, the iPSC suspension, or pure PBS was injected subcutaneously into the hind limbs of 8-week-old male nonobese diabetic/severe combined immunodeficiency (NOD-SCID) mice (Shanghai Slac Laboratory Animal Co. Ltd, Shanghai, China), respectively. Teratoma formation was examined 8 weeks after transplantation.

Culture and osteogenic differentiation of iPS-MSCs on HA/ZrO2 scaffolds

The HA/ZrO2 porous gradient composite foam ceramic materials were obtained by mechanical foam impregnation combined with gradient compositing and high-temperature sintering; the materials were provided by the Shanghai University School of Materials Science and Engineering. The sponges were pretreated with 15 wt% sodium hydroxide solution; the air-dried sponges were then immersed into the slurry, with the soaking repeated 3 times. The main components of the slurry, which was thoroughly stirred and mixed with distilled water, included ZrO2 powder (65%, 60%, and 55 wt%, respectively), polyvinyl alcohol (PVA, 0.5 wt%), carboxymethylcellulose (CMC, 0.5 wt%), silica sol (10 wt%), ammonium polyacrylate (PAA-NH4, 0.6 wt%), and octanol (0.5 wt%). After soaking, the excess slurry was blown off with nitrogen (N2) and the foams were dried in an oven at 110 ℃ for 24 h. After drying, the slurry-impregnated foam materials were sintered at 1,550 ℃ to obtain porous ZrO2 scaffolds. The ZrO2 scaffolds were then soaked sequentially in mixtures of 30% HA/70% ZrO2, 50% HA/50% ZrO2, and 100% HA, with air drying before each soak. Finally, a porous HA/ZrO2 gradient composite foam ceramic material was successfully fabricated by sintering and cooling at 1,250 ℃ in a furnace.
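For reference, the 2^-ΔΔCT relative-quantification step mentioned in the qPCR protocol above can be worked out as in the following short sketch; the Ct values and the gene chosen are hypothetical and serve only to illustrate the arithmetic:

def fold_change(ct_gene_sample, ct_gapdh_sample,
                ct_gene_control, ct_gapdh_control):
    # 2^-ddCt: normalize the gene of interest to GAPDH within each sample
    # (dCt), then reference the sample dCt to the control dCt (ddCt).
    d_ct_sample = ct_gene_sample - ct_gapdh_sample
    d_ct_control = ct_gene_control - ct_gapdh_control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values for vimentin in iPS-MSCs vs. undifferentiated iPSCs
print(fold_change(22.0, 18.0, 26.5, 18.2))  # 2**4.3, i.e. about 19.7-fold up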
Cell adhesion: the materials were first sterilized in an autoclave (121.3 ℃, 20 min) and then immersed in α-MEM in a cell culture incubator for 1 h. The iPS-MSCs and BMSCs were seeded onto HA/ZrO2, respectively, and cultured in osteogenic differentiation medium or in normal medium without inducer. After 7 days, the samples were fixed with 2% glutaraldehyde solution for 4 h and 1% osmic acid solution for 1 h, dehydrated through a graded ethanol series (30%, 50%, 70%, 80%, 90%, 95%, 100%), critical-point dried, and then sputter-coated with gold. Finally, we photographed the samples under a scanning electron microscope to observe the adhesion of iPS-MSCs and BMSCs on HA/ZrO2.

For osteogenic differentiation, iPS-MSCs and BMSCs were respectively seeded at 1×10⁴/cm² on 15 mm diameter HA/ZrO2 materials in 12-well culture plates, and Alizarin red staining was performed at 7, 14, and 21 days. Cells were fixed with 90% ethanol for 10 min and then stained using 0.2% Alizarin red solution at pH 6.4 for 30 min at room temperature. After washing with double-distilled water to remove unbound dye, cells were imaged by light microscopy. The expressions of Runx2, COL1A1, ALP, and OCN, which are specifically related to the osteogenic differentiation of iPS-MSCs and BMSCs, were determined by fluorescence qPCR at different time points (7, 14, and 21 days). The methods of fluorescence qPCR were as previously described. The primer sequences are shown in Table 1.

Repair of HA/ZrO2 combined with iPS-MSCs on rat skull defects

Animal experiments were performed under a project license (No. 10296) granted by the Ethics Committee of Xiaoshan Hospital of Traditional Chinese Medicine, in compliance with the hospital guidelines for the care and use of animals. A protocol was prepared before the study without registration. The animals were housed and surgically manipulated following the guidelines established by the animal ethics committee of the hospital, which ensures laboratory animal care and reduces suffering. Male SD rats aged 8-10 weeks, purchased from the animal experimental center of Zhejiang University, were used in the experiments. Anesthesia was achieved by intraperitoneal injection of ketamine (70 mg/kg bodyweight) and xylazine (10 mg/kg), followed by local anesthesia with 2 mL lidocaine injected over the rats' calvaria. A linear sagittal incision was made along the middle of the calvaria, followed by a full-thickness incision to expose the periosteum. A bone window of approximately 0.6 cm in diameter was drilled on both the left and right sides of the rat skull using an electric dental bur. A total of 18 numbered rats were randomly grouped using a random number table. The experiments were divided into 3 groups: HA/ZrO2 material alone (n=6), HA/ZrO2 combined with iPS-MSCs osteogenically differentiated in vitro for 7 days (iPS-MSC + HA/ZrO2, n=6), and HA/ZrO2 combined with BMSCs osteogenically differentiated in vitro for 7 days (BMSC + HA/ZrO2, n=6). All groups had material grafted onto the bone window on the left side and no material grafted on the right side as a blank control. Rats were housed in a constant-temperature room at 20 ℃ with a 12 h light/dark cycle. Food and water were provided ad libitum. All the rats received a daily injection of the immunosuppressant cyclosporine A (30 mg/kg bodyweight). At 12 weeks after surgery, all rats had survived well and were included in the outcome study; they were euthanized by ether overdose anesthesia.
The skull tissues were scanned using an animal micro-computed tomography (CT) system (SkyScan, Bruker, Belgium), and MicroView ABA software (GE Healthcare, Chicago, IL, USA) was used for bone mass analysis.

Statistical analysis

Statistical analyses were performed using SPSS 22.0 (IBM Corp., Armonk, NY, USA). All data are expressed as the mean value ± standard deviation (SD). Statistical significance was analyzed using one-way analysis of variance (ANOVA), followed by post hoc least significant difference (LSD) tests. A confidence level of 95% (95% CI) was considered significant.

Generation and characterization of iPS-MSCs

Differentiation was induced by serial passaging (Figure 1). After several passages, the cells changed from the original clonal, mass-shaped iPSCs (Figure 1A) into fibroblast-like iPS-MSCs (Figure 1E), which were morphologically similar to BMSCs (Figure 1I). The results of immunofluorescence analysis showed that the positive expression of the pluripotency genes OCT4 and NANOG in iPSCs (Figure 1B,1C) was barely detected in iPS-MSCs (Figure 1F,1G) and BMSCs (Figure 1J,1K), while the opposite held for the mesoderm gene vimentin (Figure 1D,1H,1L). The RT-PCR analysis showed high expression of genes specific for iPSCs (OCT4, NANOG) and downregulation of these pluripotency genes in iPS-MSCs and BMSCs. Meanwhile, vimentin, originally expressed at low levels in iPSCs, was highly expressed in both iPS-MSCs and BMSCs (Figure 1M-1O). These results were consistent with the immunofluorescence detection.

The surface molecular hallmarks of passage 6 iPS-MSCs were detected by flow cytometric analysis. BMSCs served as a positive control, while iPSCs were also tested. The results showed that iPS-MSCs highly expressed CD73, CD90, CD105, and CD44 (greater than 95%), with low expression of the hematopoietic stem cell surface markers (CD34 and CD45), the macrophage marker CD14, the lymphocyte marker CD19, and HLA-DR (Figure 2), in compliance with the International Society for Cellular Therapy (ISCT) criteria for the surface molecular characterization of MSCs. Besides, the results showed that iPS-MSCs were similar to BMSCs, with low or no expression of the remaining surface molecular markers, except for CD90.

We compared the tri-lineage differentiation capacity of iPS-MSCs and BMSCs after 21 days (Figure 3). After osteogenic induction, a wide red area of calcium nodules could be seen under the microscope by Alizarin red staining, which indicated that the cells had good osteogenic differentiation function. Adipogenically differentiated cells were stained with oil red O, and obvious lipid droplet formation was visible. After chondrogenic differentiation, the cells showed positive staining with Alcian blue, which indicated that they could differentiate into cartilage. In summary, iPS-MSCs had the same multilineage differentiation ability as BMSCs. To verify tumorigenicity, we injected iPS-MSCs subcutaneously into the hind limbs of NOD-SCID mice, which were also injected with iPSCs on the other side. After 8 weeks, clear tumor growth could be observed in the mice transplanted with iPSCs, while the side transplanted with iPS-MSCs did not generate tumors, even after dissection (Figure 4). It was obvious that iPS-MSCs were safer than iPSCs.

Biocompatibility and osteogenic differentiation of iPS-MSCs seeded on HA/ZrO2

As reported in our previous study (9), the HA in the powdered composites was transformed into CaH2P2O7, Ca2P2O7, CaP, and CaH2 phases.
After coming into contact with water, these non-hydrated phosphate phases could provide the necessary concentration of calcium and phosphate ions for bone mineralization. Meanwhile, high concentrations of calcium phosphate could re-form HA at body temperature to stimulate bone formation. The addition of the inert ZrO2 greatly improved the biomechanical strength of the composite materials. The average flexural strength of the materials was 898.67 MPa, which is much higher than the requirements for human weight-bearing sites. The HA/ZrO2 porosity was 25 ppi and uniform, with pore sizes of 150-300 μm. The materials were prepared as round pieces of different sizes according to the experimental requirements.

After iPS-MSCs and BMSCs were cultured on HA/ZrO2 in normal medium for 7 days, scanning electron microscopy (SEM) showed that the cells tightly adhered to the surface of the materials, spread out, grew in a fibrous configuration, and spanned the pores on the surface of the materials in good condition (Figure 5). The CCK-8 studies (Table 2) showed that the cells proliferated continuously in both culture solutions with increasing time, and the cytotoxicity was evaluated as grade 0 in both cultures. Therefore, iPS-MSCs and BMSCs can adhere, grow, proliferate, and differentiate well on HA/ZrO2 in vitro. All experiments showed that HA/ZrO2 had good biocompatibility.

We compared the osteogenic differentiation of iPS-MSCs and BMSCs on HA/ZrO2 materials at different times (7, 14, and 21 days) in vitro by detecting the expression of osteogenesis-related genes (Runx2, COL1A1, ALP, and OCN). The gene Runx2 promotes early osteogenic differentiation as a key factor necessary for the osteogenic differentiation of mesenchymal stem cells; Col1a1 encodes a collagen associated with the formation of the extracellular matrix (ECM) of the pre-osteoblast, which progressively expresses ALP during the maturation stage and then OCN during the mineralization phase. First of all, the results of fluorescence qPCR showed that the expression of the osteogenesis-related genes in both iPS-MSCs and BMSCs increased significantly with time, with no significant difference between the two groups (Figure 6A-6D). Besides, Alizarin red staining showed that both iPS-MSCs and BMSCs had a good mineralization ability on the HA/ZrO2 materials. With increasing time, the color became darker and the red area larger, indicating increasing calcium nodule formation (Figure 6E). Furthermore, the results of the calcium nodule quantification testing showed no significant difference in calcium content between the two types of cells during the same period (Figure 6F).

Repairing effects of HA/ZrO2 combined with iPS-MSCs on rat skull defects

To validate the osteogenic capacity of iPS-MSCs in vivo, we used BMSCs as controls; each cell type was individually combined with HA/ZrO2 as a composite material after 7 days of osteogenic induction in vitro and then transplanted into rats with skull bone defects. After 12 weeks, examination by micro-CT revealed that almost all defects transplanted with the HA/ZrO2 + iPS-MSCs and HA/ZrO2 + BMSCs composites were repaired better than those transplanted with HA/ZrO2 alone (Figure 7A-7C). The results of the bone mass analysis are shown in Figure 7D-7F.

Discussion

The MSCs belong to adult stem cells and have a strong ability to self-renew and proliferate. They can be transformed into bone cells, adipocytes, nerve cells, muscle cells, and endothelial cells under different induction conditions.
They have many advantages, such as easy genetic manipulation and weak immunogenicity, which make them an appropriate cell carrier for gene therapy. At present, although treatment with BMSCs has been widely applied in the clinic (10-12), several factors limit their further clinical applications. Firstly, the acquisition of BMSCs is an invasive operation, and the number of BMSCs provided by patients themselves is limited. Besides, the activity of MSCs varies between individuals, depending on factors such as disease and age. Moreover, a poor pericellular environment, involving inflammatory reaction, immune rejection, hypoxia, and oxidative stress, also decreases the capability of BMSCs (13).

There are similarities between iPSCs and ESCs in morphology, epigenetic modifications, and gene and protein expression, and iPSCs have great potential regarding self-renewal, high proliferation, and multilineage differentiation. They can be reprogrammed from differentiated mature cells, which resolves the ethical concerns associated with ESCs. Besides, iPSCs are induced from patients' autologous cells, so that they avoid immune rejection. With the development of reprogramming technology, iPSCs derived by induction with adenovirus, transient expression of plasmid vectors, or recombinant proteins have a lower tumorigenic risk. Some results have shown that hiPSC-derived MSCs have higher telomerase activity and better immunomodulation and tissue repair abilities than BMSCs. No tumor was observed in animals after implanting iPS-MSCs, which indicates their safety. Investigators have also reported that iPS-MSCs have a greater capacity for vascular repair than BMSCs. All of these studies indicate that iPS-MSCs are promising for cell therapy and regenerative medicine (14)(15)(16)(17).

At present, there have been several reports on methods for differentiating iPSCs or ESCs into MSCs, including spontaneous differentiation in embryoid bodies (EBs) and direct induction and differentiation in conditioned media (18)(19)(20)(21)(22). Inducing EB differentiation is a classical method, in which iPSCs differentiate under a specific suspension culture into spherical EBs resembling the inner cell mass of the blastocyst. Mature EBs include many types of cells representing derivatives of the 3 embryonic germ layers. However, the procedure of differentiation via EBs is tedious, time-consuming, and considered inefficient. The methods of direct conditioned-medium induction are mostly achieved by adding cytokines and small-molecule compounds, but MSCs can also be acquired by direct co-culture with BMSCs or by serial passaging of iPSCs or ESCs. During the differentiation of iPSCs, the transforming growth factor-β (TGF-β) pathway maintains the stemness of iPS cells. Acting as a TGF-β signal inhibitor, SB431542 can promote the differentiation of iPSCs (23)(24)(25). In this study, we combined culture in the presence of SB431542 with serial subculture to induce the differentiation of the iPS cells: the cells were cultured for 7 consecutive days in medium supplemented with 10 µM SB431542, and the first passage was performed with milder Accutase enzymatic digestion. Immunofluorescence staining comparing the expression of related proteins before and after iPSC differentiation showed that the iPS-differentiated cells did not express the pluripotency marker protein OCT4 or the ectoderm marker protein Nestin, but did express the mesoderm marker protein vimentin (27).
The iPS-BMSCs were induced toward osteogenic, adipogenic, and chondrogenic differentiation, and after 21 days they stained positive with Alizarin red, oil red O, and Alcian blue, respectively. The results were consistent with the qRT-PCR assays of the related genes. The above results fit the ISCT definition of MSCs, indicating the successful establishment of a highly efficient method for differentiating iPSCs into MSCs.

The application of tissue-engineered bone to repair bone defects is a hot spot in current clinical medicine (28)(29)(30). As a scaffold for cells, tissue-engineered bone scaffold materials should meet the following selection criteria: (I) excellent biomimetic properties: bone tissue has a 3D porous structure, which is conducive to the metabolic absorption of nutrients, so artificial scaffold materials likewise need a porous structure similar to bone tissue, which favors the ingrowth of cells and better promotes bone repair and reconstruction; (II) a certain mechanical strength; (III) good osteoconductive and osteoinductive effects; (IV) degradability: the degradation products of the scaffold material should be similar to bone components, causing no toxicity and even promoting the generation of bone. Currently, cell scaffolds used for bone repair mainly include fiber scaffolds, microspheres, porous scaffolds, hydrogels, and composite scaffolds (31).

Hydroxyapatite (HA), a bioactive material close to the natural apatite mineral, is the major inorganic substance of human bone (32)(33)(34). After implantation, HA is partially degraded to release the necessary calcium and phosphorus, after which these elements are absorbed, utilized, and incorporated into new tissues, so that the HA implant and bone tissues can integrate well. HA can facilitate the formation of extracellular matrix, including collagen I, fibronectin, peptides, growth factors, glucosamine, and other active molecules, which activate the signaling pathways related to the adhesion of stem cells onto biomaterials (35). In a study by Shie et al. (36), inhibition of the MAPK/ERK and MAPK/p38 signaling pathways significantly decreased the adhesion, proliferation, and differentiation of hMSCs and HDPCs on calcium silicate cement. Chen et al. (37) found that the adhesion and osteogenic differentiation of BMSCs cultured on an HA-coated surface improved under low-magnitude, high-frequency vibration, while Wnt10B, β-catenin, Runx2, and osterix were significantly increased; hence, vibration may directly induce osteogenesis by activating the Wnt/β-catenin signaling pathway. Although HA has good osteoconductivity, osteoinductivity, and biocompatibility (38), HA alone as a scaffold material has defects such as low strength, poor toughness, and degranulation. Fabrication into porous materials further reduces the flexural strength and fracture toughness of HA; as a result, scaffold materials made of HA alone cannot meet the requirements for bone replacement in load-bearing parts of the human body. Zirconium dioxide (ZrO2), the mineral raw material of zirconia, has a very high density and a hardness second only to that of diamond. Besides, it has good biocompatibility, without allergy, irritation, corrosion, or other adverse reactions (38)(39)(40). Combining HA and ZrO2 to prepare HA/ZrO2 composites both improves the physical strength of HA and retains its good biocompatibility.
In this study, the porous HA/ZrO2 composites were prepared by gradient recombination and high-temperature sintering with an added pore former. Adjusting the strength by changing the HA/ZrO2 voids can produce highly biomimetic artificial bone materials with mechanical properties that closely resemble those of natural bone. Biomechanical examination revealed that the HA/ZrO2 composites exhibited an average flexural strength of 898.6 MPa, which could reach up to 1,112.6 MPa, far exceeding the natural bone strength in humans (41,42). Subsequently, we used MSCs differentiated from hiPSCs to construct a novel tissue-engineered bone by seeding them onto HA/ZrO2 porous foam ceramic materials, and we then examined the biocompatibility. SEM of iPS-MSCs and BMSCs seeded on HA/ZrO2 at days 2, 7, and 14 showed that both cell types adhered to the surface of the materials, proliferated well, and gradually grew into the internal voids of the materials. The CCK-8 assay was used to detect cell proliferation; the OD values of the cells cultured with the extract (dip solution) from HA/ZrO2 showed a gradual increase on days 1, 4, and 7, and were not significantly different from those of control cells cultured in complete medium. The cytotoxicity ratings of HA/ZrO2 were all grade 0, which indicated that the HA/ZrO2 porous foam ceramic materials had good biosafety.

We seeded iPS-MSCs and BMSCs on HA/ZrO2 porous materials, respectively, for osteogenic induction and differentiation in vitro, after which we performed calcium nodule assays by Alizarin red staining and osteogenesis-related gene expression assays at different time points (days 7, 14, and 21) to compare the osteogenic potential of the two cell types on HA/ZrO2 materials. The results showed that both the iPS-MSC and BMSC composite materials were stained increasingly deeply by the Alizarin red dye solution over time. Besides, the quantitative analysis of calcium content showed that the calcium content gradually increased in both groups, indicating that iPS-MSCs have a good osteogenic capacity, like BMSCs. The results of qRT-PCR confirmed that Runx2, Col1a1, ALP, and OCN expression all increased with time, with OCN reaching a high value at day 21. As a specific transcriptional regulator (43), Runx2 is necessary and sufficient for the osteogenic differentiation of BMSCs. The expression of Runx2 can promote the transcriptional maturation of osteogenesis-related protein genes, and the sustained expression of Runx2 is beneficial to the progress of osteogenic differentiation (44). The gene Col1a1 is an important marker of osteogenic differentiation; its product accounts for more than 90% of the bone matrix and is an important component in osteogenesis (45,46). As an essential enzyme in osteogenesis, ALP can hydrolyze organic phosphate and release inorganic phosphorus, which then forms hydroxyapatite; ALP is a marker of early maturation in the osteogenic differentiation of MSCs (45,47). The ECM protein OCN is currently found only in the ECM secreted by osteoblasts. Its appearance marks the beginning of the mineralization phase of osteogenic differentiation, and it is well recognized as a marker of mature osteoblast differentiation (48,49). These results illustrated that HA/ZrO2 could promote the osteogenic differentiation of both iPS-MSCs and BMSCs. Finally, we transplanted iPS-MSCs on HA/ZrO2 to repair rat calvarial defects. At 28 days after transplantation, the calvarial defects repaired by the iPS-MSCs + HA/ZrO2 composite healed at a rate close to that of the bone defects repaired by the BMSCs + HA/ZrO2 composite.
Both healed faster than the control group, which indicated that HA/ZrO2 not only has good biocompatibility, but can also promote osteogenesis and accelerate the healing of bone defects.

Conclusions

In conclusion, we obtained iPS-MSCs by inducing UC-derived iPSCs; these cells were characterized by a phenotype, gene expression, and multi-lineage differentiation capacity resembling those of normal human BMSCs. The iPS-MSCs were able to continue proliferating and differentiating osteogenically like BMSCs after being seeded on HA/ZrO2. Likewise, the compound of iPS-MSCs and HA/ZrO2 can promote the healing of calvarial defects in rats after transplantation. This study provides a novel approach for bone tissue engineering, and a substantial reference for iPSC-based therapy for bone tissue repair and orthopedic diseases.
2021-12-17T16:07:40.796Z
2021-12-01T00:00:00.000
{ "year": 2021, "sha1": "8617223ee927fd88825dd62ae098fb4c2b8f5a35", "oa_license": "CCBYNCND", "oa_url": "https://atm.amegroups.com/article/viewFile/85651/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "643371bbd40ef2c9113d4794888ac4ab8e21374d", "s2fieldsofstudy": [ "Medicine", "Engineering", "Materials Science" ], "extfieldsofstudy": [ "Medicine" ] }
14119744
pes2o/s2orc
v3-fos-license
Synchronization of Heterogeneous Kuramoto Oscillators with Graphs of Diameter Two

In this article we study synchronization of Kuramoto oscillators with heterogeneous frequencies, where the underlying topology is a graph of diameter two. When the coupling strengths between every two connected oscillators are the same, we find an analytic condition that guarantees the existence of a Positively Invariant Set (PIS) and demonstrate that the existence of a PIS suffices for frequency synchronization. For graphs of diameter two, this synchronization condition is significantly better than existing general conditions for an arbitrary topology. If the coupling strengths can be different for different pairs of connected oscillators, we formulate an optimization problem that finds coupling strengths sufficient for synchronization whose sum is minimal.

The two main features that describe the behavior of a system of coupled oscillators are the coupling function and the interconnection topology. In the case of the Kuramoto model the trigonometric sine is used as the coupling function; a broader class of coupling functions, however, has also been discussed [4], [13], [18], [19]. The most popular assumption on the interconnection topology is that all oscillators are connected to each other, which corresponds to a fully connected graph, or a graph of diameter one [6], [7]. A much more general approach is to study systems of oscillators with an arbitrary underlying topology [5], [8], [12], [14], [24]. Several additional assumptions can be made to make the analysis of the Kuramoto model more tractable. First, one may consider the limit case when the model contains an infinite number of oscillators [11], [15], [16], [26]. Second, it can be assumed that all oscillators have equal intrinsic frequencies and therefore form a gradient system of homogeneous oscillators [18]. Alternatively, as we do in this article, one may let the frequencies take distinct values and thus analyze a system of heterogeneous oscillators [2], [6], [7], [12], [22], [25], [27]. Finally, the coupling strengths can be equal for all pairs of connected oscillators, or can be allowed to take different values for different connections. In this paper, we consider a system of a finite number of heterogeneous Kuramoto oscillators in which the underlying topology is a graph of diameter two, a natural step in the further generalization of the complete-graph (diameter-one) case. First, we consider the case when the coupling strength is the same for all pairs of connected oscillators and formulate an analytic condition that guarantees boundedness of the trajectories, which in our case also implies synchronization. While there exist more general synchronization conditions [5], [8], [12], [14], [24] that are applicable to systems with an arbitrary topology, they are significantly more restrictive (when applied to diameter-two graphs) than our analytic condition. We provide simulation results that illustrate the improvement over existing results for graphs of diameter two. Second, when the coupling strengths are allowed to be different for different pairs of interconnected oscillators, we formulate an optimization problem that finds coupling strengths such that their sum is minimal while synchronization is preserved. The rest of the paper is organized as follows: in Section II we describe the problem setup as well as the main challenge for guaranteeing synchronization, i.e. showing boundedness of the trajectories.
This challenge is addressed in Section III-A, where we give a general, yet hard to check, condition for synchronization (Proposition 1). This condition is made tractable in Section III-B for the case of equal coupling strengths. Further, in Section III-C we present an optimization approach to study the case when the coupling strengths can be different for different pairs of connected oscillators. We illustrate our findings using simulations in Section IV and conclude in Section V.

II. PROBLEM FORMULATION

In this article we study a system of Kuramoto oscillators in which each oscillator is described by the following equation:

φ̇_i = ω_i + (1/n) Σ_{j∈N_i} K_ij sin(φ_j − φ_i), i = 1, . . . , n, (1)

where N_i is the set of oscillators connected to oscillator i, i.e. the set of its neighbors, K_ij is the coupling strength between oscillators i and j, and n is the total number of oscillators in the system. The coupling strength is symmetric (K_ij = K_ji, ∀i, j), and can be the same for all connections, as assumed in Section III-B, or can be different for different pairs of connected oscillators, as in Section III-C. We also assume that the intrinsic frequencies of the oscillators ω_i are heterogeneous, which means that they are not necessarily equal. The frequencies, however, do not change their values with time, so each ω_i is a constant. Passing to a frame rotating with the mean frequency, system (1) can be rewritten in terms of the frequency deviations ω̄_i = ω_i − (1/n) Σ_j ω_j, which sum to zero:

φ̇_i = ω̄_i + (1/n) Σ_{j∈N_i} K_ij sin(φ_j − φ_i), i = 1, . . . , n. (2)

We will show frequency synchronization of system (2) by providing a Lyapunov function and using LaSalle's Invariance Theorem [17]. When the oscillators are homogeneous, all the intrinsic frequencies are equal, i.e. the deviations ω̄_1 = · · · = ω̄_n = 0, and the following Lyapunov function can be used:

V_0(φ) = (1/n) Σ_{ij∈E} K_ij (1 − cos(φ_i − φ_j)),

where φ ∈ R^n and E is the edge set of a given graph. It can be verified that V̇_0(φ) = −Σ_i φ̇_i² ≤ 0. Since the function V_0(φ) is well-defined on the n-dimensional torus T^n, which is compact, applying LaSalle's Invariance Theorem (on T^n) guarantees synchronization of the oscillators. When the intrinsic frequencies are not equal, we have a system of heterogeneous oscillators, and we can still provide a potential function for this case:

V(φ) = V_0(φ) − Σ_i ω̄_i φ_i.

We can check that the derivative of this function is also nonpositive and is equal to zero only at equilibrium, i.e. when the frequencies are synchronized. The key problem here is that the function V(φ) is not bounded from below and cannot be defined on T^n. Therefore, we are not able to apply the LaSalle Invariance Theorem directly. However, if we show that the trajectories φ ∈ R^n of (2) are bounded, then the function V(φ) is bounded as well, and hence synchronization follows. One of the techniques for showing boundedness of the trajectories is to find a bounded Positively Invariant Set (PIS) for the oscillators' phases. The goal of this article is to show that when certain conditions are met, a PIS exists, and if the initial phases are in this PIS, then the trajectories will be bounded and, therefore, system (2) will achieve frequency synchronization.

III. MAIN RESULTS

This section is organized as follows: we first introduce the notation used in this article and provide a general synchronization condition in Proposition 1. We also demonstrate by means of an example that the existence of an equilibrium does not guarantee that system (2) achieves frequency synchronization for all initial phase values. In Subsection B we provide an analytic synchronization condition for system (2) with equal coupling strengths. In Subsection C we study the more general case when the coupling strengths can be different for different edges.
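For illustration, the dynamics of system (2) can be integrated numerically. The following minimal Python sketch uses a simple forward-Euler step; the graph, frequencies, initial phases, and coupling value are placeholder choices, not values from this article (the chosen K exceeds the Theorem 1 bound computed later, so synchronization is expected).

```python
import numpy as np

def kuramoto_step(phi, omega, K, A, dt):
    """One forward-Euler step of system (2):
    phi_i' = omega_i + (1/n) * sum_{j in N_i} K_ij * sin(phi_j - phi_i)."""
    n = len(phi)
    diff = phi[None, :] - phi[:, None]            # diff[i, j] = phi_j - phi_i
    coupling = (K * A * np.sin(diff)).sum(axis=1) / n
    return phi + dt * (omega + coupling)

# Placeholder setup: a 4-node diameter-two graph (triangle 1-2-3 plus pendant 4)
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]], dtype=float)
omega = np.array([0.3, -0.1, 0.1, -0.3])          # frequency deviations, sum zero
K = 4.0 * np.ones((4, 4))                         # uniform coupling (arbitrary)
rng = np.random.default_rng(1)
phi = rng.uniform(-0.5, 0.5, size=4)              # D_0 < pi/2

for _ in range(200000):
    phi = kuramoto_step(phi, omega, K, A, dt=1e-3)

# Frequency synchronization means all instantaneous frequencies coincide
rates = omega + (K * A * np.sin(phi[None, :] - phi[:, None])).sum(axis=1) / 4
print("spread of instantaneous frequencies:", np.ptp(rates))
```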
A. Preliminary Results

Let G = (V, E) be an undirected graph with vertex set V and edge set E that defines the topology of system (2). The distance between vertices i and j is defined as the number of edges in the shortest path between them, where the length of a path is defined as the number of edges in it. The diameter of a graph is defined as the maximum distance between two of its vertices. All the results presented in this article are formulated for graphs of diameter two. We denote by A the symmetric adjacency matrix of a graph G, and define for each pair of vertices i, j the constant P_ij:

P_ij = a_i · a_j^T + 2 A_ij,

where a_i and a_j are the i-th and j-th rows of the matrix A and A_ij is the (ij)-th element of the matrix A. The dot product a_i · a_j^T is equal to the number of common neighbors of vertices i and j, and A_ij = 1 if and only if there is an edge between i and j in E. For example, if i and j are connected and have 3 common neighbors, then P_ij = 5. Since the diameter of the graphs considered in this article is less than three, P_ij ≥ 1 for all pairs of vertices i, j. We denote the maximum and minimum phase values at time t by φ_max^t and φ_min^t. Let D_t be defined as the maximum phase difference between two oscillators at time t (t ≥ 0), i.e.

D_t = φ_max^t − φ_min^t = max_{i,j} (φ_i^t − φ_j^t).

In other words, each phase lies between the minimal and maximal phases φ_min^t and φ_max^t. The maximum initial (at time t = 0) pairwise phase difference is denoted by D_0 = φ_max^0 − φ_min^0. If the maximum phase difference remains bounded by a constant D, then the trajectories will also be bounded, since the phase average remains the same (for system (2): φ̇_1 + · · · + φ̇_n = 0). The PIS, therefore, is defined through the maximum phase difference being bounded by the value of D:

S(D) = { φ ∈ R^n : max_{i,j} (φ_i − φ_j) ≤ D, Σ_i φ_i = const },

which is obviously compact. We now formulate a general sufficient condition that guarantees that the maximum phase difference is always bounded by a constant D, and thus the trajectories are also bounded.

Proposition 1 If D_0 ≤ D < π and, whenever D_t = D,

Σ_{i∈N_k} K_ki sin(φ_k^t − φ_i^t) + Σ_{j∈N_l} K_lj sin(φ_j^t − φ_l^t) ≥ n (ω̄_k − ω̄_l) (6)

for every two oscillators k and l such that φ_k^t = φ_max^t and φ_l^t = φ_min^t, then the maximum phase difference is bounded by D, i.e. D_t ≤ D for all t ≥ 0, the trajectories of system (2) are bounded, and system (2) achieves frequency synchronization.

Proof: Condition (6) says that φ̇_k^t − φ̇_l^t ≤ 0, so that when the maximum phase difference reaches the value D, it cannot grow any further and thus does not exceed D. This implies that the trajectories of system (2) are bounded in R^n, since the phase average is always constant. Further, the function V(φ) is then bounded along the trajectories, and we can apply LaSalle's Invariance Theorem to guarantee that each solution of (2) approaches the nonempty set {V̇ ≡ 0} = {φ̇_i = 0, 1 ≤ i ≤ n}, and system (2) achieves frequency synchronization.

It is possible that when φ_max^t − φ_min^t = D, several oscillators have phase values equal to φ_max^t or φ_min^t. In this case condition (6) should be satisfied for each pair of oscillators with a phase difference equal to D. Condition (6) is very general by itself and cannot be directly applied to ensure boundedness of the trajectories and frequency synchronization of a given system. In the next two subsections we derive two conditions that can be easily verified for each given system and guarantee that condition (6) of Proposition 1 is satisfied. In particular, in Subsection B we derive an analytic condition for the case of equal coupling strengths, and in Subsection C we formulate an optimization problem for the case of non-equal coupling strengths. An alternative line of work [8]-[10] focuses on results that guarantee the existence of a locally stable equilibrium manifold for system (2).
These local results, however, cannot guarantee synchronization for arbitrary values of the initial phases (different from the equilibrium phases). We finish this subsection with an example demonstrating that the existence of a locally stable equilibrium for system (2) does not imply synchronization of this system for all possible initial phases. Therefore, the existence of an equilibrium is not a sufficient condition for synchronization from all initial phases.

Example 1 In this example three oscillators are connected as shown in Fig. 1, i.e. they form a star graph with three nodes. We assume that ω̄_1 = 2 − ε and ω̄_2 = ω̄_3 = −1 + ε/2, where ε is a small positive constant, and all coupling strengths are equal. It is easy to verify that this system possesses a locally stable equilibrium. However, there are initial phases φ_1^0, φ_2^0 and φ_3^0 for which the system does not achieve synchronization. Fig. 2 shows the behavior of the oscillators for φ_1^0 = 0, φ_2^0 = π/2 and φ_3^0 = −π/2, with ε = 0.1. The graph on the right side of Fig. 2 shows the Lyapunov function V(φ); this function decreases but is not bounded in this example.

B. Analytic Synchronization Condition for System (2) with Equal Coupling Strengths

In this subsection we consider a special case of system (2) in which the coupling strengths are equal for all connected oscillators, i.e. we study the following system:

φ̇_i = ω̄_i + (K/n) Σ_{j∈N_i} sin(φ_j − φ_i), i = 1, . . . , n. (7)

The main result of this subsection is Theorem 1, which contains requirements on the initial phases and the coupling strength such that condition (6) of Proposition 1 is satisfied and therefore system (7) achieves frequency synchronization.

Theorem 1 If D_0 ≤ D < π and

K ≥ n · |ω̄_i − ω̄_j| / (P_ij · sin D) (8)

for all i, j = 1, . . . , n, then D_t ≤ D ∀ t ≥ 0 for system (7) in which the underlying topology is a graph with diameter ≤ 2, and this system achieves frequency synchronization.

Proof: Assume that at a time moment T ≥ 0 the value of D_T is equal to D and before this moment it never exceeded D, i.e. D_t ≤ D ∀t ≤ T. We will show that under the conditions of this theorem the maximum phase difference does not start to increase at time T, by showing that requirement (6) of Proposition 1 is satisfied. This will guarantee that the maximum phase difference D_t is always bounded by D. Condition (6) must be satisfied for every two oscillators k and l with φ_k^T = φ_max^T and φ_l^T = φ_min^T. This condition will be satisfied if K ≥ n·|ω̄_k − ω̄_l| / (P_kl · sin D) and if we can show that

Σ_{i∈N_k} sin(φ_k^T − φ_i^T) + Σ_{j∈N_l} sin(φ_j^T − φ_l^T) ≥ P_kl · sin D. (9)

Because φ_k^T and φ_l^T are respectively the maximum and minimum phase values at time T (see Fig. 3), 0 ≤ φ_k^T − φ_i^T ≤ D and 0 ≤ φ_j^T − φ_l^T ≤ D for all i ∈ N_k and j ∈ N_l. Therefore, each summand on the left side of inequality (9) is nonnegative. If vertices k and l are connected by an edge, both sums contain sin(φ_k^T − φ_l^T) = sin D, and thus the left side of (9) contains 2 sin D. Assume now that vertices k and l have a common neighbor, vertex m. Then the left side of inequality (9) contains the following sum:

sin(φ_k^T − φ_m^T) + sin(φ_m^T − φ_l^T) = 2 sin(D/2) cos((φ_k^T + φ_l^T − 2φ_m^T)/2) ≥ 2 sin(D/2) cos(D/2) = sin D.

The inequality above holds because |φ_k^T + φ_l^T − 2φ_m^T| ≤ D, so that cos((φ_k^T + φ_l^T − 2φ_m^T)/2) ≥ cos(D/2). Therefore, the left side of (9) contains a sum that is greater than or equal to sin D for each common neighbor m of vertices k and l (Fig. 3 shows the oscillators φ_k^T and φ_l^T with the maximum and minimum phases, respectively). In addition, if k and l are connected by an edge, there is a term 2 sin D on the left side of (9), and thus inequality (9) holds. This proves that condition (6) of Proposition 1 is satisfied under the theorem's conditions.

Remark 1 If D_0 ≤ π/2, the smallest value of bound (8) is achieved for D = π/2. When π/2 < D_0 < π, bound (8) takes its smallest value if D = D_0.

Remark 2 In the case of a complete graph, P_ij = n for each pair i, j of vertices, and the sufficient condition on K is the following: K ≥ |ω̄_i − ω̄_j| / sin D_0 for all i, j. This bound coincides with the bound obtained in [8] for a complete graph.

Remark 3 When the diameter of a graph is larger than two, Theorem 1 cannot be applied in general, and condition (6) of Proposition 1 can be violated. For instance, if the distance between vertices k and l is more than two, then in condition (6) both sums may be equal to zero: sin(φ_k^t − φ_i^t) = 0 ∀i ∈ N_k, sin(φ_j^t − φ_l^t) = 0 ∀j ∈ N_l, and φ̇_k^t − φ̇_l^t > 0 if ω̄_k > ω̄_l. However, Theorem 1 can be applied to graphs with a diameter of more than two if every two oscillators with a shortest path between them of length more than two have equal frequencies. In this case condition (6) is always satisfied for such two oscillators. Indeed, if ω̄_k = ω̄_l and (φ_k^t − φ_l^t) = (φ_max^t − φ_min^t) = D < π, then φ̇_k^t − φ̇_l^t ≤ 0, because sin(φ_k^t − φ_i^t) ≥ 0 and sin(φ_j^t − φ_l^t) ≥ 0 for all i ∈ N_k and j ∈ N_l.
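Bound (8) is straightforward to evaluate from the adjacency matrix. A small sketch follows, with P_ij computed as the number of common neighbors plus 2A_ij and D chosen per Remark 1; the example call uses the star graph and frequencies of Example 1 as reconstructed above.

```python
import numpy as np

def theorem1_bound(A, omega, D0):
    """Smallest uniform K satisfying condition (8),
    K >= n * |w_i - w_j| / (P_ij * sin D) for all pairs i, j,
    with D = max(pi/2, D0) per Remark 1. Assumes D0 < pi and a graph of
    diameter at most two, so that P_ij >= 1 for every pair."""
    n = A.shape[0]
    P = A @ A + 2 * A                  # (A @ A)[i, j] = number of common neighbors
    D = max(np.pi / 2, D0)
    K = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            K = max(K, n * abs(omega[i] - omega[j]) / (P[i, j] * np.sin(D)))
    return K

# Illustrative call on the star graph of Example 1 (epsilon = 0.1)
eps = 0.1
A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]])
omega = np.array([2 - eps, -1 + eps / 2, -1 + eps / 2])
print(theorem1_bound(A, omega, D0=np.pi / 2))
```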
C. Optimization Approach for System (2) with Non-equal Coupling Strengths

In this subsection the equal-coupling-strength assumption is relaxed. Instead of one coupling parameter K, as in the previous subsection, there are now |E| coupling parameters K_ij, where |E| is the cardinality of the graph's edge set E. Similarly to condition (8) in Theorem 1, we will find bounds on the coupling strengths K_ij that guarantee frequency synchronization of system (2); but instead of providing an analytic condition such as (8), we will formulate an optimization problem whose solution contains coupling strengths K_ij that guarantee (6) and are sufficient for synchronization. While in Theorem 1 the goal was to find the minimum value of the coupling parameter K that guarantees synchronization, minimizing the sum of all coupling strengths Σ_{ij∈E} K_ij will be the goal in the case of non-equal coupling strengths. In condition (6) we assume that φ_k^t = φ_max^t, φ_l^t = φ_min^t and φ_k^t − φ_l^t = D. Since D < π, all values of the sin() functions in each sum of (6) are nonnegative. Instead of condition (6) we will consider a stricter condition on the coupling parameters, in which we keep only the summands corresponding to the common neighbors of the two oscillators k and l:

2 K_kl sin D + Σ_{m∈N_kl} [ K_km sin(φ_k^t − φ_m^t) + K_lm sin(φ_m^t − φ_l^t) ] ≥ n (ω̄_k − ω̄_l), (10)

where N_kl = N_k ∩ N_l is the set of common neighbors of oscillators k and l. If there is no edge kl between oscillators k and l, then K_kl = 0 in (10). We will introduce constraints that do not contain the phases and that guarantee that condition (10) (and hence (6) as well) is satisfied for all phase values. The optimization problem, whose |E| variables are the coupling strengths K_ij (ij ∈ E), allowed to take nonnegative values, is formulated as follows:

minimize Σ_{ij∈E} K_ij subject to 2 K_kl + Σ_{m∈N_kl} [ δ_m K_km + (1 − δ_m) K_lm ] ≥ n |ω̄_k − ω̄_l| / sin D, K_ij ≥ 0, (11)

where 1 ≤ k, l ≤ n and each δ_m may take the values {0, 1}. Since either δ_m or (1 − δ_m) takes the value zero, the variables K_km and K_lm do not appear together in any single constraint. For each possible combination of values of the δ_m there is a corresponding constraint, and therefore for each pair of oscillators k and l there are 2^{|N_kl|} constraints in the optimization problem, where |N_kl| is the number of common neighbors of oscillators k and l. For example, suppose that oscillators k and l are connected and have a single common neighbor m; then optimization problem (11) will contain two constraints for oscillators k and l:

2 K_kl + K_km ≥ n |ω̄_k − ω̄_l| / sin D and 2 K_kl + K_lm ≥ n |ω̄_k − ω̄_l| / sin D.

If, for example, oscillators k and l are not connected and have two common neighbors m_1, m_2, then there will be four constraints for k and l, one for each choice of (δ_{m_1}, δ_{m_2}) ∈ {0, 1}²:

K_{km_1} + K_{km_2} ≥ n |ω̄_k − ω̄_l| / sin D, K_{km_1} + K_{lm_2} ≥ n |ω̄_k − ω̄_l| / sin D, K_{lm_1} + K_{km_2} ≥ n |ω̄_k − ω̄_l| / sin D, K_{lm_1} + K_{lm_2} ≥ n |ω̄_k − ω̄_l| / sin D.

Thus, optimization problem (11) contains in total Σ_{1≤k<l≤n} 2^{|N_kl|} constraints. Although the number of constraints can be exponential in the number of oscillators n, for some types of graphs it is polynomial in n. For example, for graphs with a star-tree topology, each pair of oscillators has at most one common neighbor, and thus not more than two corresponding constraints.

Remark 4 If all coupling strengths are required to be equal in optimization problem (11), then its solution is bound (8) from Theorem 1. Indeed, when all coupling strengths are equal, K_kl = K_km = K_lm = K in the constraints of (11) for ω̄_k and ω̄_l, Σ_{m∈N_kl} [δ_m K_km + (1 − δ_m) K_lm] = K |N_kl|, and the constraint becomes K ≥ n·|ω̄_k − ω̄_l| / (P_kl · sin D).

We will now show that a solution of this optimization problem satisfies condition (10) for all possible phase values.

Theorem 2 Let K*_ij, ij ∈ E, be a solution of optimization problem (11) with D_0 ≤ D < π. Then condition (6) of Proposition 1 is satisfied, and system (2) with the coupling strengths K*_ij achieves frequency synchronization.

Proof: Suppose that K*_ij, where ij ∈ E, is a solution of optimization problem (11). We are going to show that condition (10) is satisfied for two arbitrary oscillators k and l with φ_k^t − φ_l^t = D. This implies that condition (6) is also satisfied, since condition (10) is more restrictive than (6). For arbitrary phases φ_m^t such that φ_l^t ≤ φ_m^t ≤ φ_k^t for all m ∈ N_kl, the left side of (10) can be bounded from below:

2 K*_kl sin D + Σ_{m∈N_kl} [ K*_km sin(φ_k^t − φ_m^t) + K*_lm sin(φ_m^t − φ_l^t) ] ≥ 2 K*_kl sin D + Σ_{m∈N_kl} min(K*_km, K*_lm) sin D.

Now we can observe that for the right side of the last inequality there exists a constraint in (11) that guarantees that this right side is at least n |ω̄_k − ω̄_l|: if, for example, min(K*_km, K*_lm) = K*_km, then the corresponding constraint in (11) has δ_m = 1, and otherwise δ_m = 0.

We finish this section with an example for which we found values of the coupling strengths that are sufficient for synchronization: first under the condition that all coupling strengths must be equal, using Theorem 1, and then allowing the strengths to be non-equal and solving optimization problem (11).

Example 2 In this example we consider four oscillators connected as shown in Fig. 4, with heterogeneous frequencies. There are four edges in this graph, i.e. four coupling strengths K_ij, and thus four variables in problem (11). Notice that P_12 = P_13 = P_23 = 3, P_14 = 2, and P_24 = P_34 = 1. If we assume that all the coupling strengths are equal, then by Theorem 1 from the previous subsection, a coupling strength sufficient for synchronization is K = 0.5 · n = 2 (from the inequality for pair 34). The sum of all coupling strengths is then 4 · 2 = 8. If we let the coupling strengths be different for different edges, the optimization problem has the solution K_12 = 0.8, K_13 = 2, K_23 = 0.2, and K_14 = 2. Now the sum of the coupling strengths is 5. The optimization problem for this example contains eleven inequality constraints (besides the constraints K_ij ≥ 0). For the optimization we used Matlab's R2012a fmincon function with default options.
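Since the constraints in (11) as reconstructed above are linear in the K_ij, the problem is a linear program. The article used Matlab's fmincon; the sketch below instead assembles and solves the same program with scipy.optimize.linprog. The graph is the four-node graph of Example 2 as we read it from the P_ij values; the frequency vector is a placeholder, since Example 2's frequency values are not reproduced here.

```python
import itertools
import numpy as np
from scipy.optimize import linprog

def min_total_coupling(A, omega, D):
    """Sketch of optimization problem (11): minimize the sum of the edge
    couplings K_ij subject to, for every vertex pair (k, l) and every
    delta in {0, 1}^{|N_kl|},
        2*K_kl + sum_m [ delta_m*K_km + (1 - delta_m)*K_lm ]
            >= n * |w_k - w_l| / sin(D),
    with K_ij >= 0 (K_kl absent when kl is not an edge). Assumes a graph
    of diameter at most two, so every pair contributes a nontrivial row."""
    n = A.shape[0]
    edges = [(i, j) for i in range(n) for j in range(i + 1, n) if A[i, j]]
    eidx = {e: t for t, e in enumerate(edges)}

    A_ub, b_ub = [], []
    for k, l in itertools.combinations(range(n), 2):
        common = [m for m in range(n) if A[k, m] and A[l, m]]
        rhs = n * abs(omega[k] - omega[l]) / np.sin(D)
        for delta in itertools.product((0, 1), repeat=len(common)):
            row = np.zeros(len(edges))
            if A[k, l]:
                row[eidx[(k, l)]] += 2.0
            for d, m in zip(delta, common):
                e = (min(k, m), max(k, m)) if d else (min(l, m), max(l, m))
                row[eidx[e]] += 1.0
            A_ub.append(-row)              # linprog expects A_ub @ x <= b_ub
            b_ub.append(-rhs)
    res = linprog(np.ones(len(edges)), A_ub=np.array(A_ub),
                  b_ub=np.array(b_ub), bounds=[(0, None)] * len(edges))
    return dict(zip(edges, res.x)), res.fun

# Example 2 graph (triangle 1-2-3 plus edge 1-4); placeholder frequencies
A = np.array([[0, 1, 1, 1],
              [1, 0, 1, 0],
              [1, 1, 0, 0],
              [1, 0, 0, 0]])
omega = np.array([0.35, 0.15, 0.0, -0.5])          # hypothetical, sums to zero
K_opt, total = min_total_coupling(A, omega, D=np.pi / 2)
print(K_opt, total)
```

For this graph the builder generates exactly the eleven inequality constraints counted in Example 2, which is a useful consistency check on the reconstruction of (11).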
IV. NUMERICAL SIMULATIONS

In this section we present the results of simulations performed to demonstrate that, for graphs of diameter two, the synchronization condition formulated in Theorem 1 is less restrictive than existing conditions.
Since Theorem 1 guarantees the existence of a Positively Invariant Set and frequency synchronization of system (2), we compared our bound with similar conditions that also guarantee the existence of a PIS and frequency synchronization. To the best of our knowledge, there are three such conditions: Theorem 4.6 from [8], the results from [14], and the conditions (analytic and numerical) in [12]. We therefore did not include in the comparison the conditions from [8], [9] and [10] that only provide existence of an equilibrium and local stability. The numerical condition of [12] is less restrictive than the analytic condition of the same article, and we here consider only the former. In addition, we added to our comparison a numerical synchronization condition from Theorem 2, which allows the coupling strengths to be different; for each given example we calculated the average coupling strength of the solution to (11). Each of the five synchronization conditions compared in this section consists of a bound on the coupling strength and constraints on the initial phases of the oscillators. In particular, all synchronization conditions require that the difference between any two initial phases is less than π (i.e. D_0 < π). Additionally, the synchronization conditions from [8], [12] and [14] have their own special constraints on the initial phases. The bounds on the coupling strength and the corresponding requirements on the initial phases are summarized in Table 1. In the simulations we assigned the value max{π/2, D_0} to the constant D for our synchronization condition, because in this case bound (8) is the least restrictive, as mentioned in Remark 1. In the bound from [8], λ_2 is the algebraic connectivity of a given graph, B_c ∈ R^{n×n(n−1)/2} is the incidence matrix of the complete graph with n nodes, ω̄ is the vector of frequencies, φ(0) is the vector of initial phases, and γ_max = max{π/2, ||B_c^T φ(0)||_2}. In the condition from [14], E_0 is the squared Euclidean norm of the vector of initial phases, σ(ω̄) denotes the Euclidean norm of the vector of intrinsic frequency deviations, D is a constant whose value is defined as max{π/2, D_0}, and E_comp is the set of n(n−1)/2 edges of a complete graph with n nodes. In our analysis we compared the requirements on both the initial phases and the coupling strength for the five synchronization conditions.

Experiment 1 (comparison of the constraints on initial phases). In the first experiment we checked the restrictiveness of the constraints on the initial phases for each of the five synchronization conditions under consideration. We created 10^5 samples of initial phases such that each phase was chosen from the (0, π) interval. Then, for each sample, we subtracted the sample's mean phase value from each phase belonging to it. Therefore, the sum of the initial phases was equal to zero, and the maximum phase difference was less than π in each sample. We shifted the phase values of each sample by the sample's mean because the condition from [14] requires that Σ_{i=1}^n φ_i^0 = 0, while the other synchronization conditions depend only on the relative values of the initial phases and thus are rotationally invariant. Next, for each sample we checked whether it satisfies the constraints on the initial phases of each synchronization condition, and for each condition we calculated the fraction of samples that satisfy its initial-phase requirements. We repeated this experiment for different numbers of oscillators in the system, n = 5, . . . , 10; the experiment's results are shown in Fig. 5a. Since our synchronization conditions in Theorems 1 and 2 do not contain any additional requirements on the initial phases, they can be applied to every generated sample of phases, and thus the fraction of acceptable initial phases is equal to one for all values of n. The fractions of acceptable initial phases for the conditions from [8], [14] and [12] monotonically decrease with the number of oscillators n, as can be observed in Fig. 5a.

Experiment 2 (comparison of the bounds on coupling strength). In the second experiment we compared the bounds on the coupling strength. For each fixed number of oscillators n = 5, . . . , 9 we randomly created 1000 graphs with n vertices and of diameter two. The initial edge set of each graph was empty, and we successively added random edges to it until the diameter was equal to two (see the sketch below).
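One possible implementation of this graph-sampling step, using the fact that for a connected graph diameter ≤ 2 is equivalent to P_ij ≥ 1 for all pairs, is sketched below; this is our own shortcut, not necessarily the authors' implementation.

```python
import numpy as np

def random_diameter_two_graph(n, rng):
    """Add uniformly random edges to an empty n-vertex graph until every
    pair of vertices is adjacent or shares a common neighbor (P_ij >= 1,
    i.e. diameter <= 2)."""
    A = np.zeros((n, n), dtype=int)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    rng.shuffle(pairs)
    off = ~np.eye(n, dtype=bool)
    for i, j in pairs:
        A[i, j] = A[j, i] = 1
        P = A @ A + 2 * A
        if (P[off] >= 1).all():
            return A        # diameter <= 2 (equal to 2 unless the graph is complete)
    return A

rng = np.random.default_rng(0)
A = random_diameter_two_graph(6, rng)
```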
For each graph we then created a random sample of initial phases and a random sample of frequencies, and calculated the bound on K for each condition. For the numerical condition in Theorem 2 we calculated the average value of K for each example. The average values of the bounds for each of the five conditions under comparison are plotted in Fig. 5b on a logarithmic scale. In this experiment we sampled the values of the frequencies from the (0, 1) interval, but the relative performance of the bounds does not noticeably change with the interval. The simulation results of Experiments 1 and 2 show that for graphs of diameter two our synchronization condition formulated in Theorem 1 is less restrictive, in terms of both initial phases and coupling strength, than the existing conditions. Additionally, the optimization-based condition in Theorem 2 provides a further improvement when the value of its bound is defined as the average coupling strength for each example.

V. CONCLUSION

In this article we employed the notion of a Positively Invariant Set to find a sufficient condition for frequency synchronization of heterogeneous Kuramoto oscillators connected by a graph of diameter two. We showed that the existence of a PIS ensures the boundedness of the trajectories of the oscillators, which in turn provides synchronization. For the case when the coupling strength is the same for every two connected oscillators, we provided an analytic synchronization condition, and demonstrated with simulations that this condition is significantly less restrictive than existing ones. For the case when the coupling is allowed to take distinct values for different pairs of oscillators, we formulated an optimization problem whose solution, a set of coupling strengths, guarantees frequency synchronization.
2015-02-21T19:44:16.000Z
2015-02-21T00:00:00.000
{ "year": 2015, "sha1": "e14ae8820b5aa79e9d93d369ef962bdaf6d6fc5b", "oa_license": null, "oa_url": "http://arxiv.org/pdf/1502.06137", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "9397bbc709400dc1fe520ba9ef3b0d03e112da2f", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Mathematics", "Physics", "Computer Science" ] }
137589529
pes2o/s2orc
v3-fos-license
Thermal and Mechanical Stability of Retained Austenite in Aluminum-containing Multiphase TRIP Steels

Introduction

Low-alloyed multiphase transformation-induced plasticity (TRIP) steels have attracted growing interest in recent years due to their high strength and enhanced formability. 1,2) These excellent mechanical properties mainly arise from a martensitic transformation of metastable retained austenite under the influence of external tensile stress. Detailed insight into the stability range of retained austenite is thus regarded to be of the highest importance for controlling the material's properties. Numerous modelings 3,4) and experiments 5-9) show that the factors influencing the stability of retained austenite in TRIP steels include the chemical composition of the austenite, 6,7) the size 4,5) and shape 9) of the grain, the dislocation density, 8) etc. In the present study, the effect of the carbon concentration in retained austenite is highlighted. This is due to the fact that during the thermal processing of TRIP steels a significant inhomogeneity of the carbon distribution in austenite is introduced.
As reported, 9-12) the austenite can be retained along the bainite–ferrite boundary (so-called intergranular austenite), inside ferritic grains (inter-ferritic austenite) or between bainitic plates (interlath film-like austenite). 9) The inter-ferritic austenite would have a lower carbon concentration than the intergranular or interlath austenite, since it is probably formed during intercritical annealing 13) and carbon enrichment occurs during subsequent cooling. It is understandable that the interlath film-like retained austenite has a higher carbon concentration than the intergranular austenite, because it is enriched from both sides by the bainite plates. From the macroscopic point of view, it has been widely found that the austenite volume fraction in TRIP steels decreases with increasing strain during a tensile test. To describe the relation between the fraction of retained austenite (f_γ) and strain (e), Matsumura et al. 14) analyzed f_γ versus e data using the Austin and Rickett type of equation. 15-18) They found that the autocatalytic effect, i.e. the ability of martensite to accelerate the formation of additional martensite, is suppressed in TRIP steels and dual-phase steels due to the large amount of ferrite or bainite acting as barriers against the autocatalytic propagation. Therefore, the f_γ versus e relation is as follows:

1/f_γ − 1/f_γ⁰ = k·e, (1)

where f_γ⁰ is the initial volume fraction of retained austenite and k is a constant regarded as a measure of the mechanical stability of retained austenite. As the engineering strain is used in this paper, 17) the above relation is called the Ludwigson and Berger equation. The strain dependence of the austenite fraction, from which the mechanical stability of retained austenite can be determined, was usually determined by X-ray diffraction (XRD) on samples unloaded at different strain levels. 6,7,19) In the present work, the austenite fraction was determined under stress. To clarify the effect of the orientation between the applied stress and the diffraction planes, in situ XRD measurements were also performed using the synchrotron radiation diffraction technique. On the other hand, no work on the thermal stability of retained austenite in TRIP steels has been reported, although such studies can give useful information on the stability of retained austenite grains. The thermal martensitic transformation of retained austenite in TRIP steels is therefore also measured in the present work.

Experimental Procedure

Two Al-containing low-alloyed multiphase TRIP steels, Al1.8 grade and Al1.0 grade, were used in this work; their main compositions are shown in Table 1. The materials were machined into tensile samples for the in situ tensile tests or into cylindrical samples for the thermo-magnetization measurements. The samples were pre-annealed at 900°C for 10 min, which is in the ferrite/austenite two-phase region, and subsequently quenched to 400°C. The holding time at 400°C was 2 min for the Al1.8 grade steel and 1.5 min for the Al1.0 grade steel, which was found to be close to the time leading to the maximum volume fraction of retained austenite, 10,20) as also listed in Table 1. Samples with a diameter of 5 mm and a thickness of about 1 mm were used for the thermo-magnetization measurements, which were performed on a Quantum Design SQUID magnetometer (MPMS-5S). During the measurements, the samples were thermally cycled from 300 to 5 K at a constant magnetic field of 5 T.
Heating and cooling rates were very low (about 0.5 K/min), so that it can be assumed that the sample is in the equilibrium condition. The applied magnetic field is sufficiently large to approach magnetic saturation according to previous investigations. 20) To analyze the chemical driving force for the martensitic transformation, the Gibbs free energy of austenite and martensite was calculated employing the computational thermodynamics program MTData® (version 4.71). The SGTE (Scientific Group Thermodata Europe) database was employed in the calculation. In situ conventional X-ray diffraction (XRD) measurements under stress were performed on a Bruker D5005 X-ray diffractometer by means of a home-made tensile tester. After application of each stress step, a thin layer of standard silicon powder (NBS402) was pasted onto the sample surface to correct for the displacement of the surface during the tensile test by monitoring the shift of the {220} silicon peak. Co Kα radiation was used at 45 kV and 35 mA, and the 2θ range was from 45° to 95°. Three austenite (γ) peaks, {111}_γ, {200}_γ and {220}_γ, and three ferrite (α) peaks, {110}_α, {200}_α, {211}_α, were thus observed. From the net integral intensities and peak positions, the volume fraction of retained austenite 21) and the lattice parameters were calculated. In situ synchrotron radiation measurements were performed at beamline ID-11 of the European Synchrotron Radiation Facility (ESRF). A radiation beam with a size of 25×25 µm² and a wavelength of 0.155 Å was applied. The diffraction patterns were detected by a charge-coupled device (CCD) with an exposure time of 30 s and an oscillation angle of 0.5°. The tensile samples have a thickness of 0.4 mm, a width of 10 mm and a gauge length of 50 mm, with the length direction of the sample parallel to the rolling direction. The tensile tests were performed on an Instron 25 kN stress-rig and the diffraction patterns were taken at strain levels ranging from 0 to 0.12.

Thermal Stability by Magnetization Measurements

Figure 1 depicts the mass magnetization as a function of temperature at a constant magnetic field of 5 T during the thermal cycle from 300 to 5 K and back in the Al1.0 grade steel. The magnetization increases significantly with decreasing temperature. This is due, for the ferromagnetic phases ferrite and martensite, to the increase in the alignment of the atomic magnetic dipoles, and to the increase of martensite formed from retained austenite with decreasing temperature. From the result that the heating and cooling curves coincide at low temperature, one may see that nearly all retained austenite transforms to martensite after cooling, i.e. f_γ ≈ 0. One can also understand that the volume fraction of ferrite remains unchanged during the thermal cycle, i.e. f_α = constant. The magnetization during cooling (M_c) and heating (M_h) is thus given by M_c = f_α·M_α + (1 − f_α − f_γ)·M_α′ and M_h = f_α·M_α + (1 − f_α)·M_α′. Denoting by r the ratio of the magnetization of martensite (α′) and ferrite (α), r = M_α′/M_α, the temperature dependence of the austenite fraction during cooling can be determined by:

f_γ(T) = [(r + (1 − r)·f_α)/r] · [M_h(T) − M_c(T)]/M_h(T), (2)

where the factor (r + (1 − r)f_α)/r arises from the difference in magnetization between the ferrite and martensite phases. The ratio r is here taken as 0.90, which is the literature value for an Fe-1.4C steel at room temperature. 22) f_α is estimated to be 0.85. Therefore, the austenite fraction as a function of temperature can be calculated, as shown by the dots in Fig. 2.
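As a minimal sketch (assuming Eq. (2) in the reconstructed form above, with the quoted values r = 0.90 and f_α = 0.85 as defaults), the conversion from the measured heating and cooling magnetization curves to an austenite fraction could be coded as:

```python
import numpy as np

def austenite_fraction(M_cool, M_heat, f_alpha=0.85, r=0.90):
    """Eq. (2): f_gamma(T) = (r + (1 - r)*f_alpha)/r * (M_h - M_c)/M_h,
    where M_cool and M_heat are the magnetizations measured on cooling and
    heating at matching temperatures, r = M_martensite/M_ferrite, and
    f_alpha is the (temperature-independent) ferrite fraction."""
    M_cool = np.asarray(M_cool, dtype=float)
    M_heat = np.asarray(M_heat, dtype=float)
    return (r + (1.0 - r) * f_alpha) / r * (M_heat - M_cool) / M_heat
```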
One can see that the austenite fraction decreases with decreasing temperature as the transformation proceeds down to the M_f temperature. However, the initial austenite volume fraction at 300 K is found to be only 0.023, which is much lower than expected. To analyze the transformation behaviour in detail, the Gibbs free energy (G) of retained austenite and martensite was calculated. It is assumed that para-equilibrium is established during the heat treatment. Retained austenite therefore has an average composition of 1.40C-1.52Mn-0.25Si-0.96Al (in wt%), where the carbon concentration was determined from XRD. 23) From the Gibbs free energy, the chemical driving force for the martensitic transformation, ΔG = G_α′ − G_γ, is obtained. Taking the critical driving force for the start of the martensite transformation as 1260 J/mol, 23) the M_S temperature is calculated to be 345 K. Furthermore, it is known that a magnetic field assists the martensitic transformation by providing an additional driving force of H·M (H: applied magnetic field). 25) In a constant applied field of 5 T, this additional driving force is about 53 J/mol, which raises the M_S temperature by about 10 K. The M_S temperature is thus expected to be 355 K in a magnetic field of 5 T. Part of the retained austenite present after the heat treatment has therefore transformed upon cooling to room temperature. Another important piece of information from the thermodynamic analysis is the calculation of the β values, the ratio of the slopes of the temperature dependence of the Gibbs free energy for martensite (α′) and austenite (γ), which are around 0.8 and decrease slightly with decreasing temperature. The β values were used to predict the fraction–temperature relation (3), slightly modified from a relation developed by H. Y. Yu, 26) where f_γ⁰ is the austenite fraction at M_S and T is the temperature. As shown in Fig. 2, the results predicted from this equation were found to be well consistent with the experimental results. If the f_γ–T curve is extrapolated to the M_S temperature, one obtains that the austenite fraction at the end of the bainitic holding is 0.035. Assuming that the thermal stability of retained austenite is microscopically related only to the carbon concentration in the austenite, the f_γ–T relation can thus be explained. The transition temperature of an individual austenite grain with carbon concentration C%, the M_S temperature, can be calculated using Andrews' empirical equation: 27)

M_S = 766 − 425 × C% (in K), (4)

where the constant 766 is calculated considering the redistribution of alloying elements during intercritical annealing. 23) The calculated C%–M_S relation is plotted in Fig. 2, and it can be understood from this relation that, for instance, at room temperature the possible carbon contents in the different retained austenite grains vary from 1.10 to 1.55 wt%. This is because austenite with a lower carbon content would have an M_S temperature higher than room temperature and would already have transformed to martensite, while austenite with a higher carbon content would start to transform at temperatures lower than 120 K. With decreasing temperature, the austenite grains with the lowest carbon concentration transform first.
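Equation (4) can be inverted to read off, at any temperature, the carbon content of the grains that are just becoming unstable. A small sketch:

```python
def ms_andrews(carbon_wt_pct):
    """Eq. (4): M_S (K) of an austenite grain with the given carbon content (wt%)."""
    return 766.0 - 425.0 * carbon_wt_pct

def carbon_at_ms(T_kelvin):
    """Inverse of Eq. (4): the carbon content (wt%) whose M_S equals T;
    grains with less carbon have already transformed above this temperature."""
    return (766.0 - T_kelvin) / 425.0

print(carbon_at_ms(300.0))   # ~1.10 wt% at room temperature
print(carbon_at_ms(120.0))   # ~1.52 wt% near M_f
```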
Mechanical Stability Measured by Conventional XRD

From the in situ conventional XRD measurements at different strains, the volume fraction of retained austenite in the Al1.8 grade steel was determined, as shown in Fig. 3. The error bars indicated in the figure are those calculated from the counting statistics only. 20) Figure 3 confirms that Eq. (1) is also valid for the Al-containing TRIP steel investigated; that is, the autocatalytic effect is suppressed due to the ferrite and bainite surrounding the transforming austenite. Using Eq. (1), the k-value, regarded as a measure of the mechanical stability of retained austenite, is determined to be 72. In comparison with the k-values for silicon-containing TRIP steels 7,14,19) and dual-phase steels, 14) the Al-grade steels are in general less stable than silicon-containing TRIP steels but much more stable than dual-phase steels. The lattice parameter, a, of austenite was determined from the position of the {220}_γ peak. The reason for choosing this peak lies in the fact that its 2θ value is the highest, which leads to the highest accuracy. Figure 4 shows the lattice parameter as a function of stress. One can see that the lattice parameter decreases with increasing stress. This is because the diffraction plane is parallel to the sample surface, and tension along a direction parallel to the diffraction planes leads to a decrease of the interplanar spacing of those planes. When the stress is less than 250 MPa, the lattice parameter decreases more or less linearly with increasing stress. However, at higher stress the lattice parameter deviates from this linear relation. This is because a significant fraction of the austenite transforms to martensite as a result of stress- or strain-induced martensitic transformation. As analyzed below, austenite with a lower carbon concentration is more likely to transform at lower stress values. As a consequence, the lattice parameter of the remaining austenite shifts to values that are larger than given by the linear relation. Assuming that no martensitic transformation occurs at stresses below 250 MPa, the effect of stress on the lattice parameter is extrapolated from the linear relation at low stress levels to the maximum stress, as shown by the dotted line in Fig. 4. Subtracting this linear lattice parameter–stress relation from the measured values, the contribution of carbon to the austenitic lattice parameter is established using the following relation:

a_C (Å) = 3.5980 + 0.033·C (wt%), (5)

where a_C is the austenitic lattice parameter as influenced only by the carbon concentration (C), and the effect of the other alloying elements (Al and Mn) is reflected in the constant of the relation above. 23) The carbon concentration as a function of stress is also plotted in Fig. 4, and one can see that the average carbon concentration increases from about 0.94 to 1.06 wt% during the tensile test. From a thermodynamic point of view, this composition change corresponds to only about a 50 K decrease of the M_S temperature 27) or about a 300 J/mol decrease of the chemical driving force. 4) Therefore, the XRD measurements show a small carbon concentration variation in the retained austenite.
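A sketch combining the two relations used in this section: Eq. (5) inverted to extract the carbon content from the stress-corrected lattice parameter, and a least-squares fit of the Ludwigson and Berger relation (Eq. (1)) to obtain the stability measure k. The fitting approach is a plain linear regression of 1/f_γ on strain, one reasonable reading of how k could be extracted.

```python
import numpy as np

def carbon_from_lattice(a_gamma):
    """Invert Eq. (5), a_C = 3.5980 + 0.033*C, to obtain the carbon
    concentration (wt%) from the stress-corrected austenite lattice
    parameter (in angstroms)."""
    return (np.asarray(a_gamma, dtype=float) - 3.5980) / 0.033

def fit_k(strain, f_gamma):
    """Fit Eq. (1), 1/f_gamma - 1/f_gamma0 = k*e, by linear least squares
    of 1/f_gamma against engineering strain e; returns (k, f_gamma0)."""
    e = np.asarray(strain, dtype=float)
    y = 1.0 / np.asarray(f_gamma, dtype=float)
    k, intercept = np.polyfit(e, y, 1)    # slope = k, intercept = 1/f_gamma0
    return k, 1.0 / intercept
```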
Mechanical Stability Measured by Synchrotron Radiation

To understand the orientation effect on the mechanical stability of retained austenite, in situ synchrotron radiation measurements of the Al1.8 grade steel were performed in a stress rig. Figure 5 shows an example of the measured diffraction pattern and the definition of the angle (η) between the normal of the diffracting plane and the direction of the applied stress. The first quarter of the austenite {200}_γ diffraction ring, at which 2θ ≈ 4.9°, was analyzed in this paper. The austenite fraction is determined from the relative intensity change, and the fraction as a function of strain is shown in Fig. 6. The graph shows that the fraction of austenite decreases with increasing strain, except for a few points. Fitting the fraction-versus-strain relation in Fig. 6 by Eq. (1) with variable k and f_γ⁰ for the different orientations, the k-values listed in Table 2 were obtained. One can see that the retained austenite at η = 0° or 90° is less stable and transforms preferentially, while the austenite is most stable at η = 45° or 60°. The mechanical stability at η = 45° or 60° is close to that obtained by the conventional XRD measurements. The variation in f_γ⁰ indicates the presence of a texture in the retained austenite. Similar to the analysis in the previous section, both the stress and the carbon content in the retained austenite determine the interplanar spacing of the diffracting plane, which is proportional to the diameter of the diffraction ring. To calculate the stress (σ) effect, equation (6) relating the strain component ε_2 normal to the diffraction plane to the applied stress was derived, 28) where c_11 and c_12 are the elastic constants and ε_2 results from the change of interplanar spacing. Using c_11 = 217 GPa and c_12 = 145 GPa, the strain tensor was calculated and is presented by the lines in Fig. 7(a); one can see that the stress effect gives the diffraction ring an oval shape, rather than an exact circle. From the relative values of the distance between the center and the edge of the diffraction ring, the strain tensor is also determined from the experimental data. After subtracting the stress effect on the strain tensor, the remaining strain is attributed to the change in the carbon content of the austenite. Therefore, the carbon concentration in the remaining austenite is determined using Eq. (5), and the results are shown in Fig. 7(b). There is a significant trend that the carbon concentration increases as the austenite fraction decreases, in line with the expected increase in retained austenite stability with increasing carbon concentration. The variation in average carbon concentration is comparable to that obtained from the conventional XRD measurements.

Conclusions

In this work, both the thermal and the mechanical stability of retained austenite in TRIP steels were investigated by in situ X-ray measurements and thermo-magnetization measurements. The main conclusions can be drawn as follows. (1) Both the thermal and the mechanical stability are mainly attributed to the fluctuations of the carbon concentration among different austenite grains. The austenite grains with a low carbon concentration transform more readily than grains with a higher carbon concentration. (2) The thermo-magnetization measurements show that almost all austenite transforms to martensite upon cooling to 5 K; the M_S and M_f temperatures are determined to be 355 and 115 K, respectively.
The transformation kinetics, i.e. the fraction-versus-temperature relation, were found to be well described by a model based on thermodynamics. (3) From the in situ X-ray measurements, it is found that the volume fraction of retained austenite decreases with increasing strain according to the Ludwigson and Berger relation, and that the mechanical stability, characterized by the k-value, is strongly orientation-dependent.
2019-04-28T13:07:47.960Z
2002-12-15T00:00:00.000
{ "year": 2002, "sha1": "b94f009c6f4ce1cd35d5b4ce543ea31ddbb1e033", "oa_license": null, "oa_url": "https://www.jstage.jst.go.jp/article/isijinternational1989/42/12/42_12_1565/_pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "6fafd29a8dcc6bd93eec4e378581aedf501483e4", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [ "Materials Science" ] }
119184466
pes2o/s2orc
v3-fos-license
A Constraint on the Organization of the Galactic Center Magnetic Field Using Faraday Rotation

We present new 6 and 20 cm Very Large Array (VLA) observations of polarized continuum emission over roughly 0.5 square degrees of the Galactic center (GC) region. The 6 cm observations detect diffuse linearly-polarized emission throughout the region with a brightness of roughly 1 mJy per 15″×10″ beam. The Faraday rotation measure (RM) toward this polarized emission has structure on degree size scales and ranges from roughly +330 rad m⁻² east of the dynamical center (Sgr A) to −880 rad m⁻² west of the dynamical center. This RM structure is also seen toward several nonthermal radio filaments, which implies that they have a similar magnetic field orientation and constrains models for their origin. Modeling shows that the RM and its change with Galactic longitude are best explained by the high electron density and strong magnetic field of the GC region. Considering the emissivity of the GC plasma shows that while the absolute RM values are indirect measures of the GC magnetic field, the RM longitude structure directly traces the magnetic field in the central kiloparsec of the Galaxy. Combining this result with previous work reveals a larger RM structure covering the central ~2 degrees of the Galaxy. This RM structure is similar to that proposed by Novak and coworkers, but is shifted roughly 50 pc west of the dynamical center of the Galaxy. If this RM structure originates in the GC region, it shows that the GC magnetic field is organized on ~300 pc size scales. The pattern is consistent with a predominantly poloidal field geometry, pointing from south to north, that is perturbed by the motion of gas in the Galactic disk.

INTRODUCTION

Diverse observations of the center of the Milky Way have found evidence for magnetic field strengths from 10-1000 µG with both poloidal and planar geometries. Zeeman splitting of H I absorption lines shows that mG-strength magnetic fields exist in the central 2 pc (Plante et al. 1995). On the basis of their submillimeter polarimetric maps of the Galactic center region, Chuss et al. (2003) argue that molecular clouds in the central 100 pc have mG-strength fields oriented parallel to the plane of the Galaxy. At the same time, the detection of diffuse, polarized radio continuum emission implies that the central few hundred parsecs is permeated by a magnetic field of strength 10 to 100 µG (Haynes et al. 1992). The most striking polarized structures in the GC region were discovered in radio continuum images: the nonthermal radio filaments (NRFs; Yusef-Zadeh et al. 1984). NRFs are long (several parsecs), polarized filaments found only in the central few degrees of our Galaxy and believed to be physically located in the central few hundred parsecs (LaRosa et al. 2001; Nord et al. 2004; Yusef-Zadeh et al. 2004; Lasenby et al. 1989). Their tendency to align perpendicular to the Galactic plane shows that the GC region has some poloidal (i.e., vertical) magnetic field component with a local strength of roughly 1 mG (Yusef-Zadeh et al. 1997). This vertical structure is interesting in light of infrared polarimetry showing that the magnetic field tends to have a toroidal configuration in the plane, becoming poloidal at elevations above 0.°4 (Nishiyama et al. 2010). Despite the wealth of observations, it has been difficult to merge these observations into a coherent model for the Galactic center (GC) magnetic field. How are the magnetic fields measured globally and locally related to each other (LaRosa et al.
2005; Ferrière 2009; Crocker et al. 2010)? What physical processes create the NRFs and determine their orientations? Is the current state of the GC normal, or does it represent a short phase of its evolution (Morris & Serabyn 1996; Law 2010)? Furthermore, the complexity of the range of polarimetric observations (some measure the line-of-sight field, some measure the total field) argues for simulations to aid interpretation. To understand the structure and strength of the GC magnetic field, we present new observations and modeling of the polarized continuum emission toward the GC region with the VLA. The observations were originally conducted in a study of the GC Lobe, a degree-tall, loop-like structure spanning the central degree of the GC region (Sofue & Handa 1984; Bland-Hawthorn & Cohen 2003; Law 2010). In § 2, the observations are described; § 3 discusses some of the techniques used to analyze the polarized emission. This survey is the largest-area interferometric survey of diffuse polarized emission ever done in the GC region. Section 4 describes the detection of extended polarized emission throughout the region and the large-scale rotation measure (RM) structure seen toward it. Section 5 uses the observed RM to constrain a simple model of the Galaxy's electron density and magnetic field. Modeling of the emission and Faraday rotation argues that the GC magnetic field geometry is predominantly poloidal with a perturbation by the motion of gas in the disk of the Galaxy.

OBSERVATIONS AND DATA REDUCTION

Between January and August of 2004, we surveyed the GC region with the VLA at 6 cm in the DnC configuration and at 20 cm in the CnB, DnC, and D configurations. The goal of the observations was to create wide-field mosaics of the GC Lobe, as described in Law et al. (2008a). That paper presents catalogs of discrete polarized and unpolarized sources in the survey; the extended polarized emission, particularly at 6 cm, is discussed here. The 6 cm observations covered roughly half a square degree from l = 359.°2 to 0.°2, b = 0.°2 to 0.°7. Critically for the present work, the default continuum mode observed with two adjacent 50 MHz bands centered at 4.835 and 4.885 GHz. Observations of J1751-253 were used for phase calibration, while J1331+305 (3C 286) was used for flux calibration. Observations of the unpolarized phase calibrator covered a parallactic angle range of 80°. This is wide enough to measure receiver "leakage", the detection of left-circular polarization by the right-circular receiver and vice versa (Cotton 1999). After applying the leakage corrections to the scan of 1331+305, the phase delay between the left and right polarizations was set to produce the known polarization angle of 66°. Images were produced with AIPS using both the multi-resolution and the standard CLEAN algorithms. The resulting mosaics and derived properties were similar within their errors. The final mosaics presented here were deconvolved with a multi-resolution CLEAN algorithm. The Stokes Q and U images were cleaned independently with resolutions of 1, 3, and 9 times the beam size to produce a single image per Stokes parameter. The entire primary beam was cleaned until the maximum residual brightness was less than the noise level outside the primary beam. The same number of iterations was used to clean both bands. Images were restored with a single beam of size 15″×10″ with PA = 70°, which is representative of the whole mosaic. The Stokes Q and U images were subsequently primary-beam corrected and combined to form mosaics for each band. Figure 1 shows the polarized intensity mosaic after averaging over both bands. The polarized intensity is visible on scales of a few arcminutes because it is laced with depolarized "canals" (Wieringa et al. 1993; Yusef-Zadeh et al. 1986; Haverkorn et al. 2004). Figure 2 shows an example of canals in the polarized emission in the eastern half of the survey. To correct for noise bias in maps of polarized emission, a noise mosaic was constructed by quadratically adding noise images from each field. A noise image was created for each field by applying the primary beam correction to an image with each pixel value set to the image noise. The noise level for each field was measured outside the primary beam, since the image centers are filled with emission. Observed noise values range from 50 to 120 µJy. The observed noise is a factor of 2-4 times higher than the theoretical sensitivity, which is consistent with the expected additional noise from sidelobes and calibration errors.

[Fig. 1: Mosaic of 6 cm polarized intensity observed in 42 VLA pointings, with contours of 6 cm total intensity from the GBT, in Galactic coordinates (Law et al. 2008b). The beam size is 15″×10″ with a position angle of 70°. Gray scale shows VLA 6 cm polarized intensity from 0 to 2 mJy beam⁻¹, as indicated on the colorbar in units of Jy. Contours show GBT 6 cm brightness at 33×3ⁿ mJy per 150″ circular beam, for n = 0-4. Most of the polarized emission seen is significant. The polarization angle changes by 90° across the canals, indicating depolarization within the beam by small-scale changes in the Faraday-rotating medium (Haverkorn et al. 2004).]

Finally, mosaics of polarized intensity,
The Stokes Q and U images were subsequently primary-beam corrected and combined to form mosaics for each band. Figure 1 shows the polarized intensity mosaic after averaging over both bands. The polarized intensity is visible on scales of a few arcminutes because it is laced with depolarized "canals" (Wieringa et al. 1993; Yusef-Zadeh et al. 1986; Haverkorn et al. 2004). Figure 2 shows an example of canals in the polarized emission in the eastern half of the survey. To correct for noise bias in maps of polarized emission, a noise mosaic was constructed by quadratically adding noise images from each field. A noise image was created for each field by applying the primary beam correction to an image with each pixel value set to the image noise. The noise level for each field was measured outside the primary beam, since the image centers are filled with emission. Observed noise values range from 50 to 120 µJy. The observed noise is a factor of 2-4 times higher than the theoretical sensitivity, which is consistent with the expected additional noise from sidelobes and calibration errors. Finally, mosaics of polarized intensity, position angle, and their associated errors were created for each band.

Fig. 1. Mosaic of 6 cm polarized intensity observed in 42 VLA pointings with contours of 6 cm total intensity from the GBT, in Galactic coordinates (Law et al. 2008b). The beam size is 15″×10″ with a position angle of 70°. Gray scale shows VLA 6 cm polarized intensity from 0 to 2 mJy beam⁻¹ as indicated on the colorbar, in units of Jy. Contours show GBT 6 cm brightness at 33 × 3ⁿ mJy per 150″ circular beam, for n = 0-4. Most of the polarized emission seen is significant. The polarization angle changes by 90° across the canals, indicating depolarization within the beam by small-scale changes in the Faraday-rotating medium (Haverkorn et al. 2004).

Leakage calibration is most valid at the phase center, as errors are known to increase away from there. The VLA position-dependent leakages are antenna-based and induce a false linear polarization that is radially oriented (Cotton 1994, 1999). The magnitude of the false polarization is roughly 3% of Stokes I at the full-width at half-max (FWHM) of the primary beam at 1.4 GHz. It has not been measured at other frequencies, but we use 3% as a rough estimate of the errors at 5 GHz. Since this error is antenna-based, our observations covering a parallactic angle range of 80° reduce the effect by about 30%. More importantly, the diffuse polarized emission discussed in this work has no total intensity counterpart, so errors in Stokes Q and U scale with Stokes Q and U, instead of Stokes I (Sault et al. 1996). Furthermore, measurements of RM are not biased by the leakage itself, but by the change in leakage with frequency, which tends to be smaller. Considering all these effects, we expect position-dependent leakage errors to be less than 3% of Stokes Q and U, less than the typical flux calibration errors and unlikely to affect the results presented here. Section 3.2 compares our results to previous GC polarimetry observations and generally confirms this assumption. Finally, it is important to consider the fact that interferometric observations are not sensitive to emission on large angular scales (Haverkorn et al. 2004; Schnitzeler et al. 2009). Missing Q and U flux can create spurious polarization and bias RM values. Haverkorn et al. (2004) show that a wide distribution of RM randomizes any uniform polarized background and reduces the missing flux.
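The noise-bias correction mentioned above is not spelled out in the text; the sketch below shows the standard first-order debiasing estimator (often attributed to Wardle & Kronberg 1974) that such a noise mosaic enables. This is a minimal illustration under that assumption, not necessarily the exact estimator the survey used.

```python
import numpy as np

def debiased_polarized_intensity(Q, U, sigma):
    """First-order noise-bias correction for polarized intensity.

    Q, U  : Stokes Q and U mosaics (same units, e.g. Jy/beam)
    sigma : per-pixel noise estimate (the "noise mosaic" in the text)

    P_obs = sqrt(Q^2 + U^2) is biased upward because noise in Q and U
    always adds positively; subtracting sigma^2 in quadrature is the
    standard first-order fix.
    """
    p2 = Q**2 + U**2 - sigma**2
    return np.sqrt(np.clip(p2, 0.0, None))  # clip negatives from pure-noise pixels

# toy usage: a 100x100 noise-only field with sigma = 80 uJy/beam
rng = np.random.default_rng(0)
sigma = 80e-6
Q = rng.normal(0, sigma, (100, 100))
U = rng.normal(0, sigma, (100, 100))
P = debiased_polarized_intensity(Q, U, sigma)
print(P.mean())  # much closer to 0 than sqrt(Q^2 + U^2).mean() ~ 1.25 * sigma
```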
The RM distribution observed here (described in detail in § 3.1) has a width of about 500 rad m⁻² on the size scales used in this study (>100″), which limits the missing flux to less than 0.2%. As an alternative derivation of missing flux, Schnitzeler et al. (2009) show that a gradient in RM can shift the spatial scale at which polarized emission is visible. For the RM gradient seen here (≈ 5 rad m⁻² arcsec⁻¹), that technique predicts a shift in spatial scales from zero to about 1200 λ, larger than our shortest baseline. Both techniques indicate that an insignificant amount of polarized flux is missed by the present observations.

Polarization Angle Difference Across Bands

To study the RM across this field, mosaics of the polarization angle were differenced between the two bands. The polarization angle difference image (hereafter Δθ image) was created by differencing the polarization angle images (θ_4.885GHz − θ_4.835GHz) and remapping each value of Δθ to the range −90° to 90°. Figure 3 shows the Δθ image and its error. Observationally, the rotation measure is defined as RM = Δθ/Δ(λ²). Assuming this λ² law, a position angle difference of 1° corresponds to a rotation measure of −220 rad m⁻². More generally, the observed polarization is the sum of polarized emission emitted with a range of RM (Burn 1966; Brentjens & de Bruyn 2005). Such complex sources can have non-quadratic changes in the polarization angle that can confuse a simple analysis. In these situations, the pertinent physical quantity is the "Faraday depth":

φ = 0.81 ∫ n_e B_∥ dl rad m⁻²,

where n_e is in cm⁻³, B is in µG, and dl is in pc. For simple physical distributions of n_e and B, φ is equal to RM. However, robustly tying the RM to physical conditions in complex cases requires measurements at many wavelengths (Brentjens & de Bruyn 2005). Since we only have two wavelengths to study the polarization in this region, we instead use this formalism to define the limits of deriving physical conditions from the observed RM. First, the formalism of Brentjens & de Bruyn (2005) shows that the spacing of the bands in wavelength determines the "RM resolution" and possible nπ ambiguities. For the two bands used here, the RM resolution is 4×10⁴ rad m⁻² and any aliasing occurs at RM = n × 4×10⁴ rad m⁻², for an integer n. The RM expected in the GC region covered by this survey is typically < 2000 rad m⁻² (Yusef-Zadeh et al. 1984; Tsuboi et al. 1986; Roy et al. 2005), so there is little chance of an nπ ambiguity. Second, the bandwidth determines the amount of Faraday rotation within a band, which limits the maximum detectable Faraday depth to φ < 2×10⁴ rad m⁻². Finally, a source that emits over a range of Faraday depths, known as "Faraday thick", can be internally depolarized. The maximum Faraday thickness detectable by the present observations is 830 rad m⁻². Some sources, such as the Radio Arc (Yusef-Zadeh & Morris 1987, 1988), have an RM that changes by more than 830 rad m⁻², so parts of the GC region may be Faraday thick to our observations. Figure 4 shows histograms of Δθ (number of independent spatial beams per degree of Δθ) from the entire survey and a smaller region. Intrinsically, we expect the Δθ histogram to have contributions from many distinct regions of varying peak Δθ and width.
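As a check on the quoted conversion, the Δθ-to-RM factor follows directly from the two band centers. A minimal sketch, assuming only the λ² law and the 4.835/4.885 GHz band centers given above:

```python
import numpy as np

C = 299792458.0  # speed of light, m/s

def rm_per_degree(nu1_hz, nu2_hz):
    """rad m^-2 of RM per degree of measured angle difference
    dtheta = theta(nu2) - theta(nu1), under the lambda^2 law
    theta = theta_0 + RM * lambda^2."""
    dlam2 = (C / nu2_hz) ** 2 - (C / nu1_hz) ** 2  # delta(lambda^2), m^2
    return np.deg2rad(1.0) / dlam2

print(rm_per_degree(4.835e9, 4.885e9))  # ~ -223 rad/m^2 per degree of dtheta
```

The result reproduces the −220 rad m⁻² per degree stated in the text; the sign reflects that the higher-frequency band has the smaller λ².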
We found that a Lorentzian profile fits these heterogeneous distributions better than a single Gaussian. For smaller regions, where Δθ has a single-valued, noise-like distribution, the Lorentzian can also approximate a single Gaussian. In the limit of a single-valued, noise-like Δθ distribution, the Gaussian noise is equivalent to a Poisson distribution in the large-N limit. We use this similarity to approximate the Δθ bin count errors as σ_N = 1 + √(N + 0.75), where N is the number of independent beams in a bin (Gehrels 1986). In § 3.2, comparing the RM measured with the histogram method to previous work shows that the errors are conservative.

Fig. 4. Top: histogram of Δθ for the entire survey. The green dashed line shows the best-fit Lorentzian model, which has a peak of 2200 beams per degree, a half-width of about 10°, and a constant background of about 13 beams per degree of Δθ. Bottom: a similar plot as shown at top, but for a 125″×125″ box in the eastern half of the survey. This histogram shows the amount of data in each pixel of the smoothed maps shown below.

Aside from theoretical expectations, the distribution of Δθ values shows that the apparent Δθ can generally be reliably converted to RM. Most values of Δθ measured have small offsets from 0°, with ∼50% within ±10°, ∼75% within ±20°, and ∼90% within ±45°. This is consistent with the ∼9° rotation expected for RM ≈ ±2000 rad m⁻². The typical angle change is ≪ 1 rad, so relative angles may be treated as roughly linearly distributed. The best constraint on the mean Δθ has a typical error of about 1°, or 220 rad m⁻². For a change of 1° between our two bands, the Faraday rotation from λ = 0 cm is 49°. This typical uncertainty in Δθ makes the calculation of the intrinsic polarization angle highly uncertain, so no such results are presented here. Visualizing the images of Δθ and RM is difficult, since the per-pixel sensitivity is poor and varies across the field of view. Convolution and other image processing techniques can be used to extract this information even in poorly-calibrated VLA data (Rudnick & Brown 2009). We tested two statistical techniques to spatially smooth the RM: averaging and histogram fitting. These methods and a comparison of their results are described in Appendix A. In general, the two methods give similar results. The histogram-fitting method is less sensitive to outliers and has more conservative errors, so it is used in all results described below.

Comparison to Earlier Work

Since this work applies a relatively new technique to a complex region, it is important to test the results against known sources. This section compares our results shown in Figures 1 and 3 to the RM for specific regions studied previously (LaRosa et al. 2001; Tsuboi et al. 1986; Haynes et al. 1992; Roy et al. 2005; Yusef-Zadeh et al. 1997). LaRosa et al. (2001) present images of 6 cm polarized intensity near the nonthermal radio filament G359.85+0.39 from VLA data with similar sensitivity and resolution as the present study. The two surveys have similar brightness distributions and structure in the polarized emission, particularly the depolarized regions on the southeast and northeast sides of G359.85+0.39 (see also Law et al. 2008a). The similarity shows that the calibration and imaging quality is similar to that of LaRosa et al. (2001). Haynes et al. (1992) and Tsuboi et al. (1986) conducted independent, single-dish surveys near 3 cm, covering a few square degrees of the GC region.
Although depolarization is weaker near 3 cm and their beam is larger, there is general agreement between our Figure 1 and their polarized intensity maps. Figure 5 shows a comparison of our RMs with the four-band measurements of Tsuboi et al. (1986). Near (0°.17, 0°.22) and (0°.1, 0°.35), Tsuboi et al. (1986) find the RM has a maximum of +1000 rad m⁻², while the present survey finds a maximum of 770 ± 110 rad m⁻². The maps are similar moving north across (0°.15, 0°.4), where the RM switches from positive to negative values; Tsuboi et al. (1986) measure RM ≈ −250 rad m⁻², while the present survey finds −220 ± 130 rad m⁻². There is some agreement at the northwestern edge of the polarized emission of the Radio Arc, shown in Figure 5, where the RM switches back to positive values. The exact location of this second RM sign change is slightly different and may reflect the different RM depths to which each survey is sensitive. Yusef-Zadeh et al. (1997) presented a detailed study of the polarization properties of the nonthermal filament G359.54+0.18 (RF-C3) at 6 and 3.6 cm. Figure 3 of that work has a similar 6 cm brightness and RM distribution as the present work, both presented here and in Law et al. (2008a). The RM map of the filament shows three distinct, bright clumps, each having relatively uniform values. The morphology seen in the present survey is similar to that of Yusef-Zadeh et al. (1997), although that work had roughly three times better resolution (4″ compared to 12″ in the present work). The first clump, at RA, Dec (B1950) = (17:40:41, −29:12:30), has RM ≈ −2700 rad m⁻², compared to −3960 ± 1100 rad m⁻² in the present survey. The second clump, at (17:40:43, −29:12:40), has RM ≈ −2000 rad m⁻², compared to −2200 ± 440 rad m⁻² in the present survey. The third clump, at (17:40:44, −29:12:45), has RM ≈ −1500 rad m⁻², compared to −1540 ± 660 rad m⁻² in the present survey. We conclude that, in general, there is good agreement between the RM of the present survey and that of Yusef-Zadeh et al. (1997). In summary, the polarized intensity and RM of the present 6 cm survey show good agreement with those of other surveys. This is consistent with the fact that the polarimetric leakage is expected to have relatively little frequency structure for the VLA feed design (Cotton 1994, 1999); any systematic errors in the polarization angle are subtracted when forming the Δθ image. It also shows that histogram fitting of the Δθ values is a reasonable estimate of the RM and its uncertainty at 6 cm in this region.

Extended Polarized Emission

The 6 cm polarized continuum intensity of the northern extension of the Radio Arc is several mJy beam⁻¹ and spans the entire eastern edge of the survey up to a latitude of b ∼ 0°.8. To test for frequency structure in the polarized intensity, the polarized intensity maps in the two bands were differenced. The lack of diffuse emission in the difference map shows that the two maps have similar diffuse emission to within roughly 1 mJy. The comparable 20 cm mosaic of polarized continuum shows no extended emission down to a level of about 0.1 mJy beam⁻¹ (more detail in Law et al. 2008a). For latitudes up to b = 0°.3, the polarized continuum emission seen in the 6 cm interferometric maps (Fig. 1) has a total intensity counterpart in the same data. However, north of b = 0°.3, the total intensity counterpart is too extended to be detected by the VLA 6 cm observations.
Since the polarized emission is broken into small spatial scales (as shown in Figure 2), it is detected throughout the region and the apparent polarization fraction often exceeds 100%. To estimate the polarization fraction without the effect of missing flux, we compare the VLA polarized-intensity maps to continuum maps from the Green Bank Telescope (Law et al. 2008b). We convolve the VLA maps to the GBT resolution to estimate the polarization fraction; this will be a lower limit, since the VLA emission is laced with depolarized canals. At 6 cm, the peak polarization fraction is 25% in the eastern half of the survey and 10% in the western half of the survey. These values are consistent with other single-dish surveys (Tsuboi et al. 1986; Haynes et al. 1992), which confirms the validity of the techniques and maps of the VLA survey. At 20 cm, the upper limit on the polarization fraction is roughly 1% of the total intensity measured by the GBT. Figure 6 shows two maps of RM smoothed over 125-arcsec tiles with the histogram-fitting method. The images show that there is coherent structure on degree size scales. The east side of the survey tends to have RM greater than zero and the west side less than zero.

Localized Features in the RM Image

There are three arcminute-scale RM features that deviate from the simple structure described above. One of the regions with the largest positive RM is at the southern border of the survey, near (−0°.1, 0°.2). Figure 8 shows that this region of large RM covers an area about 8′ across, just north of Sgr A. The average RM for this region is 1188 ± 198 rad m⁻². The region with the most negative RM is at (−0°.6, +0°.5), on the right side of Figure 6. Over an area about 8′ across, the mean RM is −1320 ± 110 rad m⁻². Figure 7 shows that the mean RM at l ∼ −0°.6 is ≈ −880 ± 110 rad m⁻². This feature may partially explain why the amplitude of the east-west RM asymmetry is larger when averaging over the top half of the survey. A third unusual RM structure is a ridge extending from (0°.25, +0°.4) to (0°.0, +0°.5), seen at the left of Figure 6. The feature has a negative RM, but the surrounding region has a positive RM. The average RM along this ridge is ≈ −220 ± 110 rad m⁻², as compared to the mean value of ≈ 330 ± 60 rad m⁻² for all latitudes near l = 0°. This structure is seen in the RM map of Tsuboi et al. (1986), and a detailed comparison of that work to the present work is shown in Figure 5.

Modeling the Rotation Measure

The complexity of the line of sight to, and through, the GC region argues for caution when interpreting RM patterns. The apparent 6 cm RM suggests that the line-of-sight magnetic field changes sign, as if the field were predominantly azimuthal. However, it is not immediately clear whether the observed polarized emission originates in the GC region or whether the observed RM can be used to measure properties of the GC magnetic field. This section addresses these issues with modeling of the Galactic electron density and magnetic field. We use the observed 6 cm RM longitude dependence to constrain parameters of the model and ultimately derive the expected polarimetric properties of the region over a range of wavelengths.

Galactic Model

The RM is calculated from a model of the electron density, n_e, and the Galactic horizontal magnetic field, B.
We use a Cartesian Galactic coordinate system with the origin at l = 0° at a distance of r_GC from the Sun, the x axis pointing towards negative l, the y axis pointing away from the Sun, and the z axis pointing to positive b. This technique does not calculate the emissivity of the synchrotron radiation, so we effectively assume that the polarized emission originates behind the Faraday-rotating medium on the xz-plane. The integral is done along a line to the GC distance, so the polarized emission is assumed to be at the peak of the electron distribution. For this model, the Faraday depth along a given line of sight is

φ = φ0 + 0.81 ∫ n_e(r) B_∥(r) dr,   (2)

where φ0 is the foreground Faraday depth, r is the distance from the Sun along the line of sight, and w is the horizontal FWHM of the Galactic Center electron density enhancement (Cordes & Lazio 2002). As described below, the Faraday rotation induced by the GC region dominates, so the limits of integration include only a path of length 5w on the front side of the GC. In § 5.1.3, we relax this assumption and consider the emissivity of the plasma. The model electron density is

n_e(x, y, z) = n_d + n_0 exp(−4 ln 2 [((x − x0)² + (y − y0)²)/w² + (z − z0)²/h²]),

where n_d represents the electron density of the disc, excluding the Gaussian enhancement in the Galactic center. Here n_d is assumed constant throughout the volume. The central enhancement is described by a three-dimensional Gaussian function, where x0, y0, and z0 are its offset, h is its vertical FWHM, and n0 is its maximum density. Since the model is fit to RM measurements, the magnetic field model describes only the horizontal component. The field points counter-clockwise as seen looking down on the plane, with the pitch angle p pointing slightly outward for positive p. The horizontal field strength is assumed to be a constant, b0.

Model Fit

The 11 model parameters must be estimated carefully: there are only 39 longitude bins over a narrow l, b range and only 2 frequency channels. To simplify the procedure, we only solve for four parameters: foreground RM, x-offset, magnetic field strength, and pitch angle. The parameters r_GC, w, h, n0, y0, and z0 have default values taken from NE2001 (Cordes & Lazio 2002), while the disc electron density, n_d, is the sum of the NE2001 thin disc and the Gaensler et al. (2008) thick disc, evaluated at the Galactic center. These default parameter values, shown in Table 1, are consistent with other observations (Spangler 1991; LaRosa et al. 2005; Lazio & Cordes 1998; Han et al. 2006). Figure 9 shows a top-down view of the spatial distribution of the electron density and magnetic field. The shaded region demonstrates that the observed area is rather small compared to the model structure. This shows that the observations are limited to the center of the electron distribution. We determine the best-fit parameter values and uncertainties with a three-step Monte-Carlo method. First, we randomly generate a list of 4000 mock observations. The RM value at each l is drawn from a Gaussian distribution with mean and standard deviation equal to the RM and its error measured over the entire latitude range of the survey (as shown in Figure 7). Second, we fit Equation 2 for b = +0°.5 to each mock data set by minimizing the reduced χ² using the nonlinear, constrained L-BFGS-B solver (Zhu et al. 1994). Third, the mean and standard deviation of each parameter is measured from the ensemble of fit results. The parameters and fit results for the longitude dependence of the 6 cm RM are summarized in Table 1.
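The sketch below illustrates the model and fitting procedure just described: a constant disc plus 3D Gaussian electron density, a constant-strength horizontal field with pitch angle circulating about the offset center, integration of Equation 2 over a path of 5w on the near side of the GC, and a Monte-Carlo ensemble of L-BFGS-B fits. The numerical constants and the handedness of the field convention are placeholders for illustration, not the paper's Table 1 values.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder geometry (pc, cm^-3); the paper's Table 1 has the real values.
R_GC, W, H = 8500.0, 150.0, 75.0   # Sun-GC distance, horiz./vert. FWHM (assumed)
N0, ND = 10.0, 0.3                 # GC peak and disc electron densities (assumed)
B_LAT = np.deg2rad(0.5)            # latitude of the fitted RM strip

def faraday_depth(l_deg, phi0, x0, b0, pitch, n=400):
    """phi = phi0 + 0.81 * int n_e B_par dr [rad m^-2]; n_e in cm^-3,
    B in uG, dr in pc, integrated over 5w on the near side of the GC.
    Small-latitude approximation: the field is purely horizontal, so only
    the horizontal part of the line of sight contributes to B_par."""
    l = np.deg2rad(l_deg)
    r = np.linspace(R_GC - 5 * W, R_GC, n)        # pc along the sight line
    x, y, z = -r * np.sin(l), r * np.cos(l), r * np.sin(B_LAT)
    a = 4 * np.log(2)                             # Gaussian FWHM factor
    ne = ND + N0 * np.exp(-a * ((x - x0)**2 + (y - R_GC)**2) / W**2
                          - a * z**2 / H**2)
    dx, dy = x - x0, y - R_GC                     # offsets from the field center
    rho = np.hypot(dx, dy) + 1e-9
    e_phi = np.stack([-dy, dx]) / rho             # azimuthal unit vector (assumed handedness)
    e_r = np.stack([dx, dy]) / rho                # radial unit vector
    bvec = b0 * (np.cos(pitch) * e_phi + np.sin(pitch) * e_r)
    los = np.array([-np.sin(l), np.cos(l)])       # horizontal line-of-sight direction
    b_par = los @ bvec
    return phi0 + 0.81 * np.trapz(ne * b_par, r)

def chi2(params, l_obs, rm_obs, rm_err):
    model = np.array([faraday_depth(l, *params) for l in l_obs])
    return np.sum(((model - rm_obs) / rm_err) ** 2)

def mc_fit(l_obs, rm_obs, rm_err, n_mock=100):
    """Monte-Carlo fit: perturb the data by its errors, refit with
    L-BFGS-B, and summarize the ensemble of best-fit parameters."""
    rng = np.random.default_rng(2)
    fits = []
    for _ in range(n_mock):
        mock = rng.normal(rm_obs, rm_err)
        res = minimize(chi2, x0=[0.0, -50.0, 5.0, 0.1],
                       args=(l_obs, mock, rm_err), method="L-BFGS-B")
        fits.append(res.x)
    fits = np.array(fits)
    return fits.mean(axis=0), fits.std(axis=0)  # parameter means and 1-sigma spreads
```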
Figure 10 shows the 68% and 95% confidence levels for all pairs of variables (in that figure, the 68% confidence level contour is black and the 95% confidence level contour gray). The foreground RM is orthogonal to all other parameters, while the remaining three show mild degeneracies. The 6 cm RM predicted by the best-fit model is compared to the observed RM in Figure 11. Since the geometry of the magnetic field is not well known in the GC region, we compare the observations to the best-fit model with and without a pitch angle parameter. For the model with the pitch angle (shown in Table 1), the reduced χ² is 1.6 with 34 degrees of freedom. The model with no pitch angle parameter is slightly worse, particularly at l < −0°.6, with a reduced χ² of 2.2 and 35 degrees of freedom. A pitch angle p of 0 is formally ruled out at the 3σ level, although, as discussed in § 5.1.3, considering the emissivity of the model will change some details of the best-fit parameters. The quality of the fit shows that a realistic GC electron distribution and magnetic field geometry can explain the observed RM longitude pattern. In particular, the rapid change of RM with Galactic longitude is best explained by the rapidly changing magnetic field orientation within the central few hundred parsecs. This is consistent with our assumption that most of the polarized emission originates in the center and that the Faraday rotation happens on the near side of the GC. Not only can the model fit the observed RM pattern, but the best-fit values are consistent with other observations. The magnetic field strength and electron density are degenerate in the model, but we constrain the horizontal component of the magnetic field to be roughly 5 (10 cm⁻³/n_e^gc) µG at heights of about 0°.5. Considering the predominantly vertical orientation of the magnetic field (Nishiyama et al. 2010), the implied total field strength is consistent with that measured previously (Haynes et al. 1992; Ferrière 2009; Crocker et al. 2010). The foreground RM is similar in magnitude to that observed toward the pulsars in the inner Galaxy, which have RM up to ∼200 rad m⁻² (Manchester et al. 2005; Han et al. 2006). We compare this model to other measurements of the foreground RM in § 5.3. The physical consistency of the model with the GC region argues for a GC origin of the polarized emission. The shape of the RM distribution requires a value of n_e · B_∥ that exists only in the GC region (Lazio & Cordes 1998). Also, the typical n_e and B_∥ in the Galactic plane do not change enough with longitude to cause the observed RM pattern.

Faraday Depth Estimate

While the model is based on the observed 6 cm RM, reasonable assumptions about the synchrotron emission in our volume allow us to predict RM and depolarization at all frequencies and positions. These assumptions are not parameterized in our initial model fit, so its predictions can comment on the reasonableness of our model. Below we show how the model predicts a Faraday depth distribution and how it affects the interpretation of the observed polarization at 6 cm. The following treatment of synchrotron radiation closely follows that of Sun et al. (2008). The synchrotron intensity of a slice with thickness dr is proportional to the relativistic electron density, n_rel, and a power of the perpendicular magnetic field strength. We assume that n_rel is constant in the inner 3 kpc of the Milky Way, that the intrinsic fractional polarization of each volume element is constant in frequency and space, and that the shape of the synchrotron spectrum is the same in all volume elements.
We therefore drop the frequency dependence and relativistic electron density from our simulation of the polarized intensity, which is then set by f, the fractional polarization, and the same power of the perpendicular field strength. If the magnetic energy density is equal to the gas energy density, then, according to Murgia et al. (2004), the field strength scales with the square root of the gas density. Because the vertical field is likely an order of magnitude stronger than the horizontal field (Crocker et al. 2010), we assume for the sake of simplicity that B_⊥ ≈ B, and therefore the polarized emissivity of each volume element follows directly from the model electron density. We can now combine the polarized emissivity from Equation 9 with the Faraday depth and Faraday thickness to create the Faraday dispersion function F(φ). The dispersion function is the polarized flux as a function of Faraday depth, which can be Fourier transformed into the complex fractional polarization as a function of λ² (Brentjens & de Bruyn 2005). We assume that the intrinsic polarization angle is the same throughout the GC region, which is reasonable given the large-scale organization reported elsewhere (Nishiyama et al. 2010). The integration was done from r_GC − 5w to r_GC + 5w to predict the emission across the entire GC region. All lines of sight are normalized to have a polarization fraction of 70% at λ² = 0. Figure 12 shows several predictions of the model given the assumptions described above. Since the initial model was fit assuming emission at the GC distance and RM induced only in the foreground of the GC (i.e., Equation 2), we expect this model to be more Faraday thick. Indeed, while the underlying model is smooth and simple, the RM and polarization fraction are highly structured as a function of latitude and frequency. It is also worth noting that the predictions of the model are idealized in the sense that they do not account for beam depolarization or the finite bandwidth of the observations. As such, they likely overpredict the polarization fraction and frequency-dependent changes in RM. The relation between Faraday depth and physical distance is displayed in the top left panel of Figure 12. The Faraday depth changes little far from the GC because the medium is tenuous. However, it also changes little in dense regions where the field direction reverses, such as in front of and behind the GC. As a result, a significant amount of flux will end up at the Faraday depths of the vertical sections of the r versus φ plot. The flux per unit φ is large in these regions, as shown in the Faraday dispersion plot in the bottom left panel. Because these caustic-like features occur whenever the line-of-sight field reverses in a synchrotron-emitting area (Ue-Li Pen 2010, private communication), RM synthesis of Faraday thick areas is much more sensitive to these reversals than to the bulk emission at large φ scales. If we think of the polarized emission as a complex Stokes vector that rotates according to its Faraday depth, we can imagine the effect on the observed RM. The peaks in F(φ) interfere to create complex Faraday effects as a function of λ². As shown in the top right panel, the RM as determined by observing only two nearby frequencies can vary widely, which argues for caution when interpreting our physical model. This shows that considering the emission from the entire GC region (not just the Faraday rotation by the foreground) makes the region Faraday thick. Despite the Faraday thickness, the model shows that the RM at 6 cm preserves the observed east-west gradient. Figure 13 compares our observed RM at 6 cm to the RM derived along many lines of sight through this model.
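To make the Faraday-thick behavior concrete, here is a minimal sketch of the Burn (1966)/Brentjens & de Bruyn (2005) relation used above, p(λ²) = Σ F(φ) e^{2iφλ²}, evaluated at the two survey bands. The uniform-slab line of sight is a toy assumption for illustration, not the paper's model.

```python
import numpy as np

def complex_polarization(phi_vals, flux_vals, lam2):
    """p(lambda^2) = sum_i F_i * exp(2j * phi_i * lambda^2)  (Burn 1966).
    phi_vals : Faraday depth of each emitting slice [rad m^-2]
    flux_vals: polarized flux of each slice (arbitrary units)."""
    return np.sum(flux_vals * np.exp(2j * np.outer(lam2, phi_vals)), axis=1)

def two_band_rm(phi_vals, flux_vals, nu1=4.835e9, nu2=4.885e9, c=2.998e8):
    """RM that a two-band observation would infer from this sight line."""
    lam2 = np.array([(c / nu1) ** 2, (c / nu2) ** 2])
    p = complex_polarization(phi_vals, flux_vals, lam2)
    dtheta = 0.5 * np.angle(p[1] / p[0])   # polarization angle = arg(p) / 2
    return dtheta / (lam2[1] - lam2[0])

# toy sight line: uniform emission spread over Faraday depths 0..800 rad/m^2
phi = np.linspace(0, 800, 200)
flux = np.ones_like(phi) / phi.size
print(two_band_rm(phi, flux))  # ~400, the flux-weighted mean depth here
```

For richer F(φ), with caustic-like peaks at field reversals, the two-band RM oscillates with λ², which is exactly the caution the text raises about narrow-band RM of Faraday-thick regions.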
This is similar to the RM derived from the model in Figure 11, which did not calculate the Faraday dispersion function, but instead assumed that the Faraday rotation occurred in the foreground. Figure 13 confirms that considering the emissivity produces a wide range of RM, but that there is a clear east-west gradient. The qualitative agreement between models with and without emissivity shows that the observed RM gradient is caused by the orientation of the magnetic field in the GC region. Figure 12 also shows the polarization fraction expected when considering the emissivity of the model. As we assume an intrinsic polarization fraction of 70%, the plot shows that nearly every line of sight will have significant depolarization. On average, the predicted polarization fraction is 20% at 4.8 GHz and 7% at 1.4 GHz. Since the simulation does not include beam depolarization and assumes a perfectly organized magnetic field, the predictions should overestimate the polarization fraction. We consider the predictions in agreement with the observed polarization fraction of 10 to 20% at 4.86 GHz and < 1% at 1.4 GHz. In summary, considering the emissivity of our best-fit model for the GC magnetized plasma shows that the observed RM structure has a complex connection to the actual physical properties. However, the RM trend with Galactic longitude is directly related to the magnetic field direction in the GC region. More detailed physical modeling will require polarimetry with hundreds of channels between 2 and 8 GHz, such as with the EVLA (Ulvestad et al. 2006).

Coincidence of RM for Extended and Filamentary Emission

Eight NRFs were detected in polarized emission at 6 cm, and seven of these have reliable RM measurements (Law et al. 2008a). Interestingly, all but one of these have RM consistent with their surrounding diffuse emission. In other words, the RMs toward the NRFs largely follow the longitude dependence found toward the diffuse emission. If the RMs toward the filaments were unrelated to that of the diffuse emission, a binomial probability distribution predicts a 5% chance of 6/7 coincidences. The similarity of the RMs in the diffuse and filamentary emission is consistent with observations of the brightest NRF, the Radio Arc. As shown in § 3.2 and elsewhere (Yusef-Zadeh & Morris 1988), the morphology and RM measured toward the Radio Arc have a continuous connection into the polarized diffuse emission in the east of this survey. Since the Radio Arc is known to be within the central 100 pc of the GC (Yusef-Zadeh & Morris 1987; Lang et al. 1999b; Lasenby et al. 1989), some of the diffuse polarized emission must also be located in the GC. The fact that the filaments and the diffuse polarized emission are physically near each other could explain their similar RMs. The simplest model to explain this coincidence is that the filaments and diffuse emission are behind the same Faraday screen. According to our modeling, such a screen would be located within the central kiloparsec. However, NRFs are known to have RM changes that coincide with physical changes, which argues that some RM is induced locally (Lang et al. 1999a). A locally induced RM is consistent with the strong magnetic fields inferred in NRFs (Yusef-Zadeh et al. 1997). If this RM coincidence is not extrinsic, then it must be intrinsic: the diffuse emission and the NRFs have similar magnetic field orientations. Physically, this can be explained if the NRFs are local enhancements and perturbations of the global magnetic field.
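The quoted 5% chance can be reproduced approximately with a simple binomial check; the per-filament chance-agreement probability p = 0.5 is our assumption for illustration, since the text does not state the value it used.

```python
from scipy.stats import binom

# If each filament RM were independent of the diffuse emission and agreed
# with it by chance with probability 0.5 (e.g., a random RM sign), the
# probability of 6 or more agreements out of 7 is:
p_chance = binom.sf(5, 7, 0.5)   # P(X >= 6) = P(X > 5)
print(p_chance)                  # 0.0625, i.e. ~6%, close to the quoted 5%
```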
This concept is common to several models for generating NRFs (e.g., Benford 1988; Serabyn & Morris 1994; Boldyrev & Yusef-Zadeh 2006). However, it excludes the model of Shore & LaRosa (1999), which relies on an interaction of molecular clouds with a global wind to generate the NRFs. Other models (e.g., Rosner & Bodo 1996) do not clearly predict how filaments can enhance and perturb the global magnetic field. The NRFs and the diffuse polarized emission also have a similar spatial distribution. As described earlier, the RM observed in the diffuse emission changes sign near l ≈ −0°.35. Other observations, described in § 5.3, show that most RM measurements in the GC region follow a pattern that is centered near l ≈ −0°.35. The high-resolution 20 cm survey of Yusef-Zadeh et al. (2004) showed that there are dozens of candidate NRFs in the GC region, with the highest density between l = 0°.2 and −0°.7. This shows that the center of the distribution of NRFs is similar to the center of the RM pattern. A third coincidence is that the GC lobe, a shell of gas related to a mass outflow, is centered near the same longitude. These coincidences indicate a connection between the GC lobe outflow and the GC magnetic field. Figure 14 shows a schematic of all RM measurements toward sources believed to be in or beyond the GC region. The RM values derived earlier in this work are shown along with observations of the extended polarized emission from the Radio Arc (Tsuboi et al. 1986), other NRFs (Lang et al. 1999a,b; Gray et al. 1995; Reich 2003), and background sources (Roy et al. 2005). North of the plane, the east-west gradient in RM is seen toward diffuse, compact, and filamentary sources. The pattern is antisymmetric about l ≈ −0°.35 and across the Galactic plane (esp. Tsuboi et al. 1986).

Degree-Scale Structure in the GC Magnetic Field

The simplest pattern to describe the GC RM values is that of a checkerboard (four quadrants of alternating sign) shifted 0°.35 (≈50 pc) west of the center. The possibility of a checkerboard pattern centered at l = 0° has been noted before (Uchida et al. 1985; Novak et al. 2003), but some RM values were not consistent with the pattern (Ferrière 2009). The RM reported in this work shows that the checkerboard pattern is robust if it is assumed to be shifted from the center. The checkerboard pattern inspired the "flux-dragging" model for the GC magnetic field (Uchida et al. 1985; Novak et al. 2003). The model explains a large-scale pattern in the GC RM as the effect of Galactic rotation on a frozen-in, poloidal (vertical) magnetic field. As the disk rotates, the magnetic field in the disk is dragged away from us on the east side and toward us on the west side. This pull creates a line-of-sight component of the magnetic field. This perturbation to the magnetic field has a checkerboard pattern in the sign of the RM, such that the RM will have opposite signs toward any two adjacent quadrants formed about the center of rotation. Interestingly, the parity of the checkerboard pattern constrains the orientation of the magnetic field, breaking the 180° ambiguity of observations of the polarization angle. If this scenario is valid, the parity of the observed pattern is consistent with the magnetic field pointing from south to north. This is the only known measurement that can break this ambiguity in the orientation of the magnetic field on the plane of the sky.

Fig. 14. RM measurements toward the Radio Arc (Tsuboi et al. 1986), G0.87-0.87 (Reich 2003), G359.1-0.2 (a.k.a. "the Snake"; Gray et al. 1995), G358.85+0.47 (a.k.a. "the Pelican"; Lang et al. 1999a), and new filaments described in Law et al. (2008a). Note that the two symbols furthest to the top-right represent the max and min RM measured toward the unusual "Pelican" NRF. The dashed crosses and circles show the RM measured toward extragalactic sources (Roy et al. 2005). The dashed horizontal and vertical lines split the region into quadrants that mostly have similar RM values.
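A toy parity check of the flux-dragging picture just described: a vertical field sheared by disk rotation acquires a line-of-sight component whose sign flips both across the rotation axis and across the plane. The overall sign convention below is an assumption; only the four-quadrant parity is the point, and flipping the field direction (bz) flips every quadrant, which is why the parity breaks the 180° ambiguity.

```python
import numpy as np

def dragged_rm_sign(lon_pc, lat_pc, bz=+1.0):
    """Toy flux-dragging parity: a frozen-in vertical field (bz > 0 for
    south->north) is tilted along the line of sight by disk rotation.
    The induced line-of-sight field, and hence the RM sign, flips across
    the rotation axis (lon) and across the plane (lat). The overall sign
    is a chosen convention matching the observed northeast-positive RM."""
    return np.sign(bz) * np.sign(lon_pc) * np.sign(lat_pc)

for lat in (+100.0, -100.0):
    print([dragged_rm_sign(lon, lat) for lon in (+100.0, -100.0)])
# [1.0, -1.0]    north of the plane: east +, west -
# [-1.0, 1.0]    south of the plane: reversed -> the four-quadrant checkerboard
```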
Roy et al. (2008) found that the flux-dragging scenario was inconsistent with the RM observed toward extragalactic sources seen through the central 12°. The RM toward their 60 background sources had an average RM = +413 rad m⁻² and no checkerboard pattern. While the RM values over the central 12° are predominantly positive, the three measurements in the central 2° studied here (G359.388+0.460, G359.604+0.306, G359.871+0.179) agree with the shifted checkerboard pattern. However, there are clear differences between the RM structure seen here and that reported by Roy et al. (2008). For example, the average RM observed by Roy et al. (2008) in the central 12° is not consistent with the foreground RM in our model (φ0). Assuming that half of the average RM is contributed by the foreground to the GC, we would expect φ0 = 206.5 rad m⁻², but find roughly the opposite of that. One possibility is that the foreground RM fit by our model is poorly constrained by our data. Indeed, § 5.1.3 shows that the absolute level of RM is poorly constrained by these narrow-band observations, but that relative values are well constrained. Another possibility is that the central 2 degrees of the Galaxy is magnetically and dynamically different from the region beyond, making comparison with the RM measured in Roy et al. (2008) less meaningful. The presence of NRFs (Nord et al. 2004; Yusef-Zadeh et al. 2004) and the "central molecular zone" (Morris & Serabyn 1996) makes the central 2° notably different from the region beyond. In summary, we argue that the RM measured in the central 2° of the Galaxy is distinct from the RM structure seen outside this region and that it reflects large-scale GC magnetic field structure. The RM structure is simplest to describe as a checkerboard pattern shifted ∼50 pc from the dynamical center of the Galaxy. The longitude shift requires some non-Keplerian motion of the ionized gas in the GC region. This is consistent with recent observations showing that the GC hosts a small starburst outflow centered ∼50 pc west of the GC (the GC lobe; Bland-Hawthorn & Cohen 2003; Law 2010). The distribution of NRFs, a more direct tracer of the GC magnetic field, is also centered tens of parsecs west of the GC (see Figure 29 of Yusef-Zadeh et al. 2004). It is clear that the electron and magnetic field distributions are not symmetric about l = 0°; our new RM measurements confirm this.

CONCLUSIONS

We have presented observations and modeling of polarized 6 cm radio continuum emission toward 0.5 square degrees of the GC region. The radio continuum survey detects polarized emission throughout the region in the form of diffuse polarized emission, compact sources, and filamentary sources. The two bands in the continuum observations allow us to measure the RM toward this polarized emission. We develop a statistical technique to measure the RM; comparing our results to more robust RM measurements shows that our technique is reliable. There is a striking large-scale pattern in RM toward the diffuse polarized emission.
Values in the eastern part of the survey are generally about +330 rad m⁻², but change to −880 rad m⁻² in the western part of the survey. There is a sharp transition around l = −0°.35 at all latitudes in the survey. Modeling of the propagation of the polarized signal shows that this pattern is induced within ∼1 kpc of the GC region. The RM measured toward radio filaments known to be in the GC region is generally consistent with that of the diffuse polarized emission. This coincidence is consistent with models for the filaments as localized enhancements of a global magnetic field. The modeling of the GC magnetized plasma shows that the RM structure constrains the orientation of the GC magnetic field. This RM pattern shows that the GC magnetic field is organized on size scales of roughly 150 parsecs. Combining these and other RM measurements in the GC region, we strengthen earlier suggestions of a checkerboard pattern in RM covering the central 300 parsecs, but only if the structure is shifted roughly 50 pc west of the dynamical center of the Galaxy. We show that the RM measured along different lines of sight and toward different tracers is consistent with this shift. The observed polarization and RM in the GC are consistent with the GC having a poloidal magnetic field that is perturbed by the motion of gas in the Galactic disk (Uchida et al. 1985; Novak et al. 2003). This model is supported by a growing body of evidence (Chuss et al. 2003; Nishiyama et al. 2010). Under this model, our RM observations constrain the GC magnetic field to be directed from south to north. Our observations also suggest that a second-order perturbation, a small outflow from the GC, has shifted the magnetic symmetry axis of the GC about 50 pc west of the dynamical center of the Galaxy. New observations can test this model in several ways. First, observing the diffuse polarized emission between 6 and 20 cm (5 and 1.4 GHz), where it becomes Faraday thick, would constrain models of its physical distribution. Expanded VLA observations with thousands of channels at these frequencies will track the Faraday rotation and depolarization well enough to create a 3D reconstruction of the magnetic field topology in the Galactic Center. Second, measuring the RM of other radio filaments would test the idea that they are preferentially aligned with the RM of the extended polarized emission. Third, the detection of diffuse polarized emission and its RM beyond the region studied here (particularly south of the plane) would confirm that it traces a general property of the GC region. We thank Farhad Yusef-Zadeh, Bryan Gaensler, Bill Cotton, and Dominic Schnitzeler for valuable discussions during this work.

APPENDIX A: SPATIALLY SMOOTHING Δθ

Mean Δθ Images

To improve visualization of the Δθ images, the mosaics were smoothed using two independent methods. Both methods were implemented in IDL. The first method of smoothing Δθ was to measure the error-weighted mean value of Δθ over small regions. Hereafter, we refer to this as the mean method. A major caveat to this method is that averaging angles is not strictly proper, since they represent vector quantities. However, averaging angles is approximately correct for small angles, which is usually true for the Δθ images presented here (Fig. 4). Furthermore, the large values in the Δθ map are generally in the noisiest parts of the image and are down-weighted by large errors. There was no significant difference when using the median instead of the mean.
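The small-angle caveat can be seen in a few lines: a naive error-weighted mean and a proper circular mean agree for small Δθ but disagree badly near the ±90° wrap. In this minimal sketch, the error-weighted mean mirrors the mean method described above, while the circular mean is shown only for contrast.

```python
import numpy as np

def naive_mean_deg(angles_deg, errors_deg):
    """Error-weighted arithmetic mean of angles (the "mean method").
    Valid only when all angles are much smaller than 1 rad."""
    w = 1.0 / np.asarray(errors_deg) ** 2
    return np.sum(w * np.asarray(angles_deg)) / np.sum(w)

def circular_mean_deg(angles_deg):
    """Average angles as 2*theta vectors (polarization angles are
    180-degree periodic), then halve; immune to the wrap-around problem."""
    a = np.deg2rad(2.0 * np.asarray(angles_deg))
    return 0.5 * np.rad2deg(np.arctan2(np.sin(a).mean(), np.cos(a).mean()))

small = np.array([-5.0, 0.0, 5.0])   # small angles: the two methods agree
wrap = np.array([-85.0, 85.0])       # near the +/-90 wrap: they differ badly
print(naive_mean_deg(small, np.ones(3)), circular_mean_deg(small))  # 0.0  0.0
print(naive_mean_deg(wrap, np.ones(2)), circular_mean_deg(wrap))    # 0.0  90.0
```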
Fitting Histograms of Δθ Images

A second, more robust method of smoothing the Δθ images is to fit the distribution of Δθ values with a model. Since a model is fit to the histogram, this approach avoids the problems of averaging angles. The model can also parameterize the noise-like and signal-like contributions to the Δθ distribution, making it robust to outliers. As described in § 3.1, histograms of Δθ were fit with a Lorentzian plus a constant background. The best-fit values and their errors were found with the Levenberg-Marquardt algorithm, implemented as MPFIT in IDL (Markwardt 2009). All errors reported are 1σ confidence intervals. Figure 4 shows two examples of histograms extracted with best-fit models. A histogram bin size of 2°.5 was used; varying the bin size by a factor of a few does not significantly change the fit. The smoothed maps have pixel sizes of 125″×125″, which are small enough to resolve structure in the mosaic, but large enough for accurate fit results. An advantage of the histogram-fitting method is that it calculates the mean and error in Δθ from the distribution of Δθ; no noise image is required. A disadvantage of the histogram-fitting method is that it assumes that all pixels in the region sampled are part of the same distribution. In fact, there are times when multiple sources, with different distributions of Δθ (i.e., different RMs), are sampled by the same histogram. In this case, the source that occupies the most pixels will dominate the histogram distribution and the best-fit value of Δθ. This is different from what happens when calculating an error-weighted average of Δθ, in which the average is dominated by the pixels with the lowest noise (i.e., the brightest sources).

Comparing Mean and Histogram-Fitting Methods

Figure A1 compares images of Δθ using the mean and histogram-fitting methods for a grid of 125″ boxes, along with an image showing the significance of the differences between the two methods in each pixel. The average difference between the two images is 0°.002 and the standard deviation is about 0°.2, so there is no significant difference between the two methods. The difference image highlights two regions with differences greater than 1σ: (359°.55, 0°.15) and (0°.15, 0°.2). These differences demonstrate the biases of the methods. In both locations, there is a small, polarized source with a different Δθ from the large, polarized background. Since the mean method favors bright sources and the histogram-fitting method favors large sources, these two regions are most likely to show a difference. While this is an important caveat, Figure A1 shows that, in general, the two methods are in good agreement. The errors on Δθ differ according to the method used. When averaging over 125-arcsec tiles, both methods give errors within 50% of each other, if the errors are < 3°. However, for errors greater than 3°, the errors found with the mean method are progressively smaller than those found by histogram fitting. The maximum error found by the mean method is about 20°, while the histogram-fitting method has a maximum error of about 200°. This is expected, since the mean method assumes that angles are much smaller than 1 rad; this is not always true for the errors in Δθ. Thus, the histogram-fitting errors are a more accurate and more conservative estimate of the true errors in the mean value of Δθ.
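A sketch of the histogram-fitting method in Python, using the Gehrels (1986) bin errors quoted in § 3.1. The paper used MPFIT in IDL; scipy's curve_fit is a stand-in here (its default unbounded method is also Levenberg-Marquardt), and the toy data are only for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian_plus_bg(theta, amp, center, hwhm, bg):
    """Lorentzian profile plus a constant background, as fit to the
    delta-theta histograms in the text."""
    return amp * hwhm**2 / ((theta - center)**2 + hwhm**2) + bg

def fit_dtheta_histogram(dtheta_deg, bin_width=2.5):
    bins = np.arange(-90, 90 + bin_width, bin_width)
    counts, edges = np.histogram(dtheta_deg, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    errs = 1.0 + np.sqrt(counts + 0.75)   # Gehrels (1986) bin errors
    p0 = (counts.max(), centers[np.argmax(counts)], 10.0, np.median(counts))
    popt, pcov = curve_fit(lorentzian_plus_bg, centers, counts,
                           p0=p0, sigma=errs, absolute_sigma=True)
    return popt, np.sqrt(np.diag(pcov))   # best-fit params and 1-sigma errors

# toy usage: a Lorentzian-like peak at +5 deg on a flat background
rng = np.random.default_rng(1)
sample = np.concatenate([rng.standard_cauchy(2000) * 8 + 5,
                         rng.uniform(-90, 90, 500)])
sample = sample[np.abs(sample) < 90]
params, errors = fit_dtheta_histogram(sample)
print("center = %.1f +/- %.1f deg" % (params[1], errors[1]))
```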
RAMAN SPECTROSCOPY AND X-RAY FLUORESCENCE IN MOLECULAR ANALYSIS OF YELLOW BLOCKS FROM THE ARCHEOLOGICAL SITE PLAYA MILLER 7 (NORTHERN CHILE)

Yellow blocks from the archaeological site Playa Miller 7 (PLM7), on the coast of the Atacama Desert in northern Chile, were analyzed by Raman spectroscopy and portable X-ray fluorescence (XRF). Our results identify for the first time the use of K-jarosite and natrojarosite in prehispanic times (approx. 2500 years BP). In search of a possible supply source for this mineral of hydrothermal origin, our surveys focused on Andean geothermal areas, with identification, so far, of a single source in the region of Arica and Parinacota: Jurasi (JU), located at 4000 mamsl. Comparison of the Raman spectra between the archaeological samples and Jurasi allows us to infer that this hydrothermal source could have been used as a source of yellow pigment by the prehispanic inhabitants of the Formative period (3700-1500 years B.P.).

INTRODUCTION

The combined use of Raman spectroscopy and X-ray fluorescence has proved to be a powerful pair of techniques for the analysis of archaeological objects [1][2][3][4]. Raman spectroscopy allows obtaining structural information on various molecular systems. This technique is part of vibrational spectroscopy, providing information on the normal modes of vibration of different molecular groups. The greatest potential of Raman relates to its specificity, sensitivity, reproducibility, in-situ applicability, and spatial and spectral resolution [5], besides being a non-invasive and non-destructive technique. These advantages, coupled with recent developments in Raman instrumentation, have made it possible to extend the use of this technique to conservation and archaeology [5][6][7][8][9][10][11]. Among others, the use of Raman has allowed characterizing pigments and dyes used in the preparation of manuscripts, paintings, ceramics and textiles [12][13][14][15][16][17].

Moreover, X-ray fluorescence (XRF) is, undoubtedly, one of the most commonly applied techniques in conservation and archaeology [18]. The development of portable equipment has contributed over the last 40 years to increasing its application, because of its non-destructive and non-invasive characteristics and because it allows, in many cases, in-situ analysis without sample preparation. To these characteristics is added the ability to identify the presence of elements in major and minor amounts. Finally, this technique is appreciated for its relatively low cost and short analysis time [19].

From the archaeological point of view, the particular conditions of the Chilean Atacama Desert favour the conservation of a wide variety of archaeological materials, some of them attributed to the different periods of the prehispanic chronological sequence (10500-450 years B.P.)
and post-Spanish contact (after the fifteenth century). From the time of the first settlements in the region, pigment is registered on fiber mats in funerary contexts along the coast, or as fragments found in the stratigraphy of some occupations in rock shelters located at about 4000 mamsl [20]. During the Archaic period (10500-3500 years B.P.), the presence of color is found on other supports such as wood, leather, the filling and coating of mummified bodies, and rocky walls [20][21]. In each of these, pigment was prepared and applied as paint, i.e., as a mixture containing at least one pigment and a binder. The pigments consist of manganese oxide for black [20], iron oxide for red [22], copper oxides for green [23], and different sorts of clay for white or grey [24]. Among the Chinchorro hunter-fishermen of the Archaic period (7000-3500 years B.P.) [25][26], these pigments were mixed with white or grey clay [27][28] to be incorporated in the stuffing of bodies or applied as a surface coating. In the rock art of Andean hunter-gatherers, the use of iron and manganese is recognised too, but with different morphologies and sizes, together with aluminosilicates, possibly mixed with water as a binder [21].

Until now in South America, yellow pigments have received little attention. Moreover, a yellow pigment such as natrojarosite has only been reported in the cave paintings of the Inca Cave 4 site in the region of Jujuy, Argentina [29], and at various other sites in Argentine Patagonia [30]. The use of jarosites in South America joins other known cases, for example, in the Old World and the Egyptian Old Kingdom (2300-2600 years BC) [31] or in the murals of the Beroe fortress, Romania (4th-6th century) [32]. Jarosites are a large family of minerals with a general formula of the type M_n(Fe³⁺)₆(SO₄)₄(OH)₁₂, where M can be K⁺, (NH₄)⁺, Na⁺, Ag⁺ or Pb²⁺, and where n = 2 for monovalent cations and 1 for divalent cations [33]. The mineralogical characteristics and chemical properties of this family of compounds have been widely studied [33][34][35][36][37].

In this paper, we present the results obtained from the analysis by Raman spectroscopy and XRF of samples of yellow pigment blocks from the PLM7 site, located on the coast of northern Chile (Fig. 1). This colour is very rarely found in the contexts of archaic hunter-gatherer groups. Until now, yellow paint had only been identified on the coating of a mummy of the Macarena Chinchorro site [38] and in rock art paintings of the Andean foothills, but without analyses. The samples analyzed here have a very clear and very bright yellow colour, which may acquire an ocher tonality. Thus, the aim of this paper is to report for the first time the use of minerals of the jarosite family in Chile in ancient times (3700-1500 years BP). Moreover, this work represents the first application of these techniques to the analysis of archaeological remains in Chile.
2. Experimental

2.1. Samples

PLM7 samples were taken in the laboratories of the University of Tarapacá Museum of San Miguel de Azapa. These samples come from yellow ovoid blocks, with compact to semi-compact structure (Table 1 and Fig. 2). Many of them have linear fingerprint extraction marks, so these blocks may have been intentionally manufactured as a product, probably to store the pigment for use at different times as needed. The blocks consist mainly of finely ground pigment that adheres very easily. In some cases, blocks have a more heterogeneous composition visible to the naked eye, given the presence of finely ground elongated structures incorporated in the mixture, identified as algae [39]. For sampling, we favored already-broken blocks, detached fragments or powder (Fig. 2). Each sample was stored in a plastic container to be transported and then analyzed. Additionally, surveys were conducted in different areas of the region that could have been pigment supply places in ancient times. Recent mining activity makes it difficult to find evidence related to the extraction of pigment in the past. So far, only the area known as Los Pumas shows remains linked to prehispanic mining [20]. Regarding the yellow sources analyzed in this study, only the JU area showed deposits similar in colour to that identified at PLM7. This search was conducted in order to reproduce the conditions of exploration that occurred in the past.

2.2. Analytical measurement

Raman spectra of the yellow blocks extracted from Playa Miller 7 and Jurasi were recorded on a Renishaw RM1000 Raman microscope system, equipped with 514, 633, and 785 nm laser lines for excitation, a Leica microscope and an electrically cooled charge-coupled device detector. The instrument was calibrated using the 520 cm⁻¹ line of a Si wafer and a 50× objective. The resolution was set to 4 cm⁻¹, and 5-20 scans of 10 s each were averaged; spectra were recorded in the 1800-200 cm⁻¹ region. The spectral scanning conditions were chosen to avoid sample degradation and photodecomposition. Data were collected and plotted using the programs WIRE 2.0 and GRAMS 8.0.

XRF spectra were recorded with a portable Bruker Tracer III-SD instrument fitted with a 10 mm² XFlash® SDD detector (Peltier cooled; typical resolution 145 eV at 100000 cps) and equipped with an Rh-target X-ray tube (max. voltage 40 keV), using 15 keV of energy and an acquisition time of 120 s. Data were collected and plotted using the Tracer software S1PXRF 3.8.3.

RESULTS

A total of seven samples from the archaeological PLM7 site and 3 extracted from the JU hydrothermal site were analysed. Raman spectra were recorded on different zones of each sample. Each spectrum obtained was compared with data published in the RRUFF online database [40]. In most of the yellow blocks of heterogeneous composition, containing algae, it was not possible to obtain a Raman spectrum due to fluorescence. Only in two cases was it possible to analyse the spectrum. The spectral scanning was performed in sample areas where a yellow tonality was clearly distinguished. Furthermore, two of the three yellow blocks from the hydrothermal site JU display an analysable spectrum. All registered Raman spectra of yellow zones showed characteristic bands ascribed to the jarosite family (Fig. 3). Different synthetic jarosite compounds were identified by Sasaki et al.
[37] by using Raman, infrared and X-ray data. They assigned different vibrational modes to the SO₄²⁻ molecular fragment (ν1, ν2, ν3 and ν4). ν1 and ν3 correspond to symmetric (νs) and antisymmetric (νas) stretching, respectively. The ν2 and ν4 modes correspond to bending (δ) vibrations. Other features observed in the vibrational spectra of jarosites are associated with the FeO fragment. The spectral assignment in Table 2 is proposed on the basis of the works by Sasaki et al. [37] and Frost et al. [36] and general spectral data [41]. In our Raman spectra (Fig. 3), the νs SO₄²⁻ mode is observed in the 1014-1017 cm⁻¹ range. A νas SO₄²⁻ mode is ascribed in all spectra to signals in the 1112-1118 cm⁻¹ region. Another νas SO₄²⁻ vibrational mode is observed near 1160 cm⁻¹. A δSO₄²⁻ mode appears in all spectra in the 623-628 cm⁻¹ range. Another δSO₄²⁻ vibrational mode is assigned to the band located in the 442-454 cm⁻¹ region, following Frost et al. [36]. The single band observed at 565 cm⁻¹ in the spectrum of the PLM7-14 sample is assigned to FeOH deformation modes. The strong band in all spectra in the 219-224 cm⁻¹ range is assigned to one of the vibration modes of the Fe-O bond. Two bands in the 280-370 cm⁻¹ spectral range are assigned to FeO vibrations. Distinguishing different types of jarosites in natural samples is difficult due to their heterogeneity. Therefore, we decided to carry out complementary XRF analysis.

XRF analysis of the yellow blocks from PLM7 and JU showed significant amounts of S and Fe (Fig. 4), which is consistent with the presence of compounds of the jarosite family. Furthermore, the significant amount of K suggests that K-jarosite is present in the samples. Under the conditions of the XRF measurement, it was not possible to detect the presence of Na (Table 1). However, in our previous work by SEM-EDX and X-ray diffraction [39], we detected the presence of this element in the PLM7 samples. Finally, the absence of vibration modes associated with the molecular N-H fragment in the Raman spectra, as well as the absence of the metals Ag and Pb in the XRF analysis, suggest the presence of K-jarosite and Na-jarosite in PLM7 and JU.

CONCLUSIONS

The results obtained by Raman spectroscopy indicate that several of the observed vibrational signals correspond to normal modes of vibration associated with jarosite-type compounds. Slight differences in wavenumbers are not enough to differentiate the type of jarosite. However, from our XRF results, and previous results obtained by SEM-EDX and XRD [40], we conclude that in the yellow blocks from the archaeological site PLM7 and the hydrothermal source JU, the predominant yellow pigments are natrojarosite and K-jarosite.

Alongside K-jarosite and natrojarosite, used as the base and principal material of the yellow blocks, it is possible to find other components such as algae and quartz [39]. While Sepulveda et al. were able to determine the presence of natrojarosite by SEM-EDX and XRD in these same yellow blocks, it was not until this work that the jarosite mixtures used by the prehispanic inhabitants of the coastal Atacama Desert were identified more specifically.

Figure 1: Map of Playa Miller 7 on the coast of northern Chile.

Figure 2: Type samples analyzed from the yellow blocks: (a) ovoidal and (b) powder.

Table 1: Results obtained by magnifying glass and XRF from samples taken from PLM7 and Jurasi.

Table 2: Wavenumbers and the most probable assignment for the Raman bands of jarosites from PLM7 and JU.
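As a compact illustration of the band-assignment step described above, the sketch below matches observed band positions against the wavenumber ranges quoted in this section. The reference ranges are transcribed from the text itself; the tolerance value is an arbitrary choice, and a real analysis would compare against a full reference database such as RRUFF.

```python
# Reference ranges (cm^-1) transcribed from the assignments in the text.
JAROSITE_BANDS = {
    "nu_s SO4 (nu1)":   (1014, 1017),
    "nu_as SO4 (nu3)":  (1112, 1118),
    "delta SO4 (nu4)":  (623, 628),
    "delta SO4 (nu2)":  (442, 454),
    "Fe-O":             (219, 224),
}

def match_bands(observed_cm1, tolerance=5.0):
    """Return, for each observed band position, the jarosite mode whose
    reference range lies within `tolerance` cm^-1 of it."""
    hits = {}
    for band in observed_cm1:
        for mode, (lo, hi) in JAROSITE_BANDS.items():
            if lo - tolerance <= band <= hi + tolerance:
                hits[band] = mode
    return hits

# hypothetical band positions read off a measured spectrum
print(match_bands([1015, 1115, 625, 448, 221]))  # all five assigned
```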
2018-12-07T10:43:27.274Z
2013-09-01T00:00:00.000
{ "year": 2013, "sha1": "1788dee0416ffdede461344e2796f63e2bb6f35e", "oa_license": "CCBYNC", "oa_url": "http://www.scielo.cl/pdf/jcchems/v58n3/art08.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "1788dee0416ffdede461344e2796f63e2bb6f35e", "s2fieldsofstudy": [ "Geology", "Chemistry" ], "extfieldsofstudy": [ "Geology" ] }
51943539
pes2o/s2orc
v3-fos-license
Near-Infrared Spectroscopy Combined with Absorbance Upper Optimization Partial Least Squares Applied to Rapid Analysis of Polysaccharide for Proprietary Chinese Medicine Oral Solution

Near-infrared (NIR) spectroscopy was applied to reagent-free quantitative analysis of the polysaccharide content of a brand product of proprietary Chinese medicine (PCM) oral solution. A novel method, called absorbance upper optimization partial least squares (AUO-PLS), was proposed and successfully applied to wavelength selection. Based on varied partitioning of the calibration and prediction sample sets, parameter optimization was performed to achieve stability. On the basis of the AUO-PLS method, the selected upper bound of appropriate absorbance was 1.53 and the corresponding waveband combination was 400-1880 & 2088-2346 nm. Using random validation samples excluded from the modeling process, the root-mean-square error and correlation coefficient of prediction for polysaccharide were 27.09 mg·L⁻¹ and 0.888, respectively. The results indicate that the NIR prediction values are close to the measured values. NIR spectroscopy combined with the AUO-PLS method provides a promising tool for quantification of the polysaccharide in PCM oral solution, and this technique is rapid and simple compared with conventional methods.

Introduction

Proprietary Chinese medicine (PCM) oral solution is a kind of health-care nourishing product that is convenient to consume. According to the theory of traditional Chinese medicine, modern research results and practical experience, it is crafted by extracting active components from a variety of Chinese herbal medicines. The compound polysaccharide, the main active ingredient of PCM oral solution, can effectively regulate and enhance human immunity, prevent diseases and improve physical fitness. In the production of PCM oral solution, real-time determination of the polysaccharide content is necessary to monitor product quality. The conventional method [1] requires sample pretreatment and chemical reagents, which makes it unsuitable for real-time monitoring of production quality. Therefore, a rapid, simple and reagent-free method has significant practical value.

Near-infrared (NIR) spectroscopy primarily reflects absorption by overtones and combinations of vibrations of X-H functional groups (such as C-H, O-H and N-H). Because of the weak absorption strength, most samples can be measured directly without preprocessing. This rapid, simple and non-destructive technique has obvious advantages and is commonly used in many areas, including agriculture [2]-[6], food [7] [8], environment [9], biomedicine [10]-[13] and pharmaceuticals [14] [15]. However, to the best of our knowledge, a quantification method for the determination of polysaccharide in PCM oral solution using NIR spectroscopy has not yet been developed. Since NIR spectra overlap seriously and show no distinct absorption bands, especially for a PCM oral solution with multiple components, appropriate chemometric methods must be employed to obtain wavelength optimization and quantitative analysis models with a high signal-to-noise ratio (SNR). This makes it possible to extract informative variables and remove noise interference. Partial least squares (PLS) regression has been recognized as an effective multivariate analysis method and has been widely applied in the spectral analysis field [2]-[13].
Zengjian oral solution is a well-known brand product of PCM healthy oral solution, produced by refining polysaccharide from natural plants such as tremella, enoki and Chinese wolfberry. In this study, absorbance upper optimization PLS (AUO-PLS) was proposed, and NIR spectroscopy combined with the AUO-PLS method was successfully applied to the rapid and reagent-free quantification of polysaccharide for Zengjian oral solution.

The stability of a spectral analysis model is very important in practice. Numerous experiments show that differences in the partitioning of calibration and prediction sample sets can result in fluctuations in predictions and parameters (e.g., the number of PLS factors), thus leading to unstable results [3] [5] [8] [9] [12]. In the current study, a rigorous process of calibration, prediction and validation based on randomness and stability was performed to achieve the goal of spectroscopic analysis.

Experimental

2.1. Materials, Instruments, and Measurement Methods

A total of 1533 Zengjian oral solution samples were collected from Infinitus (China) Company Ltd. The polysaccharide concentrations of these samples were measured with a UV-2300 UV-Vis spectrophotometer (Shanghai Tianmei, China) using the mineral chameleon titration method. Mineral chameleon titration is a volumetric analysis method with potassium permanganate solution as the titrant; it requires the use of chemical reagents and achieves accurate quantification of the polysaccharide concentration of a sample through a colour reaction. The measured values ranged from 330.26 mg·L⁻¹ to 679.99 mg·L⁻¹, with a mean value and standard deviation of 484.67 and 52.53 mg·L⁻¹, respectively; these were used as the reference values for the calibration modeling of the NIR spectroscopic analysis. Based on the obtained calibration model, a new reagent-free method for rapid determination of the polysaccharide concentration of PCM oral solution samples can be established with NIR spectroscopy.

An XDS Rapid Content™ Solution Grating Spectrometer (FOSS, Denmark) equipped with a transmission accessory and a 2-mm cuvette was used for spectroscopy. The scanning spectrum spanned 400 nm to 2498 nm with a 2-nm wavelength gap, covering the entire NIR region and part of the visible region. The 400-1100 nm and 1100-2498 nm wavebands were detected with silicon and lead sulfide detectors, respectively. Each sample was scanned three times, and the mean of the three measurements was used for modeling. The spectra were obtained at 25˚C ± 1˚C and a relative humidity of 45% ± 1%.

2.2. Calibration, Prediction, and Validation Process with Stability

First, 693 samples were randomly selected from the total of 1533 samples as the validation sample set; these were not used in the modeling optimization process. Then, the remaining 840 samples were used as the modeling sample set and were further randomly divided into calibration (420 samples) and prediction (420 samples) sample sets 100 times. Calibration and prediction models were established for all 100 divisions, and the model parameters were optimized according to the mean prediction effects over all divisions to obtain objective and stable models. A minimal sketch of this repeated random-split evaluation is given below.
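As a rough illustration of this stability-oriented protocol, the following sketch computes SEP_i over repeated random 50/50 splits and the combined indicator SEP+ (defined formally in the next subsection). The array names X_model and y_model are illustrative, and scikit-learn's PLSRegression stands in for the authors' PLS implementation; this is a sketch under those assumptions, not the paper's original code.

```python
# Repeated random calibration/prediction splits; SEP+ = SEP_Ave + SEP_SD
# summarizes both prediction accuracy and stability across the divisions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)

def sep_plus(X_model, y_model, n_factors, n_divisions=100):
    """Return (SEP_Ave, SEP_SD, SEP+) over random 50/50 divisions."""
    seps = []
    n = len(y_model)
    for _ in range(n_divisions):
        idx = rng.permutation(n)
        cal, prd = idx[: n // 2], idx[n // 2:]
        pls = PLSRegression(n_components=n_factors)
        pls.fit(X_model[cal], y_model[cal])
        y_hat = pls.predict(X_model[prd]).ravel()
        # Root-mean-square error of prediction for this division (SEP_i)
        seps.append(np.sqrt(np.mean((y_model[prd] - y_hat) ** 2)))
    seps = np.asarray(seps)
    return seps.mean(), seps.std(), seps.mean() + seps.std()

# The number of PLS factors F is then chosen to minimize SEP+, e.g.:
# best_F = min(range(1, 21), key=lambda F: sep_plus(X_model, y_model, F)[2])
```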
The root-mean-square errors (SEC, SEP) and correlation coefficients (R_C, R_P) for calibration and prediction in the modeling set were calculated, respectively. For each division i of the calibration and prediction sets, these were denoted SEC_i, SEP_i, R_C,i and R_P,i, i = 1, 2, …, 100. The mean values (SEP_Ave, R_P,Ave) and standard deviations (SEP_SD, R_P,SD) of SEP_i and R_P,i over all the divisions were further calculated. These values were used to analyze model prediction accuracy and stability. The quantity SEP+ = SEP_Ave + SEP_SD was used as a comprehensive indicator of the prediction accuracy and stability of a model; a smaller value of SEP+ indicates higher accuracy and stability. The model parameters were selected to achieve the minimum SEP+. The selected model was then revalidated against the validation sample set. The root-mean-square error and correlation coefficient of prediction in the validation sample set were then calculated and denoted SEP and R_P, respectively. The calculation formulas are as follows:

SEP = sqrt( (1/m) Σ_{k=1..m} (Ĉ_k − C_k)² )    (1)

R_P = Σ_{k=1..m} (C_k − C_Ave)(Ĉ_k − Ĉ_Ave) / sqrt( Σ_{k=1..m} (C_k − C_Ave)² · Σ_{k=1..m} (Ĉ_k − Ĉ_Ave)² )    (2)

where m is the number of validation samples; C_k and Ĉ_k are the measured and predicted polysaccharide concentrations of the k-th validation sample, respectively; and C_Ave and Ĉ_Ave are the mean measured and mean predicted polysaccharide values of all the validation samples, respectively.

2.3. Selection of Number of PLS Factors with Stability

The number of PLS factors (F) is an important parameter of the PLS method; it corresponds to the number of spectral latent variables carrying sample information. The selection of a reasonable F is both necessary and difficult. If F is set too small, the sample information in the spectra cannot be fully exploited; if F is set too large, extra noise is introduced into the model. In both cases the prediction ability deteriorates. In the present study, F was selected according to the minimum SEP+ over all divisions of the calibration and prediction sample sets. Thus, the optimal number of PLS factors exhibits stability and practicality.

2.4. AUO-PLS Method

The Beer-Lambert law is described by the following equation:

A(λ) = log₁₀ [ I₀(λ) / I₁(λ) ] = −log₁₀ T(λ)    (3)

where λ is the wavelength; A(λ) is the absorbance; I₀(λ) and I₁(λ) are the intensity of the incident light and the intensity of the light transmitted through the sample, respectively; and T(λ) is the transmittance, i.e., the ratio of transmitted to incident light intensity. Conversely, Equation (3) can be expressed as:

T(λ) = I₁(λ) / I₀(λ) = 10^(−A(λ))    (4)

For example, according to the above equation,
when A(λ) = 4, the transmitted light intensity is merely one ten-thousandth of the incident light intensity, i.e., 99.99% of the incident light is absorbed by the sample. In this case, the transmitted light is very weak and difficult to detect, and is thus likely to introduce noise into the spectrum. Therefore, wavelength selection with appropriate absorbance values, corresponding to high-quality sample information and low noise levels, is necessary. In this study, a novel PLS-based wavelength selection method, named absorbance upper optimization PLS (AUO-PLS), is proposed on the basis of selecting the upper bound of absorbance, which can appropriately eliminate noisy bands. The specific steps are as follows:

Step 1: A wavelength screening region (Δ) is set in advance for the entire scanning region according to the physical and chemical characteristics of the measured objects and the instrument properties. In the average spectrum of all samples within the region Δ, the minimum and maximum absorbance values are denoted A_min and A_max, respectively. An appropriate absorbance step (ε) is set.

Step 2: For some value A* with A* ≥ A_min, the upper bound of absorbance A_upper is varied from A* to A_max with step ε. According to the relationship between wavelength and absorbance within the region Δ, for each A_upper the absorbance interval (A_min, A_upper) corresponds to a waveband combination.

Step 3: Each obtained waveband combination is employed to establish the PLS calibration and prediction models. The corresponding SEP_Ave, R_P,Ave, SEP_SD, R_P,SD and SEP+ values are then calculated.

Step 4: According to the minimum SEP+, the optimal A_upper is determined, and the waveband combination corresponding to (A_min, A_upper) is selected.

In this study, the region Δ was set to the entire scanning region (400-2498 nm) with 1050 wavelengths. A_min was greater than or close to zero and A_max was less than or close to five; therefore, A_min and A_max were set to 0 and 5, respectively. Note that another obvious absorption peak, with an absorbance value of 1.40, lies around 1450 nm. In order to retain the information of that region, the A* value was set to 1.40 (i.e., A_upper > 1.40), because the main purpose here is to remove the noisy bands with saturated absorption. The absorbance step ε was set to 0.01, and the number of PLS factors (F) was selected as described in Section 2.3. Figure 1 shows a sketch map of the relationship between wavelength and absorbance for the case in which A_upper = 1.53; the corresponding waveband combination is 400-1880 & 2088-2346 nm.

Waveband Combination Selection with AUO-PLS

The NIR spectra of the 1533 Zengjian oral solution samples over the entire scanning region (400-2498 nm) are shown in Figure 2. As indicated in the figure, a saturated absorption region appears at about 1900-2000 nm, caused by the strong absorption of water molecules and the scattering of some tangible components in the oral solution samples. The AUO-PLS method described in Section 2.4 was applied to avoid the noisy wavebands with high absorption.
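To make Steps 1-4 concrete, here is a minimal sketch of the AUO-PLS selection loop. It reuses the sep_plus helper from the earlier sketch; the array names (X, y, wl) and the 50/50 split inside sep_plus are illustrative assumptions rather than the paper's exact configuration.

```python
# AUO-PLS sketch: sweep the absorbance upper bound A_upper and keep the
# waveband combination (wavelengths whose mean absorbance lies in
# (A_min, A_upper)) that minimizes SEP+.
import numpy as np

def auo_pls(X, y, n_factors, a_star=1.40, a_max=5.0, eps=0.01):
    mean_spec = X.mean(axis=0)            # average spectrum over all samples
    a_min = mean_spec.min()
    best = (np.inf, None, None)           # (SEP+, A_upper, waveband mask)
    for a_upper in np.arange(a_star, a_max + eps, eps):
        mask = (mean_spec >= a_min) & (mean_spec <= a_upper)
        if mask.sum() < n_factors:        # too few wavelengths to fit PLS
            continue
        _, _, sp = sep_plus(X[:, mask], y, n_factors)
        if sp < best[0]:
            best = (sp, a_upper, mask)
    return best

# Example usage (hypothetical arrays):
# sep_p, a_upper, mask = auo_pls(X_model, y_model, n_factors=best_F)
# selected_bands = wl[mask]  # e.g., 400-1880 & 2088-2346 nm in the paper
```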
The SEP+ values for each upper bound of absorbance A_upper are shown in Figure 3. The results show that the polysaccharide prediction achieved the minimum SEP+ at about A_upper = 1.53. The corresponding waveband combination was 400-1880 & 2088-2346 nm with 871 wavelengths, and the prediction accuracy and stability results (SEP_Ave, R_P,Ave, SEP_SD, R_P,SD and SEP+) are summarized in Table 1. For comparison, a full PLS model based on the entire scanning region was also established, and its prediction effects are likewise summarized in Table 1. The SEP+ value for the optimal AUO-PLS model was 27.81 mg·L⁻¹, clearly better than that of the full PLS model. The relative SEP value (RSEP) for the optimal AUO-PLS model was 5.6%. The results show that, by avoiding the noisy wavebands with high absorption, the prediction ability was improved and the model complexity was reduced.

Model Validation

The randomly selected validation samples, which were excluded from the modeling optimization process, were used to validate the adopted AUO-PLS model. The PLS regression coefficients were calculated using the spectral data and measured polysaccharide concentrations of all modeling samples with the selected parameter F. The predicted polysaccharide concentrations of the validation samples were then calculated using the obtained regression coefficients and the spectra of the validation samples.

Figure 4 shows the relationship between the NIR-predicted and measured values of the 693 validation samples. The evaluation values (SEP and R_P) for the validation effect were 27.09 mg·L⁻¹ and 0.888, respectively. The results indicate that the NIR prediction values of the validation samples are close to the measured values. Satisfactory validation effects were achieved for the random samples because stability was considered in the modeling optimization process.

Conclusion

Wavelength selection is crucial for spectroscopic analysis, as it improves the effectiveness of prediction, reduces model complexity, and aids in the design of a specialized spectrometer with a high signal-to-noise ratio. The proposed AUO-PLS method focuses on the optimization of the upper bound of absorbance to avoid noise interference caused by high absorbance. Based on the relationship between wavelength and absorbance, the appropriate waveband combination was selected. NIR spectroscopy combined with the proposed AUO-PLS method was successfully employed for the reagent-free and rapid quantitative analysis of polysaccharide in Zengjian oral solution. A rigorous process of calibration, prediction and validation based on randomness and stability was performed to produce objective and stable models. We believe that AUO-PLS can also be applied to other brand products of PCM healthy oral solution.

Figure 1. Sketch map of the relationship between wavelength and absorbance.

Figure 3. SEP+ values for each upper bound of absorbance with the AUO-PLS method.

Figure 4. Relationship between the predicted and measured values of the validation samples with the AUO-PLS method.

Table 1. Prediction effects of the full PLS and AUO-PLS models for polysaccharide.
2019-04-07T13:06:36.581Z
2016-03-04T00:00:00.000
{ "year": 2016, "sha1": "0af8194a696d07ba6ec68acd4da97c2237489ead", "oa_license": "CCBY", "oa_url": "https://www.scirp.org/journal/PaperDownload.aspx?paperID=64579", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "0af8194a696d07ba6ec68acd4da97c2237489ead", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [ "Chemistry" ] }
268954069
pes2o/s2orc
v3-fos-license
Energy-Efficient Resource Management for Real-Time Applications in FaaS Edge Computing Platforms

Edge computing and Function-as-a-Service are two emerging paradigms that enable a timely analysis of data directly in the proximity of cyber-physical systems and users. Function-as-a-Service platforms deployed at the edge require mechanisms for resource management and allocation to schedule function execution and to scale the available resources in order to ensure the proper quality of service to applications. Large-scale deployments will also require mechanisms to control the energy consumption of the overall system, to ensure long-term sustainability. In this paper, we propose a technique to schedule function invocations on edge resources and to power down idle edge nodes during periods of low demand. In doing so, our technique aims at reducing the overall energy consumption without incurring service level agreement violations. Experimental evaluations demonstrate that the proposed approach reduces service level agreement violations by at least 78.1% and energy consumption by at least 62.5% on average, using synthetic and real-world datasets, w.r.t. different baselines.

Introduction

In the era of rapidly advancing technology and ever-increasing demand for real-time data processing, a new paradigm known as edge computing has emerged as a promising solution. Edge computing represents a shift from the traditional centralized computing model, where data is processed in a remote data center or cloud, to a decentralized approach that brings computational resources closer to cyber-physical systems and users [8,17], allowing for faster data processing and improved efficiency. Instead of sending all data to a central location for processing, edge computing supports data analysis on edge nodes installed at the edge, thus enabling quicker insights and allowing for rapid actions [6,11]. This paradigm is particularly relevant in the context of Internet of Things (IoT) applications, where the analysis of the data generated by IoT devices is often required to be performed within a certain timeframe, e.g., in autonomous vehicles, remote healthcare monitoring, industrial automation, and smart city applications [9,10,16]. By bringing computation resources closer to the point of action, edge computing not only enables rapid decision-making but also enhances the overall system performance [5,11]. This paradigm, however, is not intended to replace cloud computing entirely but rather to complement it, creating a distributed architecture that optimizes the flow of data between the edge and the cloud [8].

Concurrently with edge computing, a novel service paradigm has emerged to improve the flexibility and efficiency of data analysis: the Function-as-a-Service (FaaS) model. FaaS offers developers the opportunity to run code in the form of discrete, stateless functions triggered by specific events, without worrying about the underlying infrastructure management [13]. FaaS is an ideal service model for edge computing: by distributing function execution to edge nodes, quicker response times can be ensured for event-triggered data analysis, which is crucial for applications requiring rapid decision-making.
In the context of large distributed computing platforms, efficient workload assignment and auto-scaling of the available resources are two critical aspects for maximizing performance and optimizing resource utilization. In FaaS platforms the former can be achieved by assigning function execution to the most suitable devices based on factors like proximity, resource availability, and computational capabilities. In edge computing platforms, auto-scaling can be performed by adding or removing nodes, a task that is crucial for the overall system efficiency [5,11] as it ensures that the proper amount of resources is available, thus avoiding under-utilization or overloading of edge devices.

In response to the increasing demand for processing, however, significant progress can still be made in enhancing task assignment by integrating energy consumption considerations for processing workloads on edge nodes. This advancement is foreseen to facilitate identifying and selecting the most suitable edge node for each workload demand to run functions, considering both processing capability and energy consumption levels. The latter in particular is envisioned to be vital to lower the energy and CO₂ footprint, while still ensuring the required Quality of Service (QoS) [2,13].

Our work designs a model to optimize resource provisioning and the execution of user functions while addressing energy use. In Section 4, we introduce a model that minimizes energy by smartly assigning workloads to edge nodes. For automatic scaling, we employ a dynamic approach, powering off idle nodes during low load and activating them during heavier workloads. The proposed approach is assessed considering a realistic use case of image analysis applications for video surveillance, where images are analyzed by invoking functions. Our performance evaluation, based on simulations, employs both synthetic and real workload traces, and the performance of the proposed approach is compared against other policies. The results show that our proposal reduces the overall system energy consumption while still ensuring the required QoS.

The remainder of this article is structured as follows: Section 2 presents an overview of related works. Section 3 describes the system model. In Section 4, our proposed solution is detailed. Its performance evaluation, together with that of other solutions, is discussed in Section 5. Finally, Section 6 concludes the paper.

Related works

Auto-scaling is essential in cloud environments and encompasses three common strategies: horizontal scaling adjusts capacity by adding or removing instances/nodes, with each handling a portion of the workload; vertical scaling resizes existing resources to match demands; and hybrid scaling combines both. In FaaS platforms, scheduling is a crucial component of proper resource management, as it governs the actual execution of functions on the nodes of the platform. In the following, we provide a concise overview of related works, presenting first the auto-scaling methods proposed for edge computing platforms (Section 2.1), then scheduling solutions in FaaS (Section 2.2). We conclude the section by highlighting the novelty of our contribution w.r.t. the literature in Section 2.3.

2.1 Auto-scaling approaches at the edge

2.1.1 Horizontal auto-scaling approaches

Lee et al.
[7] propose an enhanced auto-scaling method for service management in a Mobile Edge Computing (MEC) environment. A crucial aspect of this approach is making accurate scaling decisions regarding the components to scale and where to apply these decisions. To address this, the proposed method incorporates a Deep Q-Network (DQN) model for selecting scaling actions based on the given state, and a decision model that complements the DQN model. The decision model ensures that scaling actions are applied to the appropriate locations, considering factors such as QoS, operating costs, and resource availability. Silva et al. [15] present a method for horizontal auto-scaling at the network edge using online machine learning. Their approach employs the MAPE-K control loop architecture to adapt container numbers to workload changes. This method not only dynamically adjusts scaling based on real-time workload fluctuations but also transitions to proactive scaling once the prediction model reaches optimal performance, allowing it to predict and initiate scaling actions preemptively.

2.1.2 Vertical auto-scaling approaches

The Q-Learning algorithm's limitation of selecting actions from a restricted action space prevents it from achieving precise control. To address this, Gan et al. [4] introduce a vertical auto-scaling algorithm that extends the Proximal Policy Optimization (PPO) method to a continuous action space, enabling enhanced control. PPO is a reinforcement learning algorithm renowned for optimizing policies in sequential decision-making tasks, striking a balance between stability and sample efficiency.

Li et al. [8] focus on optimizing dynamic auto-scaling and adaptive service placement in edge computing environments. They start by representing the architecture of edge computing and microservice-based applications as graphs. The objective is to minimize request delays while taking into account resource and bandwidth limitations. To address this problem, the paper proposes a dynamic multi-stage auto-scaling model that incorporates workload prediction for microservices and evaluates the performance of edge nodes.

2.1.3 Hybrid auto-scaling approaches

Rossi et al. [12] examine a model where a black-box application serves requests. To manage heavier workloads, multiple concurrent instances can be created using containers, with each independent instance meeting the response time requirements. Dynamic resource allocation handles varying workloads for performance, although elasticity can lead to downtime. Rzadca et al. [14] introduce "Autopilot," a machine-learning-driven approach that optimizes resource scaling. Autopilot reduces resource underutilization and the risk of task termination by forecasting vertical resource limits from historical data and employing rule-based horizontal scaling techniques. Vozmediano et al. [17] concentrate on responsive threshold-based methods, enabling users to set scaling rules using performance metrics. Edge nodes and cloud sites independently track metrics, triggering scaling actions if thresholds are breached or unmet within set time intervals. Horizontal scaling adds/removes a specified number of virtual machines (VMs) at the location, while vertical scaling adjusts resources within each VM based on workload. This approach suits small and medium-sized edge computing platforms.

2.2 Scheduling in FaaS

To efficiently distribute workloads and alleviate potential overload on edge devices within an edge environment, Ciavotta et al.
[2] devised a decentralized FaaS architecture. This approach enables the authors to distribute function execution across all edge devices, mitigating the risk of nodes becoming overloaded. In their article, the authors employ predictive modeling to forecast the incoming workload for the upcoming time slot, calculating the resource requirements for various classes of functions. Russo et al. [13] introduce a comprehensive approach aimed at enhancing control over scheduling and resource allocation within FaaS platforms. Their strategy addresses the challenge of managing various services catering to diverse user classes. To accomplish this, the authors employ a First-Come First-Served methodology, ensuring the fulfillment of QoS requirements.

2.3 Our contribution

The literature on FaaS at the edge offers solutions for ensuring QoS in general, but none of them consider the problem of energy efficiency. To address this crucial aspect, our proposed solution takes this metric into account and presents a novel model designed to tackle the energy challenge. Our model aims to minimize energy consumption by calculating the amount of energy required for function execution. By doing so, we create a list of edge nodes along with their energy consumption levels, enabling efficient function execution. To achieve efficient request handling, this study further formulates a model to solve this issue effectively. Moreover, to enhance auto-scaling efficiency, our proposed approach powers down idle edge nodes when they are not in use and dynamically adds more edge devices to provision resources whenever the demand increases.

System model

The overall architecture of a FaaS system deployed at the edge consists of a two-tier model, namely the Devices tier and the Edge tier, as Figure 1 shows. The Devices tier comprises a set of devices (such as smartphones, IoT devices, cameras, etc.) that generate events. An event can be data generated by a sensor or a physical occurrence that requires the execution of certain code for data analysis or reaction on the edge nodes of the Edge tier. These edge nodes are capable of executing functions. In our system model, we assume that functions are triggered periodically, as each function is associated with a device/user that generates a continuous stream of events.

Two mechanisms are required to manage the Edge tier: a function scheduling mechanism that selects the node for the execution of a function, and an auto-scaling mechanism that manages the amount of resources available on the system at any time. In this paper, we propose a scheduling mechanism that selects the node for the execution of a function so as to ensure the required QoS while minimizing the energy consumption. To reach this objective, the proposed approach selects a node that has sufficient resources to guarantee the required QoS; however, it gives priority to nodes that are more energy efficient, i.e., those capable of handling functions with lower energy consumption. In addition, an auto-scaling approach is adopted to minimize the overall energy consumption of the platform. To this aim, we propose an auto-scaling mechanism that powers down idle edge devices during low loads and turns them on again when more resources are required.
To illustrate the considered system model, we consider a practical example: a video analysis application implemented on a FaaS platform. In this scenario, cameras continuously generate images to be analyzed. Every image is treated as an event and triggers the execution of a function, which analyzes the single frame or a set of frames and produces some results, e.g., the objects that are present.

It is important to highlight that, although this use case is considered in the performance evaluation, the proposed model is adaptable without changes to other scenarios employing FaaS edge computing platforms.

Energy-Efficient Resource Management

In this section, we present the proposed Energy-Efficient Resource Management mechanism (EERM). EERM includes both a scheduling mechanism to distribute function execution on different edge devices and an auto-scaling mechanism to turn the nodes available on the system on and off. The former enables faster and more efficient processing, reducing overall latency and improving system performance; the latter aims at minimizing the overall energy consumption of the platform.

The edge environment consists of multiple heterogeneous computing devices (edge nodes) which are responsible for running a set of functions. To ensure efficient resource allocation, each request must be assigned to an appropriate edge node. The selection criterion for determining the suitable node for processing a given request is based on the principle of minimizing energy consumption; in other words, the goal is to identify a node that can execute functions while consuming the least amount of energy. The resource allocation is performed periodically, and a continuous stream of events is considered.

We address two major issues in FaaS platforms at the edge by managing function execution and the energy usage of edge devices. The main objective is to assign requests to nodes that possess sufficient processing capacity while maintaining energy efficiency to minimize the overall energy consumption. The capacity of an edge node refers to the number of processing requests that the node can accept. To achieve more efficiency, the system powers down those nodes which are idle; put simply, whenever the system requires additional resources to run more functions, the idle nodes are reactivated. A minimal sketch of this power-management policy is given below.
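The following sketch illustrates this on/off (horizontal auto-scaling) policy. The Node fields and the choice to wake the most energy-efficient idle nodes first are our own illustrative assumptions; the paper does not prescribe this exact code.

```python
# Idle nodes are powered down during low load and reactivated when the
# active capacity no longer covers the demand.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    capacity: int           # requests the node can process per period
    energy_per_unit: float  # energy (J) consumed per workload unit
    active: bool = True
    load: int = 0           # workload assigned in the current period

def active_capacity(nodes):
    return sum(m.capacity for m in nodes if m.active)

def autoscale(nodes, demand):
    """Power nodes up/down so that active capacity covers `demand`."""
    # Wake the most energy-efficient idle nodes first, if needed.
    for n in sorted(nodes, key=lambda m: m.energy_per_unit):
        if active_capacity(nodes) >= demand:
            break
        if not n.active:
            n.active = True
    # Power down idle (unloaded) nodes whose capacity is not required.
    spare = active_capacity(nodes) - demand
    for n in sorted(nodes, key=lambda m: m.energy_per_unit, reverse=True):
        if n.active and n.load == 0 and spare >= n.capacity:
            n.active = False
            spare -= n.capacity
```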
The formulation of the model is established based on the following definitions. There is a set of edge nodes, representing the collection of available computing devices at the edge, and a set of requests for function execution that need to be processed. Let w_i denote the computational workload of the i-th request, expressed in processing units, and let e_j denote the energy consumed by the j-th node to process a single workload unit. Based on these definitions, the energy consumed when the workload of request i is processed on node j is given by:

E_ij = x_ij · w_i · e_j    (1)

where x_ij is a decision variable defined as:

x_ij = 1 if request i is assigned to node j, and x_ij = 0 otherwise.    (2)

The total energy consumption E to process all workloads is:

E = Σ_i Σ_j x_ij · w_i · e_j    (3)

Therefore, the goal is to minimize the total energy consumption due to processing workloads on the edge nodes. Given the above details, the objective function is:

minimize E = Σ_i Σ_j x_ij · w_i · e_j, subject to Σ_j x_ij = 1 for every request i, and Σ_i x_ij · w_i ≤ c_j for every node j    (4)

where c_j represents the capacity of edge node j, i.e., the number of requests that node j can process. Under this objective function, each request must be assigned to exactly one edge node, and the sum of the workloads assigned to a node must not exceed the capacity of that node. After receiving the current workload of a request in the Edge tier, the goal is to assign this request to the most efficient edge node for executing its functions. To achieve this, the proposed algorithm employs a three-stage approach. In the first stage, the algorithm identifies the nodes within the network that possess sufficient remaining capacity to handle the incoming request effectively. In the second stage, the system generates a list of eligible edge nodes with adequate capacity to accommodate the computational request and, to determine the most suitable node for processing the request, calculates the energy consumption for each node in the list using expression (1). The decision variable defined in expression (2) comes into play at this point, helping to select the most suitable edge node for the current request. As the request must be assigned to exactly one edge node, this decision variable identifies the most efficient node capable of accepting and processing the request. In the third stage, the system enhances overall efficiency by managing energy through horizontal auto-scaling of the edge nodes. This energy management strategy ensures that during periods of low load idle nodes are powered down, conserving energy; conversely, when the load is high and more edge devices are required, the algorithm activates idle nodes to provision additional resources, ensuring efficient performance.

We employ PuLP, a well-known open-source Python library for linear programming modeling. PuLP provides a user-friendly interface for formulating and solving linear programming problems, and it is compatible with various solvers including CPLEX [18]. A minimal PuLP sketch of the assignment model is given after the research questions below.

Performance evaluation

In this section, we carry out the performance evaluation of the proposed method by means of simulations. To this aim, the specific use case of image analysis presented in Section 3 is considered to ensure a more realistic evaluation. The objective of the simulations is to address several crucial research questions, which are as follows:

RQ.1 Which method can be employed to effectively minimize violations of Service Level Agreements (SLAs)?
RQ.2 Which method minimizes the energy consumption associated with request processing?
RQ.3 Which auto-scaling approach offers the quickest resource provisioning computation in terms of the run-time metric?
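As referenced above, the following is a minimal PuLP sketch of the assignment model in expressions (1)-(4). The toy workloads, energies and capacities are illustrative assumptions; the paper plugs a CPLEX solver into PuLP, while this sketch falls back on PuLP's bundled CBC solver.

```python
# Binary variables x[i][j] assign each request i to exactly one node j,
# node capacities are respected, and total energy sum(x*w*e) is minimized.
from pulp import LpProblem, LpMinimize, LpVariable, LpBinary, lpSum

w = [16, 30, 8]          # w_i: workload of request i (processing units)
e = [0.0028, 0.0045]     # e_j: energy (J) per workload unit on node j
c = [32, 60]             # c_j: capacity of node j

prob = LpProblem("energy_efficient_assignment", LpMinimize)
x = [[LpVariable(f"x_{i}_{j}", cat=LpBinary)
      for j in range(len(e))] for i in range(len(w))]

# Objective (3)/(4): total energy of all assignments.
prob += lpSum(x[i][j] * w[i] * e[j]
              for i in range(len(w)) for j in range(len(e)))

# Each request is assigned to exactly one node.
for i in range(len(w)):
    prob += lpSum(x[i][j] for j in range(len(e))) == 1

# Workload assigned to a node must not exceed its capacity.
for j in range(len(e)):
    prob += lpSum(x[i][j] * w[i] for i in range(len(w))) <= c[j]

prob.solve()  # default bundled CBC solver; CPLEX can be plugged in instead
assignment = {i: next(j for j in range(len(e)) if x[i][j].value() == 1)
              for i in range(len(w))}
print(assignment)
```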
The remainder of the section is organized into two sub-sections, Experimental Setup and Results. In Section 5.1 we present the terms of comparison adopted, the simulation methodology and the real-world dataset used, while in Section 5.2 we present the results of the experiments and discuss them w.r.t. the research questions.

5.1 Experimental setup

In our experiments, we compared the proposed approach with the approach of Silva et al. [15] and with the Smallest-First (S-F), Largest-First (L-F), and Energy-Aware (E-A) function scheduling approaches. The approach proposed by Silva et al. [15], adopted as a baseline, is specifically tailored to the use case of image analysis at the edge and aims at enhancing resource efficiency in the edge environment; it dynamically scales resources in response to workloads, making it an apt choice for comparative analysis. The Smallest-First technique selects the edge nodes for function allocation in ascending order of processing capacity, giving priority to nodes with lower capacity when executing functions. Conversely, the Largest-First technique sorts edge devices in descending order, prioritizing nodes with higher capacity for requests to run functions. In the Energy-Aware approach, requests are assigned to the nodes that exhibit the most efficient energy consumption during workload processing; this method tries to assign workload to nodes with higher capacity and lower energy consumption per request, to achieve better resource allocation with respect to the energy consumed by edge devices for function execution.

All simulations were conducted in an ad-hoc simulator written in Python. As detailed in Section 4, the proposed approach is implemented using a CPLEX solver through the PuLP library 1.

The simulations were run using two datasets, namely Madrid and Synthetic, which are well suited to the image analysis scenario and were provided in [15]. The Madrid dataset reports workload demands from traffic measuring points in Madrid, Spain, from September to October 2021. Each point provided the corresponding load to analyze an image in order to count the vehicles detected. According to the information provided in [15], the dataset includes approximately 30 million vehicle detections per month.

The Synthetic dataset, also provided by the authors of [15], was generated using TimeSynth, an open-source library for synthetic time series generation. These synthetic time series are used to simulate image analysis requests for vehicle license plate recognition. Specifically, during the experiments, images of vehicles are captured at specific frames per second (F/S) to recognize license plates.

Edge nodes with different capabilities are considered in the experiments. Table 1 summarizes the various edge devices considered, including their capabilities. For instance, the Jetson Nano, the cheapest one, can handle 16 F/S and consumes just 0.0028 J of energy for this processing. The simulations employed 17 edge nodes in total, with the following composition: 10 Nvidia Jetson Nano, 5 Nvidia Jetson Xavier NX, and 2 Nvidia Jetson AGX Xavier devices [15]. The simulations were executed on Windows 11 with an 11th Gen Intel Core i5-11400H 2.70 GHz CPU and 16 GB of RAM.

5.2 Results

To address RQ.1, we present the SLA violation percentages across both the Madrid and Synthetic datasets in Figure 2.
SLAs, which define agreed-upon service levels between providers and customers, are measured in terms of underprovisioned requests w.r.t. the total request volume. SLA violations occur when insufficient resources are allocated to execute functions [1], so the corresponding requests are underprovisioned. As Figure 2 shows, our proposed solution exhibits the smallest average underprovisioning percentage, i.e., ≈ 0.1%, on both datasets.

On average, our proposed approach consistently outperforms the alternative techniques by a significant margin, reducing the number of SLA violations by at least 78.1% w.r.t. the baselines. In fact, our proposed method fails to process only 6 requests for the Madrid dataset and 7 requests for the Synthetic dataset, out of 5856 and 5857, respectively. The best competitor, E-A, fails to process 32 requests for the Synthetic dataset and 224 for the Madrid dataset.

In response to RQ.1, it becomes evident that the other methods, unlike our proposed solution, exhibit similar ratios of underprovisioned resources. In contrast, our approach consistently aligns resource allocation with the requirements for executing functions at the edge, demonstrating its effectiveness in meeting SLAs.

To answer RQ.2, Figure 3 presents the average energy consumption of the edge nodes during function execution. Our proposed solution focuses on efficient function distribution considering energy consumption. In comparison to the alternative techniques applied to both datasets, our approach achieves a substantial reduction in energy consumption. As illustrated in the bar chart, the work by Silva et al. [15] (Baseline) does not consider the energy usage of edge nodes, so it consumes the highest amount of energy for resource provisioning compared to the other techniques. The Smallest-First and Largest-First techniques concentrate only on the capacity of the edge nodes, while the Energy-Aware technique focuses on energy consumption but still does not outperform the proposed approach.

To conclude on RQ.2, the proposed solution, using its two mechanisms, finds a more efficient way to execute functions at the edge and also manages the energy usage of the edge devices by powering them off during low loads. As can be seen in Figure 3, this technique significantly improves this metric compared to the other approaches, by at least 62.5% w.r.t. the baselines. The proposed solution, in fact, consumes 78.47 for processing requests with the Synthetic dataset and 148.10 for the Madrid dataset. In terms of energy consumption, the most competitive technique is E-A, which requires 471.14 to process requests on the Synthetic dataset and 395.01 on the Madrid dataset.
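For concreteness, the following minimal sketch shows how the two headline metrics above can be computed; the function names and the energy bookkeeping are illustrative assumptions, not the paper's simulator code.

```python
# SLA violation (underprovisioning) percentage and total energy consumed.
def sla_violation_pct(underprovisioned, total):
    """Share of requests that could not be allocated enough resources."""
    return 100.0 * underprovisioned / total

def total_energy(executions):
    """executions: iterable of (workload_units, energy_per_unit) pairs."""
    return sum(w * e for w, e in executions)

# e.g., the proposed approach on the Madrid dataset: 6 failures out of 5856
print(round(sla_violation_pct(6, 5856), 2))  # -> 0.1 (approximately 0.1%)
```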
Figure 4 shows the average run time for the execution of the considered approaches, i.e., the time required to compute the allocation of the functions to be executed, which addresses RQ.3. As expected, the proposed approach is the one that results in the longest execution time: while the other approaches can be executed in the order of milliseconds, the proposed approach requires almost a second to run. This can be explained by the fact that the proposed approach requires solving a linear programming model via a solver. This longer execution time, however, yields a more accurate solution that is beneficial in terms of the other metrics previously presented in this section. To conclude on RQ.3, the proposed solution results in a longer run-time than the competitors, which is suitable only in cases in which the allocation can be computed in advance, such as the periodic function invocation considered in this paper, rather than for real-time, per-invocation scheduling.

Conclusion

In this paper, we investigated the management of functions of the FaaS paradigm on edge computing platforms. In particular, we focused on the energy consumption of existing solutions, and we proposed a new strategy for selecting the most convenient edge devices to handle functions efficiently while keeping the quality of service. Our proposed solution packs function execution requests onto the edge resources to minimize the energy consumption, while satisfying their computational capacities to minimize the SLA violations. The experimental evaluation demonstrated that the proposed method successfully reduces service level agreement violations by at least 78.1%, and significantly lowers energy consumption by at least 62.5% on average, in relation to function execution on edge nodes.

For future work, we plan to design new strategies for real-time scheduling of function invocations, instead of batch processing. Moreover, we will target more complex execution platforms, where the available edge resources are not known beforehand but may appear and disappear dynamically.

Fig. 1. System model.

Fig. 2. Average ratio of underprovisioning requests for all techniques.

Fig. 3. Average energy consumption by edge nodes in function execution.

Table 1. Types of edge devices (nodes) and their capabilities [15].
2024-04-06T15:30:11.735Z
2023-12-04T00:00:00.000
{ "year": 2023, "sha1": "50bbade572cbc6d9dd67f34912f9e07217d240f2", "oa_license": "CCBY", "oa_url": "https://dl.acm.org/doi/pdf/10.1145/3603166.3632240", "oa_status": "HYBRID", "pdf_src": "ACM", "pdf_hash": "4a9f9ba2e4dc64a274ef157b0d6edca28470b70d", "s2fieldsofstudy": [ "Computer Science", "Engineering", "Environmental Science" ], "extfieldsofstudy": [] }
255960653
pes2o/s2orc
v3-fos-license
Detection of dengue virus serotypes 1, 2 and 3 in selected regions of Kenya: 2011-2014

Dengue fever, a mosquito-borne disease, is associated with illness of varying severity in countries in the tropics and subtropics. Dengue cases continue to be detected more frequently and its geographic range continues to expand. We report the largest documented laboratory-confirmed circulation of dengue virus in parts of Kenya since 1982. From September 2011 to December 2014, 868 samples from febrile patients were received from hospitals in Nairobi and in northern and coastal Kenya. The immunoglobulin M enzyme-linked immunosorbent assay (IgM ELISA) was used to test for the presence of IgM antibodies against dengue, yellow fever, West Nile and Zika viruses. Reverse transcription polymerase chain reaction (RT-PCR) utilizing flavivirus family, yellow fever, West Nile, dengue consensus and dengue serotype-specific primers was used to detect acute arbovirus infections and determine the infecting serotypes. Representative PCR-positive samples for each of the three dengue serotypes detected were sequenced to confirm circulation of the various serotypes. Forty percent (345/868) of the samples tested positive for dengue by either IgM ELISA (14.6%) or RT-PCR (25.1%). Three dengue serotypes, DENV1-3, were detected by serotype-specific RT-PCR and sequencing, with their numbers varying from year to year and by region. The overall predominant serotype detected from 2011 to 2014 was DENV1, accounting for 44% (96/218) of all the serotypes detected, followed by DENV2 at 38.5% (84/218) and DENV3 at 17.4% (38/218). Yellow fever, West Nile and Zika were not detected in any of the samples tested. From 2011 to 2014, serotypes 1, 2 and 3 were detected in the northern and coastal parts of Kenya, confirming the occurrence of cases and active circulation of dengue in parts of the country. These results document three circulating serotypes and highlight the need to establish active dengue surveillance to continuously detect cases and circulating serotypes and to determine the dengue fever disease burden in the country and region.

Background

Dengue fever is regarded as the most important re-emerging mosquito-borne disease globally and is endemic in more than 125 countries worldwide [1]. It is an acute systemic viral illness that manifests with varying degrees of severity, ranging from a mild febrile illness to severe hemorrhagic presentations, dengue hemorrhagic fever (DHF) or dengue shock syndrome (DSS). Dengue viruses are mosquito-borne members of the Flavivirus genus, family Flaviviridae, first isolated in 1943 and 1945 in Japan and Hawaii, respectively [2]. Dengue fever is caused by infection with one of four distinct dengue serotypes (DENV1 to DENV4) that are genetically related but antigenically distinct, with extensive genetic diversity within the different serotypes. Immunity is serotype specific and there is no cross-protection between the serotypes [3]. It is estimated that 96 million apparent dengue infections occurred worldwide in 2010, with most of these reported in Asia, which bore 70% of the global burden, while Africa bore 16%. There are more than 390 million cases annually, of which 294 million may be inapparent infections not detected by the public health system [4]. The disease has existed for centuries: the Chinese documented symptoms compatible with dengue in 992 AD and associated the disease with flying insects and water [5].
It was not until the 20th century that the viral etiology and the role of mosquitoes in transmission were determined [1]. Aedes aegypti (A. aegypti), the main arthropod vector for dengue, has its origins in Africa and is widespread in Africa and the tropics. The mosquito has a high affinity for human blood, a high adaptation to urban dwelling in close proximity to human settlements, and a high vectorial capacity for the four dengue serotypes [6]. Rapid urbanization and globalization are associated with the expansion of dengue fever in the 20th century [7]. The mosquito breeds in and around houses in regular water containers or discarded water-holding vessels. Due to its limited flight range, the female A. aegypti persists in a domesticated environment, contributing to the spread of dengue through high human-mosquito-human contact within communities [8].

The first documented dengue outbreak in Africa occurred in Durban, South Africa in 1927, as determined by a retrospective serological study [9]. Subsequently, dengue virus isolations in Africa have been reported in 1964-68 in Nigeria (DENV1 and 2) [10], in 1983-85 in Mozambique (DENV3) [11], in 1984 in Sudan (DENV1 and 2) [12] and in 1986 in Senegal (DENV4) [13]. In the past five decades, sporadic or epidemic cases of dengue have been increasing in sub-Saharan Africa, with 22 countries reporting outbreaks. East Africa has experienced the largest burden in this period, with outbreaks occurring in the island nations of Réunion (1977-1978), the Seychelles (1977-1979), the Comoros (1992-1993) and Cape Verde (2009). In addition, Djibouti recorded a large outbreak in 1992-1993. Approximately 300,000 cases were detected in these five outbreaks. Dengue is currently endemic in 34 African countries, with transmission being reported through local disease transmission, detection of laboratory-confirmed cases, and detection among travelers returning to countries not endemic for dengue [14].

In Kenya, the first documented dengue outbreak (DENV2) occurred in 1982 in the coastal cities of Malindi and Mombasa and was thought to have spread from an outbreak that had occurred in the Seychelles in 1979-1980 [15]. Subsequently, although dengue outbreaks were documented in the neighboring countries of Somalia, Djibouti and South Sudan [14,16], only rare sporadic cases of DENV2 were detected in the coastal town of Mombasa. Seroprevalence studies performed in Kenya have indicated a high prevalence of dengue in coastal Malindi at 34.17% and a lower prevalence in western Busia at 1.96% [17]. Due to the lack of active surveillance and reporting structures for dengue infections in much of East Africa, there is a lack of appreciation of the burden of the disease in the region, and detection of cases is often hampered by the non-specific clinical manifestation of the illness, which mimics other common fever-causing illnesses like malaria and typhoid fever, and by the unavailability of diagnostic capabilities in most health centers. In the continued absence of a viable/approved vaccine, the prevention and control of dengue currently rely on vector control methods and early detection of cases through continued surveillance, triggering mosquito control activities to alleviate human suffering and the emergence of severe disease caused by widespread transmission of multiple serotypes.

In September 2011, increased numbers of acute febrile illness cases were reported in Mandera, in northeastern Kenya bordering Somalia.
In the subsequent months and years, the viral hemorrhagic fever (VHF) laboratory at the Kenya Medical Research Institute (KEMRI) continued to receive samples from northeastern Kenya and from Mombasa on the Kenyan coast for dengue fever testing. Laboratory testing was conducted with the support of the Global Emerging Infections Surveillance (GEIS) program of the United States Army Medical Research Directorate Kenya (USAMRD-K). The laboratory responds to reports of suspected arbovirus/VHF infections in Kenya and, on request of the World Health Organization (WHO), in other countries neighboring Kenya that lack laboratory capacity, by performing diagnostic testing on samples from suspected cases of arbovirus and VHF infections. From September 2011 to December 2014, as part of the Kenya Ministry of Health response effort, the laboratory received samples from diverse private and government health facilities in northeastern Kenya in Mandera and Wajir counties, hospitals in the capital city of Nairobi, and both private and government hospitals in Mombasa, Malindi and Lamu along the Kenyan coast (Fig. 1).

Study population and sample collection

Samples were collected from patients of both sexes and all ages who presented with a sudden onset of fever accompanied by body aches. Following the detection of the initial dengue cases, a clinical working case definition was developed by the Division of Disease Surveillance and Response, Ministry of Health, and sent to all health facilities in the affected and high-risk areas.

Sample collection and testing

Venous blood was collected in vacutainer tubes without anticoagulant, using standard phlebotomy practices, from patients who met the case definition. Samples were transported to the laboratory in cold storage, where they were centrifuged and serum obtained for testing. All samples were tested using the IgM antibody capture enzyme-linked immunosorbent assay (MAC-ELISA) to detect IgM antibodies against dengue, yellow fever and West Nile. A subset of the samples was tested for exposure to Zika using a commercial IgM kit. Flavivirus family, yellow fever, West Nile, dengue consensus and dengue serotype-specific RT-PCR primers were used to detect acute infections and to determine the infecting serotype. Representative RT-PCR-positive samples for each of the three dengue serotypes detected were sequenced to confirm circulation of the various serotypes.

Laboratory analysis

MAC-enzyme linked immunosorbent assay

The IgM antibody capture ELISA (MAC-ELISA) used to detect the presence of IgM antibodies was a laboratory-derived test (LDT) provided by the Diagnostic Systems Division, United States Army Medical Research Institute of Infectious Diseases (USAMRIID). A 96-well Immunolon plate (Nunc, Denmark) was coated with a commercial anti-human IgM antibody that reacts specifically with human IgM (goat anti-human IgM, Kirkegaard and Perry Laboratories, Gaithersburg, MD, USA) and incubated at +4°C for 12-16 h. The plate was washed using a wash buffer (PBS, pH 7.4, 0.01 Merthiolate, 0.1 Tween-20). This was followed by addition of the dengue IgM positive control, negative control and samples, all diluted 1:100 in diluent buffer (PBS, pH 7.4, 0.01 Merthiolate, 0.1 Tween-20, 5% skim milk). Plates were incubated at 37°C for one hour.
The plate was then washed, and 100 μl of dengue antigen solution, consisting of equal amounts of inactivated lyophilized dengue virus 1-4, was added to one half of the test wells, while a corresponding negative antigen (at the same dilution) was added to the other half. The dengue antigens used in the assay were obtained from various sources: dengue 1 - Hawaii, isolated in 1944 from a human [18]; dengue 2 - New Guinea C, isolated in 1944 from a human [19]; dengue 3 - H87, Philippines, isolated in 1956 from a human; and dengue 4 - H241, Philippines, isolated in 1956 from a human [20]. The IgM antigens were supernatants from Vero cells infected with the appropriate isolate; the supernatants were inactivated using 0.3% beta-propiolactone, cobalt-irradiated with 3 million rads, and safety tested. The plate was incubated for one hour at 37°C. After washing, 100 μl of dengue-specific detector antibody (anti-dengue hyperimmune mouse ascitic fluid) was added to each well and incubated for one hour at 37°C. The plate was washed, 100 μl of HRP-labelled goat anti-mouse IgG (heavy- and light-chain specific) conjugate that reacts specifically with mouse IgG (Kirkegaard and Perry, catalog 074-1806) was added to all wells, and the plate was incubated for one hour at 37°C. The plate was then washed, 100 μl of ABTS substrate (Kirkegaard and Perry, Cat. No. N8 50-62-00, Gaithersburg, MD) was added, and the plate was incubated at 37°C for 30 min. The reaction was visualized as a green colour, and the optical density (OD) was read with a spectrophotometer at 405 nm. The adjusted OD was calculated by subtracting the OD of the negative/mock antigen-coated wells from that of the positive antigen-coated wells. The OD cutoff was calculated as the mean of the adjusted ODs of the negative control sera plus three times their standard deviation. All samples were also tested for IgM antibodies against West Nile and yellow fever using the same procedure as outlined for dengue above, but with variations in the positive controls, the positive and mock antigens, and the virus-specific detector antibodies. The yellow fever antigen used was obtained from the Asibi strain, isolated from a human in Ghana in 1927 [21], while the West Nile antigen was the Eg101 strain, isolated in Egypt in 1951 [22].

To rule out cross-reactivity with Zika, 15 randomly picked samples that tested positive for dengue IgM antibodies were screened using the Euroimmun Anti-Zika Virus IgM ELISA kit (Euroimmun, Lübeck, Germany) following the manufacturer's instructions. Briefly, serum was diluted 1:101 in sample buffer, incubated at room temperature for 10 min, added to the appropriate microplate wells and incubated at 37°C for 1 h. This was followed by addition of a peroxidase-labelled anti-human IgM conjugate, substrate, and finally a stop solution, with wash steps between incubations and adherence to the appropriate incubation temperatures for each step. The optical density (OD) was measured using an ELx800™ absorbance microplate reader (BioTek, Winooski, Vermont, USA). A cutoff ratio was calculated, and values <0.8 were regarded as negative, ≥0.8 to <1.1 as borderline, and ≥1.1 as positive [23].

Nucleic acid (RNA) extraction

Viral RNA was extracted using the QIAamp Viral RNA Mini Kit (QIAGEN, Hilden, Germany) according to the manufacturer's protocol. A final volume of 60 μL of RNA was obtained and used as a template for cDNA synthesis and the subsequent PCR reactions.
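As a rough illustration of the MAC-ELISA readout logic described above, the following sketch computes the adjusted OD and the mean-plus-three-standard-deviations cutoff; the OD values and function names are hypothetical.

```python
# Adjusted OD = OD of viral-antigen well minus OD of mock-antigen well;
# cutoff = mean adjusted OD of negative-control sera + 3 * their SD.
import statistics

def adjusted_od(od_viral_antigen, od_mock_antigen):
    return od_viral_antigen - od_mock_antigen

def igm_positive(sample_adj_od, negative_control_adj_ods):
    cutoff = (statistics.mean(negative_control_adj_ods)
              + 3 * statistics.stdev(negative_control_adj_ods))
    return sample_adj_od > cutoff

# Example with hypothetical OD405 readings:
neg_controls = [0.05, 0.07, 0.04, 0.06]
sample = adjusted_od(0.52, 0.08)
print(igm_positive(sample, neg_controls))  # True if above mean + 3*SD
```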
cDNA synthesis from viral RNA
To convert the extracted RNA to cDNA, 10 μL of the extracted sample viral RNA was mixed with 2 μL of 50 ng/μL random hexamer primer in a 0.2 ml PCR tube. The mixture was incubated in a thermocycler for 10 min at 70°C. The reaction was stopped and the following components were added to the PCR tube: 4 μL of 5X First Strand Buffer (Invitrogen), 1 μl of 10 mM dNTPs, 2 μl of 100 mM DTT, 0.25 μl of RNase Inhibitor (40 U/μl) and 1 μl of SuperScript III Reverse Transcriptase (200 U/μl). The mixture was then placed in a thermocycler set at the following conditions: 25°C for 15 min, 50°C for 50 min and 70°C for 15 min, with a 4°C hold temperature. A total of 20 μL of cDNA was obtained. The PCR amplification of targeted viral sequences in the cDNA was performed in a 25-μL reaction containing 12.5 μl of AmpliTaq Gold 360 PCR master mix (Applied Biosystems, USA), 50 picomoles each of the forward and reverse primers, 2 μl of the cDNA and 9.5 μl of DEPC-treated water to top up to 25 μl. Samples were first tested using flavivirus family primers. Samples testing positive with the flavivirus family primers were further tested with yellow fever, West Nile and consensus dengue primers D1 and D2. Samples testing positive with the dengue consensus primers, which target the E/NS1 junction of the virus genome, were further tested for the four dengue serotypes using the appropriate primers (Table 1). The primer sequences were used to detect exposure to the various arboviruses using the amplification conditions described in the corresponding references for each primer listed. A positive control cDNA and a negative control were included in the setup of all PCR reactions. Electrophoresis of the amplified DNA products was done on a 1-2 % agarose gel in 1× Tris-borate EDTA buffer stained with ethidium bromide. The PCR product bands were visualized with a UV transilluminator and recorded using a gel photo imaging system.

Sequencing and phylogenetic analysis
Amplified target DNA bands were purified either directly from the PCR reaction or from the gel using the Wizard® SV Gel and PCR Clean-Up System kit (Promega, Madison, WI, USA). Sequencing was outsourced and performed using an ABI-PRISM 3130 Genetic Analyzer (Applied Biosystems, Foster City, CA). Both forward and reverse strands were sequenced, and the raw chromatogram files were edited for bad calls using DNA Baser v.3.0. The sequences were compared with available sequences using the Basic Local Alignment Search Tool and the GenBank database to confirm the identity of the virus isolates. The sequences were aligned using Muscle [24] in Molecular Evolutionary Genetics Analysis (MEGA) software version 7, which was used for phylogenetic analysis with the maximum likelihood statistical method tested with 1000 bootstrap replicates based on the Tamura-Nei model [25]. The phylogenetic tree was inferred in MEGA version 7. A total of 15 samples (5 for each of serotypes DENV1-3) were sequenced.

Results
Three dengue serotypes were detected during this period (DENV1-3), with no case of DENV4 being detected. The serotypes detected varied by year and region, with DENV1 accounting for 44 % (96/218) of the three serotypes detected, followed by DENV2 at 38.5 % (84/218) and DENV3 at 17.4 % (38/218), detected in both the northern and coastal regions of Kenya (Fig. 2). Samples from the coast predominantly tested positive for DENV1 (88/160), followed by DENV2 (69/160), with the two serotypes accounting for 98 % of all the serotypes from the coast.
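As a quick arithmetic check, the serotype shares quoted above follow directly from the reported counts (a minimal sketch; the dictionary layout is ours):

```python
# RT-PCR serotype counts reported for 2011-2014 (96 + 84 + 38 = 218 typed positives).
serotype_counts = {"DENV1": 96, "DENV2": 84, "DENV3": 38, "DENV4": 0}
total = sum(serotype_counts.values())

for serotype, n in serotype_counts.items():
    print(f"{serotype}: {n}/{total} = {100 * n / total:.1f} %")
# DENV1 44.0 %, DENV2 38.5 %, DENV3 17.4 %, DENV4 0.0 %
```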
In northeastern Kenya, DENV3 was dominant, accounting for 72 % (28/39) of all the DENV serotypes detected in the region during this reporting period (Fig. 2). In 2011, 129 samples were received from six facilities, with most of the samples coming between September and November from the northeastern region and Nairobi, accounting for 15 % (129/868) of all the samples tested from 2011 to 2014. Overall, 46.5 % (60/129) of the samples received in 2011 tested positive for dengue (Fig. 2). In 2012 there was a drop in the number of samples received. Only 32 samples were received from three facilities, mostly between February and April from the northeastern region. It is not clear what factors were associated with the sudden reduction in the number of samples received at the laboratory. We speculate that it may not have been associated with a sudden drop in patients with fever but rather with the public health response measures that followed the detection of cases in 2011. Detection of cases in 2011 was followed by the dispatch of response teams tasked with initiating community sensitization on infection prevention and mosquito control activities; these teams also supplied local health clinics with rapid dengue diagnostic kits, and hence samples were tested at the respective sites. The samples accounted for 4 % (32/868) of the total samples received, and 53 % (17/32) tested positive for dengue; 34 % (11/32) were positive by RT-PCR, with DENV2 accounting for 82 % (9/11) and DENV3 for 18 % (2/11) of the PCR positives (Fig. 2). In 2013, a total of 567 samples were received from seven facilities, with most coming between April and May, accounting for 65 % of all the samples received (Fig. 2). All positive patient samples collected from Nairobi had a travel history to the northern, eastern, or coastal parts of Kenya, where active transmission was ongoing during the surveillance period. DENV1-3 were detected in Nairobi during this period, but no evidence of active transmission was documented. Co-infection with more than one serotype was detected in two samples: co-infection with DENV2 and 3 was detected in a sample collected in Mandera in the early stages of the outbreak in 2011 and in a sample from Mombasa in 2013.

Of the 15 samples sequenced, 12 gave good-quality reads, and 5 could be sequenced across the full 511-base-pair region containing the capsid and pre-M genes. The remaining seven had good-quality sequences for the capsid gene. The 12 sequences were trimmed to retain the capsid gene, which was used for phylogenetic analysis. Phylogenetic analysis of the capsid gene sequences for representative selected PCR-positive samples from the cases revealed that the DENV1 isolates from Mombasa (2013) showed close relatedness to a DENV1 isolate from Djibouti isolated in 1998. All the DENV2 isolates from Kenya detected in Mombasa in 2013 showed close relatedness to isolates detected in different parts of Asia. All DENV3 isolates, two from Mandera (2011), two from Mombasa (2013) and one from Wajir (2014), were closely related to DENV3 isolates from Pakistan, China and India obtained in 2006, 2013 and 2009, respectively (Fig. 3).

Discussion
Although dengue is considered endemic in Africa and Kenya [14], there has been limited information documenting active dengue virus transmission among the human population in Kenya since the early 1980s. Currently available information has relied on serological surveys [17] and has not described the circulating serotypes in the country. With 50 % of the world's population living in dengue-endemic countries, Africa included, the continent continues to face challenges in case detection and reporting, resulting in limited information available for understanding the true disease burden and economic impact of dengue. Lack of awareness of dengue among health workers, erratic treatment-seeking behavior among populations, the presence of symptomatically similar illnesses like malaria and typhoid, low case fatality rates, the limited availability of appropriate diagnostic systems, and underreporting by existing public health systems all contribute to the underrecognition of dengue on the continent [1,26]. Results obtained during this 4-year period detected multiple dengue serotypes, provided clear evidence of active dengue transmission, and identified the serotypes circulating in parts of northeastern and coastal Kenya. All four dengue serotypes have been detected in Africa [14]. Our laboratory-based results showed that in parts of Kenya DENV1 and 2 were most dominant. This is consistent with literature suggesting that most epidemics in Africa are caused by serotypes 1 and 2 [14,16]. Infection with DENV4 is less common, but it has been documented in parts of Africa [13] and in Europe among travelers returning from Africa [27]. DENV4 was not detected in any of the 868 cases tested in this period. It is unclear why DENV4 was not in circulation. Since the serotype is associated with mild clinical disease, the absence of complications may have resulted in patients not seeking treatment; hence it would often go undetected where it occurs [28]. Detection of the first dengue cases in northeastern Kenya in September 2011 was preceded by dengue detection in samples from Mogadishu, Somalia in February 2011, which suggested that there was active transmission of dengue ongoing in Somalia. From the Somali samples, three serotypes (DENV1-3) were detected. In the Kenyan 2011 cases, only DENV3 was detected in samples from Mandera in northeastern Kenya. By 2013, DENV1-3 were being detected in samples from the northern part of Kenya. It is plausible that the cases in Mandera, on the border with Somalia, resulted from infected travelers from Somalia. Kenya and Somalia share a long, porous border where communities freely interact in search of pasture and other economic activities. Though it may be assumed that the infection spread from Somalia into Kenya, it is not clear why there was a six-month gap between the Somali cases and the first detection in northeastern Kenya. It may be that the first cases went undetected or were misdiagnosed as malaria. Considering that no severe dengue infections were detected in Kenya and that the infections are self-limiting, the initial cases may have resolved, only to be detected much later when the virus affected large populations concentrated in the major town of Mandera. All three serotypes detected in Somalia were also detected in northeastern Kenya. Somalia is currently hosting peacekeeping forces from various parts of the world. These forces present a naïve population, and several outbreaks among peacekeeping forces have been documented [29]. Cross-border dengue infection is of concern to many countries, since it is considered a major source of dengue spread [30]. Urbanization and infrastructure connectivity have been shown to be major factors facilitating the spread of dengue infections between affected and non-affected areas [6].

Mandera is home to the Kenyan Somali ethnic community, who practice pastoral farming but live in an urban setting in Mandera town. The town lacks piped water, relying on water collected from a nearby river or occasional rainfall. The water is stored in large concrete water cisterns and other artificial containers that are perfect breeding sites for Aedes aegypti, the primary vector of dengue viruses. Since the first detection of dengue in 1982, coastal Kenya has long been suspected of being a dengue-endemic zone. Numerous studies have attempted to show dengue circulation in human and vector populations and the presence of competent vectors [17,31]. This study documented the circulation of multiple dengue serotypes (DENV1-3) in Kenya and confirmed the presence of ongoing virus transmission. Dengue fever cases were identified in Nairobi, the capital city of Kenya; however, all cases had a prior history of recent travel to the dengue-affected areas of northeastern and coastal Kenya. Despite the detection of acute cases in Nairobi, there was no evidence of active dengue transmission. It is not clear why this was so, but we speculate that the number of acute cases (eight) could have been too few and too spread out for the establishment of active local transmission. In addition, possible inherent differences in the vector competence for dengue virus of the Nairobi Ae. aegypti mosquito population compared with populations in other regions with active cases, coupled with other environmental factors, may have played a role in the lack of establishment of active transmission [31]. No case of DHF/DSS was detected during the surveillance period despite the co-circulation of multiple serotypes (DENV1-3), a phenomenon commonly associated with DHF/DSS. The primary infecting serotype determines the severity of the infection. Primary infections with DENV1 and 3 tend to cause more severe clinical disease manifestations, while DENV2 and 4 are associated with increased severity when they occur as secondary infections [32]. Co-infection with more than one serotype was detected in two samples: co-infection with DENV2 and 3 was detected in a sample collected in Mandera in the early stages of the outbreak in 2011, and a sample from Mombasa with co-infection of DENV2 and 3 was detected in 2013. Due to logistical constraints, the laboratory was not able to ascertain the outcomes of these patients. Previous studies have shown that race may also play a role in offering partial protection against severe forms of dengue: genetic polymorphisms that offer partial protection against severe forms of dengue have been identified in people of African descent [33,34]. This could have played a role here but cannot be substantiated from laboratory-based surveillance. Clinical outcomes of dengue cases may be influenced by the circulation of multiple DENV serotypes, which is considered a factor in the reemergence of dengue hemorrhagic fever [35]. Co-circulation of various DENV serotypes is well documented as a frequent occurrence in various parts of the world. The outbreak in Kenya was characterized by the detection of multiple serotypes, with the predominant serotype being DENV1, followed by DENV2 and then DENV3, which is similar to most dengue outbreaks detected globally, where multiple serotypes are detected [36,37].

The detection of multiple dengue serotypes in Kenya with close relatedness to isolates obtained in other parts of Africa, South and Southeast Asia shows the continued movement and the wide geographic range of the dengue serotypes. Only the DENV1 isolates showed any close relatedness to an African isolate, one from Djibouti (AF298808), which has itself been shown to be more genetically related to Asian isolates than to African isolates [38]. All DENV2 and 3 isolates showed relatedness to Asian isolates, indicating transmission and sustenance in countries away from the initial geographic origin. Over the last decade, Kenya has developed into a major air and sea port transport hub in the region, connecting the Asian and African continents for commercial and tourist purposes. Increased travel between affected and non-affected areas constitutes a constant threat, with travelers acting as vehicles of disease spread. In addition, rapid urbanization and globalization are associated with the expansion of dengue transmission by providing a conducive environment for the mosquito vector [39,40]. Of concern were the 60 % of the samples that tested negative for dengue, West Nile and yellow fever viruses by IgM ELISA and RT-PCR despite being collected from patients presenting with fever. This demonstrates the need to constantly review and make available comprehensive differential disease diagnostic panels at health facilities where possible and at the testing laboratories. This will enhance the detection of underlying or co-circulating reemerging and emerging disease threats caused by parasitic, viral, bacterial or other pathogens associated with febrile illness in human populations. An opportunity to determine the etiology of arbovirus infections is often missed, as fevers caused by arboviruses may be misdiagnosed as malaria or vice versa. In addition, overlaps in geographical locations and concurrent infections with arboviruses from the same or different families are well documented in various parts of the world, including Africa [40][41][42][43][44][45]. This highlights the need for continued vigilance and review of the existing testing algorithms for diseases associated with febrile manifestations. In this reporting period, we were only able to screen a small subset of samples (12 % of the dengue IgM positives) for cross-reactivity with Zika virus, for logistical reasons. Although all the samples tested negative for Zika IgM antibodies, our results may be biased towards dengue, as we were not able to screen for Zika in all the samples that tested positive for dengue IgM antibodies. As dengue becomes endemic in Kenya, health care providers are increasingly aware of the need to quickly detect infection and provide appropriate care to patients. The availability of rapid diagnostic kits at health facilities has resulted in a reduced flow of samples to the KEMRI laboratory, but cases continue to be detected in the northern and coastal regions of Kenya.

Conclusion
Confirmatory laboratory diagnosis in Kenya facilitated the detection of dengue virus circulation in the northern and coastal regions of Kenya and in the capital city, Nairobi. Early laboratory detection allows clinicians to institute supportive treatment for a better prognosis. There is a need to establish ongoing dengue surveillance to continuously detect outbreaks and the serotypes circulating, and to determine the dengue fever disease burden in the region. Seasonal variation should also be established to identify high-risk times and facilitate appropriate public health responses. Circulation of multiple serotypes may also lead to increased cases of severe forms of dengue.
Bottom-Up Construction of the Interaction between Janus Particles

While the interaction between two uniformly charged spheres—viz., colloids—is well known, the interaction between nonuniformly charged spheres such as Janus particles is not. Specifically, the Derjaguin approximation relates the potential energy between two spherical particles to the interaction energy Vpl per unit area between two planar surfaces. The formalism has been extended to obtain a quadrature expression for the screened electrostatic interaction between Janus colloids with variable relative orientations. The interaction is decomposed into three zones in the parametric space, distinguished by their azimuthal symmetry. Different specific situations are examined to estimate the contributions of these zones to the total energy. The effective potential Vpl is renormalized such that the resulting potential energy is identical to the actual one for the most preferable relative orientations between the Janus particles. The potential energy as a function of the separation distance and the mutual orientation of a pair of particles compares favorably between the analytical (but approximate) form and the rigorous point-wise computational model used earlier. Coarse-grained models of Janus particles can thus implement this potential model efficiently without loss of generality.

S1 Calculation of χ1
For clarity, a part of Fig. 4 in the main text is redrawn here as Fig. S1 so as to emphasize the annular ring that is cut from the surface of the sphere. To derive Eq. (14), either the angle β or δ = (π − β)/2 must be determined first. The radius of the cross-section ring BB1 can be readily found as

a′ = a cos γ.  (S1)

Two points S and S′ on this ring, which separate the positive and negative sections, lie on the sphere and at the same time on two planes: the vertical BB1 plane and the dividing plane, which is defined by the orthogonality equation for the vectors r = (x, y, z) and n1 = (cos θ1, 0, sin θ1). The z-coordinate of points S and S′ can be found from the system of equations (S2)-(S4). Since χ1 from Eq. (12) can also be written in terms of the angle δ, where sin δ = |z|/a′, the required expression for χ1 follows.

Figure S1: Particle 1 (left) and its cross section BB1 (right). The vertical position z of the points S and S′, which separate the oppositely charged sections of the ring, is necessary for finding the values of the angles δ and β.

S2 Calculation of χ2
Figure S2: Cross sections AA1 (second from left) and BB1 (second from right). The image at the center represents the union of the corresponding AA1 and BB1 cross-sections and highlights the two overlapping outer annuli. Therein, fully red or blue segments correspond to repulsion, whereas dual-colored segments correspond to attraction.

The multiplier χ2 equals the difference of two parts of the adjacent rings: the repulsive one, where the like charges face each other, and the attractive one, where the opposite charges face each other. The corresponding expression follows from the chart presented in Fig. S2, analogously to the derivation of Eq. (S8).

S3 Calculation of Δχ2
Figure S3: The upper panel corresponds to the same configuration as in Fig. S2. In the lower panel, the orientation of particle 2 is changed: the vector n2 is turned around the x-axis by the angle φ. The circle at the center of the lower panels shows the change in the overlap of the annuli AA1 and BB1 when the latter is turned by the angle φ.

In this section we calculate corrections to the function χ2, the only one that is sensitive to rotations around the x-axis. We should mention here that the result of these rotations depends only on the absolute value of the angle φ (Fig. S3). Since β1,2/2 = π/2 − δ1,2 (see Fig. S2), the corrections follow from Eq. (S7); the last equalities in Eqs. (S16) and (S18) are obtained by using the relation π/2 − sin⁻¹(x) = cos⁻¹(x).
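The ring geometry used in these derivations reduces to two elementary relations: Eq. (S1), sin δ = |z|/a′, and δ = (π − β)/2. The following minimal Python sketch is purely illustrative (the function names are ours, and a′ is assumed to denote the radius of the cross-section ring):

```python
import numpy as np

def ring_radius(a, gamma):
    # Eq. (S1): radius a' of the cross-section ring BB1 on a sphere of radius a.
    return a * np.cos(gamma)

def delta_beta(a, gamma, z):
    # sin(delta) = |z|/a' combined with delta = (pi - beta)/2 gives beta = pi - 2*delta.
    a_prime = ring_radius(a, gamma)
    delta = np.arcsin(abs(z) / a_prime)  # valid only for |z| <= a'
    return delta, np.pi - 2.0 * delta

# Example: unit sphere, gamma = 30 degrees, ring height z = 0.4.
delta, beta = delta_beta(1.0, np.radians(30.0), 0.4)
print(np.degrees(delta), np.degrees(beta))  # ~27.5 and ~125.0 degrees
```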
Figure S6 illustrates the azimuthal dependence of the electrostatic potential at several selected fixed values of θ1 and variable θ2. At the angle θ1 = 90°, the modified-DA is at its worst. However, the deviations tend to be located only at the angles θ2 at which the edges of the dividing planes of the particles are directly opposed. They are also significantly reduced at angles θ1 only 5° from 90°.

S5 The computational times
Table S1 lists the relative computational times required for the calculation of 100 points on the curves presented in Fig. 8 in the main text using the modified-DA and the PW model for different values of the screening length.

Table S1: The relative timings of the calculation of the potential of mean force at 100 different separations between two Dipolar Janus (DJ) particles using either the modified-DA or the PW interaction potentials. In both cases, the reported calculations were performed on a Linux machine with an Intel Xeon Gold 6248 "Cascade Lake" processor and 512 GB of DDR4-2933 memory running at a 2.50 GHz clock speed.

Figure S5: Another representation of the data shown in Fig. 7 in the main text, with the panels grouped by the values of κ. The angle θ1 acquires the same values as in Fig. 7 except θ1 = 45°, since the results for this value are almost indistinguishable (see Table 1 in the main text).
New Functions of Vav Family Proteins in Cardiovascular Biology, Skeletal Muscle, and the Nervous System

Simple Summary
In this review, we provide information on the role of Vav proteins, a group of signaling molecules that act as both Rho GTPase activators and adaptor molecules, in the cardiovascular system, skeletal muscle, and the nervous system. We also describe how these functions impact on other physiological and pathological processes such as sympathoregulation, blood pressure regulation, systemic metabolism, and metabolic syndrome.

Abstract
Vav proteins act as tyrosine phosphorylation-regulated guanosine nucleotide exchange factors for Rho GTPases and as molecular scaffolds. In mammals, this family of signaling proteins is composed of three members (Vav1, Vav2, Vav3) that work downstream of protein tyrosine kinases in a wide variety of cellular processes. Recent work with genetically modified mouse models has revealed that these proteins play key signaling roles in vascular smooth and skeletal muscle cells, specific neuronal subtypes, and glia cells. These functions, in turn, ensure the proper regulation of blood pressure levels, skeletal muscle mass, axonal wiring, and fiber myelination events as well as systemic metabolic balance. The study of these mice has also led to the discovery of new physiological interconnections among tissues that contribute to the ontogeny and progression of different pathologies such as, for example, hypertension, cardiovascular disease, and metabolic syndrome. Here, we provide an integrated view of all these new Vav family-dependent signaling and physiological functions.

Introduction
The Vav family is a group of signal transduction molecules that work as GDP/GTP exchange factors (GEFs) for GTPases of the Rho subfamily and, in some other cases, as adaptor-like molecules. This family is composed of single representatives in invertebrates (generically designated as Vav proteins in each species) and three members in most vertebrates (Vav1, Vav2, Vav3). However, depending on the phylogenetic stage, reduced and increased numbers of Vav family proteins can be found in some species [1,2]. The first member of this family was discovered in Mariano Barbacid's lab in 1989 due to the spurious stimulation of its transforming activity during transfections of a human tumor genomic DNA in NIH3T3 cells [3]. Given that this oncogene was the sixth one isolated in that lab, it was named after the sixth letter of the Hebrew alphabet (Vav). The Vav2 and Vav3 proteins were identified from 1995 to 2000 [4][5][6]. The explosion of genomic data derived from genome sequencing efforts subsequently led to the discovery of the rest of the Vav family members present in both vertebrate and invertebrate species [1]. Mammalian Vav proteins harbor eight structural domains associated with regulatory and/or effector proteins (Figure 1). These domains include calponin-homology (CH), acidic (Ac), Dbl-homology (DH), pleckstrin-homology (PH), C1 subtype zinc finger (ZF), noncanonical SH3 (NSH3), SH2, and canonical SH3 (CSH3) regions. The catalytic activity of these proteins is mediated by the catalytic DH domain in a concerted action with the PH and ZF regions, a rather unique feature of this protein family compared with the rest of Rho GEFs [1,2,5,[7][8][9]. However, Vav proteins can also activate additional downstream pathways using catalysis-independent mechanisms. These adaptor functions are in many cases Vav family member- and cell type-specific (Figure 1).
For example, Vav1, but not Vav2, can activate the nuclear factor of activated T cells using a CH-dependent mechanism in T lymphocytes [10][11][12]. Likewise, Vav1 plays tumor suppressor roles in T cell acute lymphoblastic leukemia using an adaptor mechanism mediated by its SH3 domains. This function promotes the ubiquitin-mediated degradation of the active, intracellular fragment of Notch1 by forming complexes with the E3 ubiquitin ligase Cbl-b (Casitas B-lineage lymphoma b) [13]. The SH3 regions of Vav proteins can bind to many other protein partners, although their specific downstream roles are not yet understood. However, these interactions suggest that Vav proteins might participate in additional scaffold-like functions in cells.

Figure 1. Depiction of the structure, the main regulatory phosphosites (using the amino acid sequence corresponding to mouse Vav1), the intramolecular autoinhibitory interactions that take place in the nonphosphorylated state of Vav proteins (top), and some of the main downstream pathways described for specific mammalian Vav family members (bottom). NFAT, nuclear factor of activated T cells (a transcriptional factor involved in the proliferation and cytokine production of T lymphocytes); ICN1, intracellular domain of Notch1. The rest of the abbreviations are described in the main text.

Vav proteins require phosphorylation by upstream protein tyrosine kinases to become activated [2].
This is due to inhibitory interactions established by the N-terminal (CH and Ac region) and the C-terminal (CSH3 domain) regions with the catalytic core when the proteins are in the nonphosphorylated state (Figure 1). These interactions, which occlude the effector sites of these proteins, are eliminated upon the phosphorylation of Vav proteins on specific tyrosine residues (Figure 1). This leads to the stimulation of the catalytic activity and most adaptor functions upon the exposure of the effector sites [2]. The structural basis for the autoinhibition of Vav proteins by the CH-Ac region has been clarified due to the resolution of the crystal structure of a large N-terminal fragment of Vav1 [14]. The CSH3-mediated intramolecular inhibition has been inferred using biochemical, signaling, and cell biology experiments. However, it is worth noting that a model for the inhibitory action of the CSH3 has recently been identified for Vav2 and Vav3 using artificial intelligence approaches (see the AlphaFold Protein Structure Database, entries at https://alphafold.ebi.ac.uk/entry/P52735 (accessed on 30 July 2021) and https://alphafold.ebi.ac.uk/entry/F1LWB1 (accessed on 30 July 2021)). The regulation of Vav proteins by tyrosine phosphorylation is rather unique in the Rho GEF family. Underscoring this issue, Vav proteins are the only Rho GEFs that contain the phospho-tyrosine-binding SH2 region. Recent data have revealed that the signaling output of Vav proteins can be regulated by additional mechanisms such as acetylation, binding to plasma membrane-resident phosphatidylinositol monophosphates, and expression [1,2,15,16]. Accumulating evidence indicates that Vav proteins play critical roles in a wide range of physiological and pathological processes. Some of these signaling functions have been previously reviewed [1,2]. Detailed information on the phylogenetic origin of the Vav family is also available from a previous publication [1]. Additional information on Vav functions can be found in the review articles that, together with the present one, form part of the Special Issue on Vav proteins published by this journal. Here, we will specifically aim at providing an update of what is known about the role of these proteins in cardiovascular biology, skeletal muscle, and different branches of the nervous system. As a note of caution, we will exclusively focus on signaling and physiological mechanisms that have been corroborated using mouse models. As we will see, the study of Vav proteins in these functions has allowed us to better understand both the regulation and the roles of these proteins.
In addition, it has illuminated new layers of the ontogeny and progression of complex diseases such as metabolic syndrome and hypertension.

Vav2, Vascular Smooth Muscle Cells, and Cardiovascular Regulation
The control of systemic blood pressure is ensured, among other physiological responses, by regulating the contractility of blood vessels in real time. A key signal for this process is nitric oxide (NO) [17,18], a vascular endothelial cell-generated gas that favors the reduction in blood pressure via the induction of the vasodilatation of resistance arterioles. This process entails the disassembly of the F-actin cytoskeleton, protein nitrosylation events, and other signaling regulatory steps in NO-stimulated vascular smooth muscle cells (vSMCs) [17]. To induce the former response, NO promotes the step-wise stimulation of soluble guanylate cyclase [17,19], the production of cyclic guanosine monophosphate (cGMP) [17,19], and the enzyme activity of the cGMP-dependent protein kinase type I [17,19,20] (Figure 2, pathway in black). The latter enzyme triggers the phosphorylation of the RhoA GTPase that, in turn, causes the release of the GTPase from the plasma membrane and its disassembly from the downstream serine/threonine kinase Rock1 (Rho-associated coiled-coil-containing protein kinase 1) [21]. This leads to the vasodilatation-mediated reduction in blood pressure because the inactivation of Rock1 abrogates stress fiber formation via the concerted down- and upregulation of the 20 kDa myosin light chain (MLC20) and the MLC20 phosphatase (MLCP), respectively [22][23][24] (Figure 2). This NO-regulated vasodilatation pathway is negatively regulated by phosphodiesterase type 5 (PDE5), an enzyme that hydrolyzes cGMP [25] (Figure 2). The importance of this route in normal physiology is demonstrated by the observation that the inactivation of several signaling elements of this pathway using either genetic or pharmacological avenues leads to the rapid development of a hypertensive state [20,[26][27][28][29][30]. Conversely, the use of PDE5 inhibitors (e.g., sildenafil, the active component of Viagra) restores most vasodilatation defects associated with hypertensive states and erectile dysfunction [25,31,32]. While this pathway had been known for a long time, the investigation of the cause of the hypertension exhibited by Vav2-deficient mice [33] allowed for the discovery of a new signaling branch that cooperates with the previously known pathway to favor the dilatation of resistance arterioles (Figure 2, pathway in red) [34]. This branch requires the Src-dependent phosphorylation and activation of Vav2 upon the stimulation of vSMCs with NO (Figure 2). Activated Vav2 leads, in turn, to the stimulation of Rac1, the Rac1-mediated translocation of the serine/threonine kinase Pak (p21-activated kinase), and the Pak1-mediated inactivation of PDE5 (Figure 2) [34]. This inhibitory step unexpectedly relies on the physical interaction of Pak1 with the N-terminal domains of PDE5, rather than on a transphosphorylation-dependent step. It is hypothesized that this inhibition step involves a conformational change in the target protein [34], given that the PDE5 N-terminus contains domains involved in both the homodimerization and the upstream regulation of PDE5 enzymatic activity [35].
The inhibition of PDE5 by Vav2 therefore ensures high levels of cGMP production in NO-stimulated vSMCs, thus favoring the sustained silencing of the RhoA pathway, effective vasodilatation, and the reduction in blood pressure (Figure 2) [34]. Further genetic evidence has demonstrated that this pathway is intrinsic to vSMCs and is dependent on the main Vav2 substrate, the GTPase Rac1 [36,37]. These results indicate that Vav2 is a natural "Viagra"-like molecule that ensures proper NO-triggered vasodilatation responses by limiting the enzyme activity of PDE5. Consistent with this idea, the development of hypertension and its associated cardiovascular comorbidities can be prevented in Vav2-/- and vSMC-specific Rac1-/- mice using sildenafil treatments [34,36].

Vav2, Skeletal Muscle, and Metabolic Homeostasis
The analysis of the role of Vav2 in skeletal muscle and associated physiological mechanisms has been made using second-generation Vav2 L332A and Vav2 Onc knock-in mouse models. The former strain expresses a version of Vav2 with a point mutation in the DH region (Leu 332 to Ala) that reduces its catalytic activity by approximately 70 % [38]. The latter strain expresses an N-terminally truncated (residues 1 to 186) version of the protein that shows catalytic hyperactivity due to the elimination of the inhibitory CH and Ac regions (Figure 1) [36]. Thus, these two mouse models allowed us to address, for the first time, the contribution of the deregulated catalytic activity of Vav2 to a specific biological process.
Using these models, we found that Vav2 signaling is critical for the control of skeletal muscle mass due to its implication in the regulation of the optimal output from the phosphatidylinositol 3-kinase α (PI3Kα)-Akt axis upon the stimulation of skeletal muscle cells with either insulin or IGF1 (insulin-like growth factor 1) [39]. Consistent with this, homozygous Vav2 L332A/L332A and Vav2 Onc/Onc mice showed reduced and increased muscle mass, respectively [39]. This is an intrinsic function of Vav2 in skeletal muscle cells, since the signaling alterations can be recapitulated using both loss- and gain-of-function approaches in cultured skeletal muscle cells [39]. The signaling dissection of this pathway indicated that Vav2 contributes to the activation of the PI3Kα-Akt axis using the GTPase Rac1 as its main substrate [39]. The skeletal muscle is also responsible for ≈80 % of the glucose uptake and clearance induced by insulin at the whole-body level [40,41]. It also regulates the physiological status of other tissues involved in metabolic homeostasis, such as the brown (BAT) and white (WAT) adipose tissues, through hormone-mediated mechanisms [40,42,43]. As a result, signaling dysfunctions in skeletal muscle cells can cause the development of type 2 diabetes and metabolic syndrome in both mice and humans [40,41]. It is not surprising, therefore, that Vav2 L332A/L332A mice also showed a progressive increase in adiposity in both the BAT and WAT, which eventually caused the subsequent development of liver steatosis and hyperglycemia [39]. These problems are accelerated, and further aggravated, when these mice are maintained under high-fat diet conditions [39]. Conversely, Vav2 Onc/Onc animals exhibited resistance against the foregoing dysfunctions when subjected to a high-fat diet [39]. These metabolic alterations are quite similar to those previously found in other genetically modified mouse models that display either reduced (as in Vav2 L332A mice) or increased (as in Vav2 Onc mice) skeletal muscle mass [44][45][46][47][48][49][50].

Vav2 and Neuronal Functions
Vav2 plays roles in signaling processes related to the internalization of transmembrane receptors in the nervous system. The first example of these functions was given by Greenberg's group in 2005, when they discovered that Vav2 is important for the regulation of the internalization of Eph family receptors in specific neuronal subtypes [51]. This function is important for proper growth cone collapse and the correct establishment of axon projections from retinal ganglion neurons to cells located in the dorsal geniculate nucleus [51]. Vav2 is also involved in the endocytosis of Ret-dopamine transporter complexes present in neurons of the nucleus accumbens [52], a part of the mesolimbic pathway of the brain that becomes stimulated during rewarding experiences and the intake of some drugs [53]. In fact, the dopamine transporter is the molecular target of cocaine [53]. Loss of Vav2 leads to the accumulation of the dopamine transporter in the plasma membrane and an increase in the intracellular levels of dopamine in mice [52]. This function is nucleus accumbens-specific, since the lack of Vav2 does not affect the overall dopamine content of other midbrain regions involved in the mesolimbic pathway. It is also Vav2-specific, as Vav3-/- mice do not display any defects in dopamine levels in any of those midbrain regions [52]. Interestingly, the effects of cocaine are severely reduced in the absence of Vav2 [52].
This has been connected to reductions in the dopamine transporter Km and to a smaller amplitude of the elevation of dopamine levels in the nucleus accumbens in the presence of cocaine. In contrast, no overt changes in mesolimbic pathway-associated behaviors have been found under normal conditions in Vav2-/- mice [52].

Neuron-Associated Vav3 Functions in the Brainstem, Cerebellum, and Retina
Unlike the case of Vav2, the elimination of the Vav3 gene causes widespread physiological alterations in mice due to severe problems in the regulation of the sympathetic nervous system (SNS). This is due to the implication of Vav3 in the establishment of proper inhibitory GABAergic wiring between the caudal (CVLM) and the rostral (RVLM) ventrolateral medullas that are located in the brainstem area [54] (Figure 3, point a). This wiring is essential for proper sympathoregulation, since the CVLM is in charge of feeding tonic inhibitory signals to the RVLM and, at the same time, relaying afferent signals from peripheral baroreceptors [55][56][57][58][59]. These activities are required, for example, to ensure the rapid restoration of normotension upon sporadic changes in blood pressure. The CVLM also contributes to resetting the threshold for the activation of the baroreflex by RVLM cells, an action that facilitates adaptive rises in blood pressure in response to new environmental, health, or physiological conditions [55].
The migration of the axons of GABAergic neurons located in the CVLM toward their target neurons of the RVLM is impaired in the absence of Vav3 (Figure 3, point a) [54], leading to the unleashing of RVLM activity, the hyperactivation of the SNS, and the development of SNS-dependent defects such as hypertension, tachypnea, and hypercapnia (Figure 3, point a) [54,60]. All these dysfunctions can be prevented or reverted by treating the animals with β-adrenergic antagonists such as propranolol [54,60]. Interestingly, similar SNS-dependent dysfunctions are detected in mice lacking Ahr [61], a transcriptional factor that regulates Vav3 expression [62]. The SNS hyperactivity is also the original cause of the post-receptor insulin-like state and the obesity-independent metabolic syndrome seen in chow diet-fed Vav3-/- mice (Figure 4) [63]. Interestingly, this metabolic phenotype is highly dependent on the type of diet because, unexpectedly, the Vav3-deficient animals do not develop the foregoing alterations when maintained under a high-fat diet regimen (Figure 4) [63]. Even more unexpectedly, these mice are totally protected against obesity and the ensuing metabolic syndrome condition that typically develops in mice under high-fat diets (Figure 4) [63]. Several physiological processes contribute to this paradoxical metabolic phenotype. On the one hand, the protection from obesity exhibited by these animals independently of the diet used is the result of the presence of constitutive thermogenic programs in the BAT and the WAT that are sustained through adrenergic signals conveyed by both β3 and α1 receptors (Figure 4) [63]. On the other hand, the metabolic syndrome that develops in Vav3-/- mice is caused by two separate, SNS-dependent inputs on the liver: (a) an extrinsic effect elicited by peripheral tissues that promotes a post-receptor insulin state, de novo lipogenesis, and liver steatosis in Vav3-/- mice, regardless of the type of diet used [63]; and (b) an intrinsic effect on the liver itself that causes the upregulation of Pgc1α, a transcriptional cofactor involved in the activation of gluconeogenic, fatty acid oxidation, and ketogenic routes during fasting responses [63]. All these data indicate that the loss of Vav3 in the CVLM causes a butterfly-like effect that leads to the progressive alteration of many SNS-dependent physiological processes at the whole organismal level.
Outside the VLM, the Vav3 gene deficiency causes a retardation in the developmental steps of cerebellar Purkinje and granule cells. These defects lead to transient motor coordination and gaiting defects in very young Vav3-/- mice [64]. At the cell biology level, it has been demonstrated that Vav3 regulates the branching of dendrites in both Purkinje and granule cells in culture [64]. The cerebellar phenotype of Vav3-deficient animals is similar to those found in animals lacking BDNF (brain-derived neurotrophic factor), NT3 (neurotrophin 3), and calcyphosine 2 [65][66][67]. BDNF and NT3 are ligands for TrkB (tropomyosin receptor kinase B) [68]. Calcyphosine 2 is an intracellular calcium-binding protein that participates in the secretion of the foregoing neurotrophins and is thereby responsible for the local bioavailability of these ligands to the TrkB-expressing cells located in the cerebellum [67]. This phenotypic similarity suggests that some of the defects found in Vav3-deficient mice could be related to the defective stimulation of the neurotrophin-TrkB signaling axis. In agreement with this idea, Vav3-/- knockout mice displayed lower levels of BDNF than the controls within specific cerebellar areas during perinatal ages [64]. Given the transient nature of the cerebellar phenotype observed in Vav3-/- mice, it is likely that other Rho GEFs could carry out functions analogous to those of Vav3 during this process, either concurrently or at later postnatal ages. Obvious candidates include GEFs whose elimination causes cerebellar defects, such as P-Rex family members, Trio, β-Pix, and Dock10 [69][70][71][72][73]. Finally, the genetic elimination of Vav3 enhances the differentiation of early neuronal lineages such as ganglion and cone photoreceptor cells in the retina of mouse embryos. This alteration, however, is corrected later on in postnatal stages [74]. It is likely that this process is mediated by the regulation of Vav3 expression and, subsequently, the stimulation by extracellular ligands of transmembrane tyrosine kinase receptors. Potential candidates for this stimulation step include ligands for both the epidermal and fibroblast growth factor receptors [74].
Vav3, Oligodendrocytes, and Myelination Processes
Similar to the case of retinal cells [74], the elimination of Vav3 accelerates the differentiation of oligodendrocytes in mice [75]. The Vav3 gene deficiency also delays, although it does not abrogate, the myelination of fibers in both cortical and cerebellar areas [75]. This process seems to be Rho GTPase-dependent, although the upstream and downstream mechanisms involved still remain to be elucidated [75]. The physiological consequences of these defects are also unknown.

Lessons Learnt from the Phenotypes of Vav Family Knock-Out and Knock-In Mice
Together with the identification of new Vav family-regulated signaling and physiological processes, the study of the mouse models for the Vav family has unveiled new data on the regulation of the Vav proteins themselves as well as on the biological programs in which they participate. In the former case, for example, we have learnt that the phosphorylation-mediated activation of Vav proteins can be mediated not only by antigens and ligands for specific tyrosine kinase receptors, but also by NO (Figure 2) [34]. Despite this, the activation of Vav2 by this gas still requires the expected participation of protein tyrosine kinases, in this case of the cytoplasmic Src family [34]. The exact mechanism that connects NO with those kinases is as yet unknown. In the latter case, the analysis of the Vav family-regulated physiological programs has also shed light on lingering questions affecting a number of etiological factors and cross-talk mechanisms that are associated with the development of cardiovascular and metabolic diseases. For example, a rather obscure issue in this field is the role played by chronic sympathoexcitation in the development of both obesity and metabolic disease. Likewise, the role of hypertension in the development of type II diabetes and the ensuing metabolic syndrome is under debate. Answering these questions is highly relevant, since the data obtained will tell us whether all these pathologies are etiologically and mechanistically intertwined or whether they just develop concurrently (but independently from a mechanistic point of view) as a consequence of people's lifestyle. Tackling these issues from a clinical point of view has been rather difficult up to now given the multiple ethnic, sex, genetic, and environmental layers that affect the origin and evolution of all these illnesses. The availability of Vav3-/- mice has made it possible to use them as genetically "clean" tools to dissect the contribution of chronic sympathoexcitation to these pathologies. The findings obtained with them indicate that, at least in the case of rodents, the chronic stimulation of the SNS does affect the development of most pathological dysfunctions linked to metabolic syndrome conditions (Figure 4) [63]. They also suggest that treatments with α-adrenergic receptor antagonists, but not with β-adrenergic receptor antagonists, can be utilized to eliminate the metabolic syndrome condition in non-obese individuals displaying chronic sympathoexcitation (Figure 4) [63]. The combined use of Vav2-/- and Vav3-/- mice also demonstrated that hypertension does not contribute per se to the development of type II diabetes and metabolic syndrome [63]. The use of Vav3-/- mice led to the discovery that vagal signals originating from peripheral tissues are essential for the consolidation of the systemic pathologies caused by RVLM-driven sympathetic hyperactivity [76].
In line with this, it was observed that the surgical or chemical elimination of the afferent vagal branch that goes from the liver to the brainstem eliminates all the cardiovascular and metabolic defects found in Vav3-deficient animals [76] (Figure 3, point b). The transmission to, and subsequent integration of, these vagal nerve-transmitted inputs into the RVLM requires further investigation. Finally, there is the issue of whether the functions discovered in mice have a translation to human settings. At the moment, this question remains to be tackled. However, it is worth noting that genetic association studies have found associations of specific VAV2 and VAV3 gene polymorphisms with traits related to the functions found in mice, such as cardiovascular homeostasis, hypertension, obesity, and diabetes [77,78]. More work, however, is needed to fully address this issue. Physiological Functions of Vav Proteins, a Problem for Potential Anti-Vav Therapies? Recent data indicate that the inhibition of the catalytic activity of Vav proteins could be of interest to treat a large variety of pathologies such as immune dysfunctions and cancer [2,79]. For example, in the case of Vav2 and Vav3, protumorigenic roles have been described in skin cancer, head and neck cancer, and p190 BCR-ABL-driven B cell acute lymphoblastic leukemia [80][81][82]. Roles for the third family member, Vav1, in cancer and other pathologies will be described in other reviews of the Special Issue on Vav proteins of this journal. Thus, they can be considered as potentially interesting therapeutic targets if high-affinity inhibitors can eventually be developed. However, prima facie, it could be argued that the phenotypes described in this review might preclude the application of such therapies due to the extensive side effects they could elicit in the cardiovascular system, the skeletal muscle, and the overall metabolic homeostasis of patients. It is likely that the same problem would also arise when using inhibitors targeting downstream elements of the Vav-Rac1 axis. This issue is still a matter of investigation. However, with the available evidence, we can forecast some of the most important problems and rule out other potential side effects. In the case of Vav2, we believe that the cardiovascular defects elicited by the inhibition of Vav2 should not represent a serious hurdle for anti-Vav2 therapies for a number of reasons: (i) Genetic evidence indicates that the hypertensive state is rescued when the activity of Vav2 is restored in Vav2-deficient mice [36]; (ii) these defects can be prevented using standard anti-hypertension therapies [33,34,36]; and (iii) using homozygous Vav2 L332A/L332A (which display a 70% reduction in Vav2 catalytic activity in tissues) and heterozygous Vav2 L332A/- (which display an 85% reduction in Vav2 catalytic activity in tissues) pharmaco-mimetic mice, we have shown that the cardiovascular defects only arise in the latter animals [38]. However, effective anti-tumoral effects were obtained when using both Vav2 L332A/L332A and Vav2 L332A/- mice [38]. These results indicate that it could be possible to find therapeutic windows in which the positive anti-tumoral effects of inhibiting the catalytic activity of Vav2 can be dissociated from the negative side effects [38]. 
Skeletal muscle defects do arise in Vav2 L332A/L332A mice, though [39], thereby suggesting that the use of Vav2 inhibitors could eventually lead to the loss of muscle mass and the development of a metabolic syndrome condition if administered during very long periods of time. The time-window in which most of these defects arise, however, suggests that patients will not develop such side effects under the normal protocols usually utilized during anti-tumoral treatments [39]. In the case of Vav3, we consider that most defects found in Vav3-/- mice do not represent a red flag for anti-Vav3 therapies because they arise as a consequence of the dysregulation of biological processes that take place during embryonic or early postnatal periods. Thus, it is unlikely that they will emerge when Vav3 is inhibited at older ages. Concluding Remarks Despite the progress made, many regulatory and functional issues remain to be addressed for these Vav family-dependent physiological processes in the near future. Thus, we still have a long way to go to understand the upstream receptors, regulatory molecules, and downstream pathways that are involved in most of the signaling processes described in this review. This is important from a basic science perspective, but also from pharmacological and clinical points of view, given that this information can shed light on new therapeutic avenues for high-incidence diseases. In this context, it has been generally assumed that most phenotypes found in Vav family mouse models are Rho GTPase-dependent. However, the specific GTPases that are involved remain to be clarified in the case of most of the pathways that have been discussed here. Likewise, we cannot rule out the participation of Vav-regulated adaptor pathways, at least in some of them. The repercussions of some of the defects found in Vav family-deficient mice at the organismal level also remain poorly characterized. For example, we do not yet know the impact of the defects found in retinal neurons on the vision of Vav family-deficient mice. As indicated above, the physiological problems associated with the myelination defects found in Vav3-/- mice are also unknown. Many questions also remain to be addressed in the case of the regulation of sympathoexcitation by the RVLM in Vav3-deficient mice and how it is affected by the inputs received from vagal afferent fibers. All these lingering problems can be solved by continuing the analyses of genetically modified mice for Vav family genes and for other loci encoding proteins implicated in these processes. Finally, we cannot rule out the implication of Vav family proteins in additional functions in the tissues and systems discussed in this review. It is also possible that additional phenotypes will be found when the genetic manipulation of Vav family loci is conducted in combination with other Rho GEF-encoding genes. Arguably, more work has to be done to eventually solve all these questions. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
2021-09-29T05:16:15.850Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "d3f939f9fd85f1aca6b17f142eacabc520de314e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2079-7737/10/9/857/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d3f939f9fd85f1aca6b17f142eacabc520de314e", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
49486193
pes2o/s2orc
v3-fos-license
Some international perspectives on legislation for the management of human-induced safety risks Legislation that governs the health and safety of communities near major-hazard installations in South Africa is largely based on existing legislation that had been developed in the United Kingdom and other European Union countries. The latter was developed as a consequence of several major human-induced technological disasters in Europe. The history of the evolution of health-and-safety legislation for the protection of vulnerable communities in European Union (EU) countries, France, Malaysia and the USA is explored through a literature survey. A concise comparison is drawn between EU countries, the USA and South Africa to obtain an exploratory view of whether current South-African legislation represents an optimum model for the protection of the health and safety of workers and communities near major-hazard installations. The authors come to the conclusion that South-African legislation needs revision, as was done in the UK in 2011. Specific areas in the legislation that need revision are an overlap between occupational health and safety and environmental legislation, appropriate land-use planning for the protection of communities near major-hazard installations, the inclusion of vulnerability studies and the refinement of appropriate decision-making instruments such as risk assessment. This article is the first in a series that forms part of a broader study aimed at the development of an optimised model for the regulatory management of human-induced health and safety risks associated with hazardous installations in South Africa. Introduction The society in which we live becomes more complex every day as a result of a multitude of factors such as economic development, wars, terrorist attacks, technological innovation, societal demands for wealth creation and an increased awareness of the impact of human activities on the health and safety of people (Perrow 1999). Human populations are rapidly growing to extremes where the sustainable utilisation of natural and man-made resources is stretched to the limit. Clarke (2006) summarises this situation as follows: People are worried, now, about terror and catastrophe in ways that a short time ago would have seemed merely fantastic. Not to say that horror and fear suffuse the culture, but they are in the ascendant. And for good reason. There are possibilities for accident and attack, disease and disaster that would make September 11 seem like a mosquito bite. (p. 2) As one consequence of human-induced technological disasters, and the focus of this article, the regulation of major-hazard installations near densely populated areas has become more critical in order to limit the human-induced safety risks to which communities are exposed. A 'major-hazard installation' is defined in the South African Occupational Health and Safety Act (Act 85 of 1993) (LexisNexis Butterworths 2003a) as: An installation (a) where more than the prescribed quantity of any substance is or may be kept, whether permanently or temporarily; (b) where any substance is produced, processed, used, handled or stored in such a form and quantity that it has the potential to cause a major incident. (Section 1, Definitions) In the same Act, a 'major incident' is defined as '[a]n occurrence of catastrophic proportions, resulting from the use of plant or machinery, or from activities at a work place'. 
The awareness around environmental conservation has grown substantially around the world during the past two decades, most notably in South Africa. However, it would appear that the safety of human communities, as far as the human-induced impact of major-hazard industrial installations is concerned, has not enjoyed the same priority as environmental issues in South Africa. In its State of the Environment Report, the Department of Environmental Affairs and Tourism (2006) confirms that their focus was on the condition of the environment and natural resources (the ecological system) in South Africa. The safety impact that major-hazard installations could have on people was not addressed in the publication. One reason for this is that environmental matters in South Africa are governed by the National Environmental Management Act (NEMA), whereas occupational safety matters fall under the Occupational Health and Safety Act. In South Africa, the prime focus of the latter is the safety and health of workers in organisations (the labour force) and to a lesser extent that of the general public. Background of the study This article is the first in a series that forms part of a current broader study aimed at the development of an optimised model for the regulatory management of human-induced health and safety risks associated with major-hazard installations in South Africa. The proposed study aims to provide answers to the following key questions: • Does the existing regulatory framework for major-hazard installations in South Africa adequately address the safety of the public around such installations through appropriate land-use planning criteria, taking into consideration that current legislation falls within the domain of the Occupational Health and Safety Act with its prime focus on employees? • Under which legislation does the governance of major-hazard installations fall in other countries in the world? Is it classified as environmental or as labour-related legislation? Should the governance of hazardous installations not be consolidated into one set of legislation, governed by a centralised body? • What lessons can South Africa learn from the experience of other countries with regard to a regulatory framework for major-hazard installations? • What factors need to be taken into consideration when an optimised regulatory model for major-hazard installations in South Africa is developed in the interest of employers, employees and the public? Research methodology The methodology followed in this paper is based on a dualistic approach: • Research of the literature was done with specific focus on regulatory regimes for major-hazard installations in South Africa and some overseas countries, how they developed and how they compare with those of South Africa today. • Discussions were held with key stakeholders in government and industry in South Africa, Namibia and Europe. Regulatory management in South Africa Apart from the influences that major disasters had on legislation around the world, we could not find information in the literature on the development of legislative models for the management of health and safety related to major human-induced hazards. The major industrial disaster that took place in Seveso, Italy, in 1976 shaped industrial safety regulation around the world (Homberger et al. 1979). 
South Africa is faced with the following options: • use the latest regulations and guidelines from developed countries such as the United Kingdom to create a regulatory framework for the management of hazardous installations in industry in order to protect, firstly, the health and safety of members of the public and, secondly, workers in the industry • integrate existing legislation that involves major-hazard installations and disasters • ensure that an economic regime is created and maintained in local industry that is conducive to fixed investment, profitability and employment creation. Regulatory conflict exists in South Africa in that the boundaries between environmental legislation (NEMA and Environmental Impact Assessment Regulations) and safety legislation (Occupational Health and Safety Act and Major Hazard Installation [MHI] Regulations) are not clearly defined, with the result that overlapping of authority occurs. In cases where environmental authorisation is required from the Department of Environmental Affairs for new hazardous installations, environmental-assessment practitioners erroneously pull the risk-assessment reports for major-hazard installations, compiled under the MHI Regulations for the Department of Labour, into the process of assessing any impact on the environment. As a result, the Department of Environmental Affairs mistakenly gained a mandate to authorise risk-assessment reports which actually fall under the Department of Labour. This causes the stipulations of the MHI Regulations to be violated, for example, public-consultation protocols. Campbell (2013) concludes that, if South Africa is to move to a better regulation of major-hazard installations, a number of ingredients are needed: • good regulations, backed by clear unambiguous guidance such as risk assessment • a focus on high-hazard industries • competent, appropriately staffed organisations • a bigger, more competent Department of Labour • an integrated approach to land-use planning, driven by local planning authorities. Campbell (2013) further formulates three dimensions for comparing the regulatory management frameworks of South Africa, the USA and the European Union: • the identification of major-hazard installations • control over major-hazard installations • control of land development in the vicinity of major-hazard installations. Regulatory management in the United Kingdom It is important to consider the work done in the UK regarding the development of regulations for major-hazard installations. The history of health-and-safety regulation in the UK dates back to 1833, when the first factory inspectors were appointed under the Factories Act of 1833. Inspectors were appointed to focus on injuries and to control the work hours of children in the textile industry. In July 1974, the Health and Safety Commission was formed under the Health and Safety at Work Act of 1974. Its responsibilities included the health and safety of people at work, protecting the public generally against health and safety risks, giving advice to local authorities on the enforcement of the Act and assisting persons with duties under the Act. The promotion of ongoing research and the provision of information also formed part of its responsibilities. The Health and Safety Executive (HSE) was established in January 1975 with the responsibilities of executing the duties of the Health and Safety Commission and enforcing health-and-safety legislation in all workplaces except those regulated by local authorities. 
In 2008, the Health and Safety Commission and the Health and Safety Executive merged to form one organisation called the HSE. The HSE is the national regulatory body responsible for promoting better health and safety at work in Great Britain. However, enforcement is shared to a large extent with local authorities in accordance with the Health and Safety (Enforcing Authority) Regulations of 1998. Health-and-safety legislation in the country was shaped to a large extent by evidence gathered from the occurrence of major disasters. Kevin Allars (UK Health and Safety Executive 2006) expressed concern about the level of major incidents at major-hazard installations in the UK. He commented that the Health and Safety Executive formed an inspection team to work with industry to promote the sharing of information on dangerous incidents and to review preventative action plans. This followed in the wake of the Buncefield incident in the UK in 2005. The UK Health and Safety Executive (2013) places a strong emphasis on this approach of information sharing and public consultation, similar to what the NEMA prescribes in South Africa. The Health and Safety Commission undertook a comprehensive review of the UK's health-and-safety legislation in 1992 to determine whether existing legislation was still relevant and necessary in its then form. The review also aimed to reduce the administrative burden that legislation placed on small businesses and to examine the general approach of the Health and Safety Executive regarding enforcement of the legislation. It was found that much of the current legislation was considered too voluminous, complicated and fragmented. The review report recommended the removal of 100 sets of regulations and seven pieces of primary legislation as well as the simplification of the 340 requirements for administrative paperwork. A Simplification Plan was consequently implemented by the Health and Safety Executive to reduce the legislative burden on industry. A follow-up review of health-and-safety legislation, led by Löfstedt, was commissioned by the Minister for Employment in 2011. The review sought evidence from government bodies, employers' organisations, employee organisations, professional health-and-safety bodies and academics. The report concluded that: • self-employed bodies whose activities pose no potential health-and-safety risk to others should be exempted from such legislation • all Approved Codes of Practice be reviewed • the UK government work more closely with the European Union to ensure that health-and-safety legislation is risk- and evidence-based • sector-specific legislation be consolidated • the HSE should direct all health-and-safety inspections and enforcement activities in local authorities in order to be consistent and focused on high-risk workplaces • clarification be obtained regarding the protocols of early settlements between parties in cases of civil action initiated against employers by employees and the public • regulatory provisions imposing strict liability on employers be reviewed and aligned with the principle of 'reasonably practicable'. In his review, Löfstedt (2011) made a general conclusion that is appropriate to South Africa, namely that regulatory requirements in the field of health and safety are misunderstood and applied inappropriately or inconsistently. 
The recommendations that flowed from the review addressed streamlining the body of regulation through consolidation, re-directing enforcement activities towards workplaces where the highest risks of injury or ill-health exist and re-balancing the civil-justice system through clarification of early settlement protocols and a review of strict liability imposed on employers. According to the Health and Safety Executive of the UK (2013), the legislative regulation of health and safety at nuclear installations in the UK received renewed attention in 1957 following a major incident at the Windscale nuclear site. It led to the passing of the Nuclear Installations Act in 1959, followed by the formation of a dedicated nuclear installations inspectorate. Regulatory management in European Union countries The complexity of, and room for confusion created by, risk assessment at major-hazard installations were the study topic of Ignatowski and Rosenthal (2001). For this purpose, they developed a Chemical Accident Risk Assessment Thesaurus (CARAT) in the Organization for Economic Cooperation and Development (OECD) through the Working Group on Chemical Accidents. The research recognises the difficulty of communicating amongst member countries about the risk assessments of hazardous installations. They conclude that this difficulty was based largely on the fact that certain 'terms of art' have different meanings in different countries and cultures. Furthermore, different people and organisations use different terms of art to address the same concept. This cultural complexity is a prominent feature of South African society and may contribute to the problem of legislation for major-hazard installations. The lack of consistency in the definitions of essential terms of art creates an impediment to understanding amongst all stakeholders of the approaches and methodologies used in risk assessment at major-hazard installations. This, in turn, creates uncertainty about the significance of the assessment results. Shaluf (2007) introduces the concept of technological disasters instead of major-hazard installation disasters. This helps to emphasise the industrial, man-made nature of such disasters in order to manage them effectively. Reference is made to some major industrial disasters in the world, namely Seveso (1976), Flixborough (1974), Bhopal (1984) and Piper Alpha (1988). Shaluf (2007) refers to the following definition of a major accident as used by the International Labour Organisation: A major accident is an occurrence such as a major emission, fire or explosion resulting from uncontrolled developments in the course of an industrial activity, leading to a serious danger to man, immediate or delayed, inside or outside the establishment and to the environment, and involving one or more dangerous substances. (p. 115) Note that there is no reference to the labour force per se, although the inside and outside of the facility establishment are included. Operators of hazardous installations, in particular those with limited resources and time constraints, often find it difficult to collect the large number of different safety-performance indicators of their plants in accordance with the approaches developed by member countries in the OECD, as outlined by Mengolini and Debarberis (2008). They propose that organisations should focus on a culture of safety amongst workers (plant operators and managers) so that major disasters can be prevented. A typical example would be to involve workers in regular focus-group discussions on safety. 
Our proposals can form the basis of expanded regulatory measures for major-hazard installations. Bellamy, Geyser and Wilkenson (Loss Prevention) stress the relationship between work demands and human capacities when considering human and system performance. The aim should be to eliminate or reduce the chance of adverse human behaviour, which can lead to harm through accidents or chronic exposure to conditions adverse to health. This particular aspect relates to the situation prevailing at some major-hazard installations in third-world countries. Major-hazard installations are needed in every country to provide for its manufacturing, agriculture, transportation and energy needs. These installations store large quantities of hazardous substances and energy in one place, such as refineries, petrochemical plants, chemical production plants, storage facilities for liquid petroleum gas and water-treatment plants. Some contemporary technical installations are so complex and so closely meshed that accidents are inherent in their design. Such systems could generate 'normal accidents' (Perrow 1999). Perrow makes it clear that most high-risk systems have inherent characteristics that cause the accidents that take place in them to be 'normal', that is, inevitable as a result of complex interdependence between the various system components. He refers to the disaster at Three Mile Island's nuclear reactor in 1979 as a case in point. Shaluf (2008) comes to the conclusion that the impact of technological disasters is not limited to the plants themselves but can extend to the neighbouring surroundings. The establishment of disaster criteria is useful to set benchmarks for the definition of disaster incidents and to declare the need for international assistance. Papazoglou et al. (2003) state that the European Union Directive 96/82/EC for the Control of Major-accident Hazards (the Seveso-II Directive) requires that major-hazard companies implement a prevention policy for major accidents and have auditable safety-management systems. The authors propose an integrated safety-management system that links technical and managerial models to give good insight into the quality of management and its influence on the safety of a plant. The quality of management is particularly relevant when the safety of major-hazard installations is considered. Regulatory management in the United States of America The United States Environmental Protection Agency (EPA) plays a decisive role in the setting and enforcement of safety standards in the USA. In its historic overview, the USA Environmental Protection Agency (2013) reports that the US President decided in 1970 to establish an independent authoritative regulatory body to enforce environmental policy. He eventually established the US Environmental Protection Agency (EPA) for this purpose. The mission of the EPA was formulated mainly to: • establish and enforce environmental protection standards and goals • conduct research on the adverse effects of pollution • gather information on pollution for the development of environmental protection programmes • provide for grants and technical assistance to combat environmental pollution. Regulatory management in France France provides us with a clear focus on land-use planning as an important consideration for the regulation of major-hazard installations. Salvi and Gaston (2004) describe the context of hazardous establishments in France as one which entails a very complex decision process based on several criteria which are difficult to evaluate. 
The only explicit criteria in France are those related to the consequences of accidents that are used to define the safety distances around hazardous establishments. In their code related to the control of hazardous establishments (Code de l'Environnement, Livre V, 1976, Art L512), the license to operate such a facility is subordinated to a sufficient distance between the establishment and people in the vicinity. In other words, the regulatory bodies may theoretically not license new establishments that are too close to people in the vicinity. Comparison of the regulatory frameworks of EU countries and South Africa (after Campbell 2013): 1: How are major-hazard installations identified? EU countries: • Legislation applies to establishments and installations where a dangerous substance is present in quantities above a certain specified threshold. • The nuclear industry is excluded from the legislation. • EU countries have a clear approach to assisting industry in identifying whether or not the relevant legislation applies to a particular establishment or installation. South Africa: • Definition of a major-hazard installation is ambiguous. • Risk assessment is prescribed as decision-making instrument. However, the assessment methodology leaves room for varying interpretations. • Classification of major-hazard installations is unclear, non-specific and interpreted differently by role players and authorities in industry. • There is no differentiation between various categories of hazardous installations. • Legislation can create barriers to trade due to cost impact. • The nuclear industry is excluded from the legislation and is covered under separate legislation. 2: How are major-hazard installations controlled? EU countries: • The quantity of dangerous substances dictates the control measures. • Establishments and installations fall into two groups: lower-tier sites and top-tier sites. • Lower-tier establishments: notify the regulator; prepare a major accident prevention policy; take measures to prevent major accidents; report accidents. • Top-tier establishments: as above, but with the additional requirement to submit a safety report. • The regulatory approach is balanced where low-hazard installations are not burdened with disproportionate cost and administration. High-hazard installations are regulated in proportion to their scale of risk. • Legislation is reviewed often, such as the Löfstedt review in the UK in 2011. South Africa: • Notify the authorities, perform a risk assessment and develop an on-site emergency-response plan. • The requirements beyond risk assessment are limited to emergency-response planning, incident reporting and risk-assessment revision. • The same emergency management measures are required across all industries, for all types of hazardous-installation categories, which is onerous for small operators. 3: How is development controlled in the vicinity of major-hazard installations? EU countries: • Member states are responsible for implementing policies and procedures for land-use control of new establishments, modification of existing establishments and new developments around Seveso II high-risk establishments. • The requirements are met differently across different member states. • The Seveso II land-use planning directives vary across Europe. • The process is controlled by planning authorities who are advised by technical specialists such as the Health and Safety Executive in the UK. This gives a greater degree of assurance that major hazards are taken into consideration for land-use planning. • Environmental impact is not explicitly addressed. South Africa: • Local authorities have the responsibility to control developments around existing major-hazard installations. • The regulatory process for major-hazard installations is in some cases detached from the land-use planning process and therefore not adequately considered in development planning. • The regulations are ambiguous and therefore poorly enforced. • Environmental impact as defined in environmental legislation is not addressed in the regulations. Source: Campbell, D., 2013, 'PetroSA', presentation at Major Hazard Installation Seminar, Boksburg, South Africa, 19 February. Comparison of the regulatory frameworks of the USA and South Africa (after Campbell 2013): 1: How are major-hazard installations identified? USA: • Process-safety management and risk-management planning are used as decision-making instruments. • Legislation is based on threshold quantities of dangerous substances and applies to processes or installations. • The USA has a clear approach to assisting industry in identifying whether or not the relevant legislation applies to them. South Africa: • Definition of a major-hazard installation is ambiguous. • Risk assessment is prescribed as decision-making instrument. However, the assessment methodology leaves room for varying interpretations. • Classification of major-hazard installations is unclear, non-specific and interpreted differently by role players and authorities in industry. • There is no differentiation between various categories of hazardous installations. • Legislation can create barriers to trade due to cost impact. • The nuclear industry is excluded from the legislation and is covered under separate legislation. 2: How are major-hazard installations controlled? USA: • A process-safety management system with 14 steps is required for all hazard installations. South Africa: • Notify the authorities, perform a risk assessment and develop an on-site emergency-response plan. • The requirements beyond risk assessment are limited to emergency-response planning, incident reporting and risk-assessment revision. • The same emergency management measures are required across all industries, for all types of hazardous-installation categories, which is onerous for small operators. 3: How is development controlled in the vicinity of major-hazard installations? USA: • The Environmental Protection Agency (EPA) requires a risk-management plan. It is passed on to local and state regulators. • The focus is on response rather than the proactive management of development around these installations. • Regulations are weak in this regard. The focus is on prevention and recovery at site rather than on separation through planning control. South Africa: • Local authorities have the responsibility to control developments around existing major-hazard installations. • The regulatory process for major-hazard installations is in some cases detached from the land-use planning process and therefore not adequately considered in development planning. • The regulations are ambiguous and therefore poorly enforced. • Environmental impact as defined in environmental legislation is not addressed in the regulations. Regulatory management in Malaysia Malaysia provides a good perspective with regard to the foundation on which its regulatory framework is based. Shaluf, Ehmadun and Sharif (2003) investigate the causes of technological (major-hazard installation) disasters in Malaysia in a fireworks factory, a petrochemical plant, a refinery and another major-hazard installation. 
They conclude that there were seven main causes of the disasters, namely: • social errors related to operators and managers at the installations • technical errors such as design shortcomings and equipment failure • organisational errors related to wrong procedures and documentation (the authors particularly focus on the link between the social and technical side of the installations through policies, regulations, rules, manuals, training and emergency plans) • operational errors caused by the human and technical interface • failures of the warning systems used to alert the management of the facility that dangerous operational conditions are starting to arise • triggering events after which disaster is unavoidable, such as unsafe acts and conditions • defence errors such as a lack of emergency-response measures. Conclusion This article provided a concise overview and comparison of the regulatory frameworks for major-hazard installations in EU countries, the USA, France, Malaysia and South Africa. Whilst there are marked similarities between EU countries and the USA, albeit with some apparent shortcomings with regard to land-use planning, current legislation in South Africa needs revision in order to bring it on par with that of developed countries. In particular, South-African legislation should be improved in terms of the following aspects: • The definition of a major-hazard installation is ambiguous, with the result that the classification of major-hazard installations is unclear, non-specific and interpreted differently by role players, risk assessors and authorities in the industries. • Risk assessment is prescribed as a decision-making instrument, but the assessment methodology leaves room for varying interpretations due to the lack of a uniform assessment standard. • There is no differentiation between various categories of hazardous installations. • The same emergency management measures are required across all industries, for all types of hazardous-installation categories, which is onerous for small operators. • Local authorities have the sole responsibility to control developments around existing major-hazard installations, but there are no clear regulatory guidelines or measures that can legally be enforced to prevent the establishment of vulnerable developments near existing major-hazard installations. • Apart from risk assessment, the requirements for major-hazard installation planning are limited to on-site emergency-response planning, incident reporting and risk-assessment revision. This causes the regulatory process for major-hazard installations to be detached from the land-use planning process and therefore not adequately considered in development planning. • Vulnerability studies do not form part of major-hazard installation planning, and aspects such as community vulnerability, resilience and coping capacity related to the specific installation are not considered at all. • There is a gap between the Major Hazard Installation Regulations and the Disaster Management Act. • Environmental impact as defined in environmental legislation overlaps with the major-hazard installation regulations and creates conflict between the relevant state departments with regard to enforcing mandates. Further research The regulation of major-hazard installations in South Africa does not consider vulnerability and sustainability science. There is scope for further research into the impact of major-hazard installations on the vulnerability, coping capacity and human-induced disaster resilience of communities. 
We are of the opinion that the UK, followed by EU countries, is furthest advanced with regard to the implementation of legislation to regulate major-hazard installations. However, the international comparison of South African safety legislation should be expanded to other industrialised countries such as Australia, Japan, China, Singapore, Mexico and Canada. In addition, lessons learnt from EU countries and others should be researched to identify similarities with the South-African situation.
2018-06-30T01:28:56.438Z
2016-01-13T00:00:00.000
{ "year": 2016, "sha1": "e18974870ed3ebdd51cf57a40368800b8a2782e7", "oa_license": "CCBY", "oa_url": "https://jamba.org.za/index.php/jamba/article/download/170/370", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e18974870ed3ebdd51cf57a40368800b8a2782e7", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Political Science", "Medicine" ] }
249680612
pes2o/s2orc
v3-fos-license
PCSK9 inhibitors for secondary prevention in patients with cardiovascular diseases: a bayesian network meta-analysis Background The Food and Drug Administration has approved Proprotein Convertase Subtilisin/Kexin Type 9 (PCSK9) inhibitors for the treatment of dyslipidemia. However, evidence on the optimal agents targeting PCSK9 for secondary prevention in patients at high risk of cardiovascular events is lacking. Therefore, this study was conducted to evaluate the benefit and safety of different types of PCSK9 inhibitors. Methods Several databases including Cochrane Central, Ovid Medline, and Ovid Embase were searched from inception until March 30, 2022 without language restriction. Randomized controlled trials (RCTs) comparing administration of PCSK9 inhibitors with placebo or ezetimibe for secondary prevention of cardiovascular events in patients on statin-background therapy were identified. The primary efficacy outcome was all-cause mortality. The primary safety outcome was serious adverse events. Results Overall, nine trials totaling 54,311 patients were identified. Three types of PCSK9 inhibitors were evaluated. The use of alirocumab was associated with reductions in all-cause mortality compared with control (RR 0.83, 95% CrI 0.72–0.95). Moreover, evolocumab was associated with increased all-cause mortality compared with alirocumab (RR 1.26, 95% CrI 1.04–1.52). We also found alirocumab was associated with a decreased risk of serious adverse events (RR 0.94, 95% CrI 0.90–0.99). Conclusions In consideration of the fact that both PCSK9 monoclonal antibodies and inclisiran enable patients to achieve the recommended LDL-C target, the findings of this meta-analysis suggest that alirocumab might provide the optimal benefits regarding all-cause mortality with relatively lower SAE risks, and evolocumab might provide the optimal benefits regarding myocardial infarction for secondary prevention in patients at high risk of cardiovascular events. Further head-to-head trials with longer follow-up and high methodologic quality are warranted to help inform subsequent guidelines for the management of these patients. Supplementary Information The online version contains supplementary material available at 10.1186/s12933-022-01542-4. Background Patients with established cardiovascular disease remain at elevated risk of recurrent cardiovascular events, leading to an increased risk of death [1,2]. Therefore, secondary preventions targeting the established risk factors for this group of patients represent a high priority. For decades, statins have been regarded as the first-line drugs for lowering cholesterol levels and prevention of potential cardiovascular events. However, a considerable proportion of high-risk hypercholesterolemic patients do not achieve adequate reductions in low-density lipoprotein cholesterol (LDL-C) despite intensive statin therapy [3]. According to the latest US and European guidelines, proprotein convertase subtilisin/kexin type 9 (PCSK9) inhibitors in combination with statin and ezetimibe therapy are recommended to reduce the risk of cardiovascular events in these patients [2,4]. PCSK9 accelerates degradation of LDL receptors, thereby inhibiting the removal of LDL from the circulation [5][6][7]. Accordingly, by preserving the expression of LDL receptors on the surface of hepatocytes, modulators that inhibit PCSK9 can reduce LDL-C and subsequently major cardiovascular events [8][9][10]. 
This therapy may be more effective in reducing LDL-C and other atherogenic lipids in high-risk patients treated with the maximum tolerated dose of statins, as well as in those who are intolerant to statins. Although there are safety concerns such as the potential risk of new-onset diabetes [11][12][13], several meta-analyses have demonstrated that PCSK9 inhibitors showed better effects in reducing LDL-C levels and improving clinical benefits than other lipid-lowering agents for the secondary prevention of cardiovascular disease [14,15]. However, due to the lack of direct comparisons between different medications, the optimal agent targeting PCSK9 to reduce the risk of death after cardiovascular events remains undetermined. Therefore, this study aimed at evaluating the efficacy and safety of different PCSK9 inhibitors for secondary prevention in patients at high risk of cardiovascular events. Guidance and protocol The methodology for reporting the systematic review with network meta-analysis followed the PRISMA-NMA guideline [16]. The protocol of the present study was registered in the Open Science Framework database (https://osf.io/xf9dh). Data sources and search strategy Several electronic databases were searched, including Ovid Medline, Ovid Embase, and the Cochrane Library of Clinical Trials. Searches were conducted from inception until March 30, 2022, without restrictions of language or publication status. The following MeSH terms and their entry terms were chosen: "PCSK9 Inhibitors", "hypercholesterolemia", "randomized controlled trial". For any ongoing studies or completed studies with reported results, we consulted the relevant clinical trials registry (https://www.clinicaltrials.gov/). We also inspected the reference lists of included trials and the latest reviews in the same field. The details of the search strategy are presented in Additional file 1: Table S1. Selection criteria We only included randomized controlled trials that met the following criteria: first, the study population should be adult patients (age ≥ 18) with established coronary heart disease (CHD), atherosclerotic cardiovascular disease (ASCVD), or a disease risk equivalent; second, the intervention group used PCSK9 modulating therapies for secondary prevention with statin background therapy; third, the comparison group was placebo or ezetimibe, or a different PCSK9 modulating therapy; fourth, at least one of the following outcomes had to be reported. The primary efficacy outcome was all-cause mortality, and the primary safety outcome was serious adverse events (SAEs). Follow-up duration for the cardiovascular events should be at least 48 weeks or one year. Secondary efficacy outcomes included cardiovascular death, myocardial infarction, and stroke. Secondary safety outcomes included injection site reaction, new-onset diabetes, and neurocognitive disorders. These outcomes could be defined by each trial. Study selection and data extraction process Study selection was carried out by two authors (XW and DW) independently. Most of the literature was excluded based on the titles and abstracts of all publications retrieved in the electronic search. Only when both agreed that the literature potentially met the eligibility criteria did they screen the full text of the relevant trials. In cases of any disagreements, the problems were resolved by detailed discussion within the study team. 
When inclusion criteria needed to be assessed or vital data were missing, the corresponding authors of the trials were contacted to obtain the missing information. Data extraction and collection were performed by two independent authors (XW and DW) using predesigned table forms. Any disagreements were resolved by detailed discussion within the study team. Quality assessment Risk of bias assessments of the eligible studies were completed by two authors (XW and LM) independently using the Cochrane risk of bias assessment tool [17]. For each study, the following six domains were assessed: first, selection bias, including allocation sequence concealment and random sequence generation; second, detection bias, including blinding of outcome assessment; third, performance bias, including blinding of participants and personnel; fourth, reporting bias, including selective reporting; fifth, attrition bias, including incomplete outcome data; and sixth, other potential sources of bias. Assessments of certainty of evidence were performed by two authors (XW and CY) using the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) approach designed by the GRADE working group. The following five aspects were taken into account: first, overall risk of bias; second, imprecision; third, inconsistency; fourth, publication bias; and fifth, indirectness [18]. Data synthesis and analysis The statistical analyses were performed using R packages in R software (version 4.0.5) and RevMan (version 5.4.0). We performed Bayesian network meta-analyses using a consistency model to incorporate indirect comparisons. In brief, the effect of each treatment regimen was modeled relative to a common reference drug, so that any two regimens could be compared as a function of their effects relative to that reference. Dichotomous variables were expressed as risk ratios (RR), and continuous variables were expressed as mean differences (MD). The corresponding 95% credible interval (CrI) was obtained using the 2.5th and 97.5th percentiles of the posterior distribution. The models are based on 30,000 iterations after a burn-in of 10,000 iterations. In cases of inadequate convergence of the model, the parameters were further adjusted until satisfactory convergence was achieved. The rankograms were estimated to rank the intervention hierarchy in the network meta-analysis. We used the surface under the cumulative ranking curve (SUCRA) to estimate the ranking probability of the treatment agents for each outcome. Heterogeneity of the model was assessed with the chi-squared test and the I2 statistic. An I2 value of more than 50% was considered substantial [19]. Tests of statistical significance were two-sided, and a p value of less than 0.05 was considered statistically significant. The possibility of publication bias was to be evaluated by the Harbord regression test, Egger regression test, and Begg's test if more than ten trials were included [20]. Study selection and characteristics Through a systematic database search, we identified 1,478 records. After selection, nine trials totaling 54,311 participants fulfilled the aforementioned criteria and were included in the analysis [21][22][23][24][25][26][27][28]. The study selection process is presented in Additional file 1: Figure S1. Characteristics of the eligible trials are presented in Table 1. Five trials compared alirocumab with control, two trials compared evolocumab with control, and two trials compared inclisiran with control. 
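To make the ranking machinery described above concrete, the sketch below shows how 95% credible intervals and SUCRA values can be computed once posterior draws of each treatment effect are available. This is a minimal Python illustration, not the authors' actual R/RevMan code; the simulated posterior draws (the means and standard deviations passed to the random generator) are placeholder assumptions standing in for the output of a fitted Bayesian network meta-analysis model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder posterior draws of log risk ratios vs. placebo for each
# PCSK9 inhibitor; in practice these come from the fitted Bayesian NMA.
treatments = ["alirocumab", "evolocumab", "inclisiran"]
log_rr_samples = {
    "alirocumab": rng.normal(np.log(0.83), 0.07, size=30_000),
    "evolocumab": rng.normal(np.log(1.04), 0.06, size=30_000),
    "inclisiran": rng.normal(np.log(0.95), 0.15, size=30_000),
}

# 95% credible intervals from the 2.5th and 97.5th posterior percentiles.
for name, draws in log_rr_samples.items():
    rr = np.exp(draws)
    lo, hi = np.percentile(rr, [2.5, 97.5])
    print(f"{name}: RR {np.median(rr):.2f} (95% CrI {lo:.2f}-{hi:.2f})")

# Rank treatments within each posterior draw (rank 1 = lowest risk of the
# outcome, i.e. best), then turn rank probabilities into SUCRA values.
draws_matrix = np.column_stack([log_rr_samples[t] for t in treatments])
ranks = draws_matrix.argsort(axis=1).argsort(axis=1) + 1  # 1 = best
n_treat = len(treatments)
for j, name in enumerate(treatments):
    # p[k-1] = probability of holding rank k across posterior draws
    p = np.array([(ranks[:, j] == k).mean() for k in range(1, n_treat + 1)])
    cum = np.cumsum(p)[:-1]            # cumulative rank probabilities
    sucra = cum.sum() / (n_treat - 1)  # SUCRA in [0, 1]; 1 = always best
    print(f"{name}: SUCRA {sucra:.2f}")
```

Because SUCRA is simply the normalized area under the cumulative rank-probability curve, a value near 1 means an agent sits at or near the top of the ranking in most posterior draws, which is how the SUCRA values reported below should be read.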
Study sizes ranged from 300 to 27,564 participants; the mean age ranged from 58.6 to 65.7 years; the percentage of male participants ranged from 63.3% to 81.0%. Efficacy outcomes All the included studies reported the primary efficacy outcome, contributing a total of 54,301 participants with available data on all-cause mortality (Fig. 1A). The administration of alirocumab was associated with reductions in all-cause mortality compared with control (RR 0.83, 95% CrI 0.72-0.95; Fig. 1B). Evolocumab was associated with increased all-cause mortality compared with alirocumab (RR 1.26, 95% CrI 1.04-1.52; Fig. 1B). The SUCRA value represents the overall rank for each agent with regard to the likelihood of the outcome of interest (Fig. 1C). Alirocumab was identified as the best regimen for reducing all-cause mortality, with a SUCRA value of 0.91, and this difference was statistically significant. It was followed by inclisiran (SUCRA = 0.44) and evolocumab (SUCRA = 0.24). We performed a sensitivity analysis by excluding the ODYSSEY LONG TERM trial, which enrolled some patients with heterozygous familial hypercholesterolemia (< 20%), and the results remained consistent (Table 2). Meta-regression was performed to test the effects of body mass index (BMI) and diabetes on the risk of death. The results, shown in Fig. 2, revealed a negative interaction between BMI and the risk of death (p = 0.029). Safety outcomes A total of eight trials reported serious adverse events, including 53,264 patients (Fig. 4A). The administration of alirocumab was associated with reductions in serious adverse events compared with control (RR 0.94, 95% CrI 0.90-0.99; Fig. 4B). The SUCRA curve identified inclisiran (SUCRA = 0.83; Fig. 4C) as the top-ranked treatment in association with fewer serious adverse events, followed by alirocumab (SUCRA = 0.77; this result was also significant) and evolocumab (SUCRA = 0.20). Similar results were obtained for the primary safety outcome by excluding the ODYSSEY LONG TERM trial (Table 2). Other safety outcomes are reported in Fig. 5. Therapies with alirocumab, evolocumab, and inclisiran were not associated with an increased incidence of new-onset diabetes or neurocognitive disorders. Evolocumab ranked as the best strategy for injection site reaction (SUCRA = 0.66), while alirocumab ranked as the best agent for new-onset diabetes (SUCRA = 0.84) and neurocognitive disorders (SUCRA = 0.85). We also evaluated the effects of LDL-C change on the risk of new-onset diabetes (Fig. 6). The results did not show any significant interactions (p = 0.161). Quality assessments The overall quality of the nine included trials was judged to be high (Additional file 1: Figures S2 and S3). The certainty of the evidence for the network comparison of alirocumab vs. placebo for the primary efficacy outcome was judged as high; that of alirocumab vs. evolocumab for the primary efficacy outcome was judged as low due to indirectness and imprecision. The quality of the evidence for the network comparison of alirocumab vs. placebo for the primary safety outcome was judged as high; that of alirocumab vs. evolocumab for the primary safety outcome was judged as low due to indirectness and imprecision. Discussion Cardiovascular disease is one of the leading causes of death, accounting for approximately one third of deaths in the United States [29]. 
In the present meta-analysis of nine RCTs totaling 54,311 patients, we evaluated the comparative effects of three PCSK9 inhibitors for secondary prevention in patients at high risk of ASCVD. We excluded trials that compared bococizumab with placebo because that agent was discontinued in 2016 by its manufacturer. Reasons for withdrawal included unexpected attenuation of LDL-C-lowering effects over time, and higher rates of immunogenicity and injection site reactions during treatment than with other drugs in this class [30,31]. According to the present analysis, the use of alirocumab was associated with reductions in all-cause mortality and serious adverse events. Besides, administration of evolocumab was associated with a decreased risk of myocardial infarction. Comparison with the latest evidence To the best of our knowledge, this study is the first network meta-analysis assessing the effect of different modulators targeting PCSK9 on cardiovascular events in patients with ASCVD. Previous studies have been performed to assess the comparative effects of PCSK9 inhibitors, statins, and ezetimibe. The authors concluded that PCSK9 inhibitors were ranked as the most effective treatment for reducing cardiovascular events without increasing major safety concerns [14]. Thus, it is important to explore the optimal PCSK9 inhibitors which benefit high-risk patients the most. Former meta-analyses have evaluated the effects of different PCSK9 modulators compared to controls through direct comparisons. Most of them did not find significant differences regarding all-cause mortality [15,[32][33][34]. Our study found that evolocumab significantly reduced the risk of myocardial infarction. Similar findings have been reported in other studies [15,35,36]. Moreover, a recent study demonstrated that the combination of evolocumab and statin produced favorable changes in coronary atherosclerosis after non-ST-segment elevation myocardial infarction, consistent with stabilization or even regression [37]. Most of the previous studies in the same field were designed as direct meta-analyses, which provided only partial information in this case and therefore did not optimally inform decision making on the comparative effectiveness of different treatment agents. The present study used network analysis, which could help evaluate the comparative effectiveness of various treatment agents [35,38]. This method is useful to improve the precision of the outcome estimate and allows estimation of the comparative effectiveness of different types of PCSK9 inhibitors. Another notable finding from the meta-regression was that the risk of all-cause mortality was statistically significantly lower in patients with higher BMI. This finding suggests that these patients might be more likely to benefit from treatment with monoclonal antibodies targeting PCSK9. On the other hand, recent studies demonstrated that loss-of-function variants in PCSK9 were associated with lower LDL-C levels but with increased levels of fasting glucose concentration and an increased risk of new-onset diabetes, which resulted in serious concerns about the safety of anti-PCSK9 treatments [11][12][13]. According to our analysis, there is no significant impact of the LDL-C change induced by PCSK9 inhibitors on new-onset diabetes. Mechanism and clinical implications PCSK9 binds to LDL receptors on the hepatocyte surface and induces their degradation after internalization, resulting in reduced uptake of LDL-C by the liver and increased levels of circulating LDL-C. 
PCSK9 inhibitors exert lipid-lowering effects by decreasing plasma PCSK9, ultimately leading to a reduction in major cardiovascular events [39]. These agents not only effectively decrease levels of LDL-C, but also reduce apolipoprotein B (apoB), lipoprotein (a) [Lp(a)], and non-HDL-C levels. The lipid-lowering potential beyond LDL-C was observed both with PCSK9 monoclonal antibodies and with inclisiran [40]. Furthermore, it has been reported that more individuals with type 2 diabetes mellitus (T2DM), with and without atherogenic dyslipidemia, achieve the recommended LDL-C targets compared to those without T2DM [41,42]. Although both PCSK9 monoclonal antibodies and inclisiran upregulate LDL receptors and thereby reduce LDL-C concentrations by diminishing active PCSK9, their mechanisms of action are different. Monoclonal antibodies function extracellularly to bind and block circulating PCSK9 protein, still allowing PCSK9 to be produced intracellularly [43]. Inclisiran works intracellularly by preventing the translation of PCSK9 mRNA, thereby decreasing both intracellular and plasma PCSK9 levels [44]. A potential advantage of treatment with inclisiran is the longer duration of its lipid-lowering effect. As a result, the frequency of administration is lower than that of PCSK9 mAbs. Specifically, inclisiran requires subcutaneous injections once every six months, whereas PCSK9 mAbs should be injected once every 2-4 weeks. Different administration patterns may lead to differences in the development of adverse events, particularly injection site reactions, which should be taken into account when choosing the appropriate agent [7,45]. Recently, a rapid recommendation was published, providing a clinical practice guideline on PCSK9 inhibitors for the reduction of cardiovascular events in patients at different risks [46]. The guideline panel provided weak recommendations to add a PCSK9 inhibitor to ezetimibe for adults already taking statins at very high risk of cardiovascular events and for those at very high and high risk who are intolerant to statins. In consideration of the fact that both PCSK9 monoclonal antibodies and inclisiran enable patients to achieve the recommended LDL-C target, our study revealed that the use of alirocumab was associated with reductions in all-cause mortality and serious adverse events, and that administration of evolocumab was associated with a decreased risk of myocardial infarction. The findings of this study update current guidelines in a novel way. Strengths and limitations Given the limited evidence on the comparative effectiveness of different types of PCSK9 inhibitors for secondary prevention in patients at high risk of cardiovascular events, a Bayesian network meta-analysis was established. To determine the best approach benefiting the patients most, we used all-cause mortality within at least one year of follow-up to evaluate the efficacy, and serious adverse events to evaluate the safety. Besides, we followed the guidelines of the PRISMA-NMA statement, included explicit eligibility criteria, and performed a comprehensive search strategy. We also used GRADE to assess certainty in pooled estimates of effect and presented absolute and relative risks. Thus, our analysis is robust, extending and integrating the recent guidelines in a novel way. This study has several limitations. First, in some of the comparisons, we did find a significant difference. 
For example, alirocumab showed better efficacy in reducing all-cause mortality than evolocumab, and evolocumab was superior to alirocumab in reducing the risk of myocardial infarction. However, the results for these outcomes might be imprecise and heterogeneous because direct head-to-head studies were lacking; we have downgraded the quality of evidence for these outcomes. Second, clinical heterogeneity existed regarding the dosage and administration interval among the different treatment regimens. For example, PCSK9 monoclonal antibodies need to be administered 1-2 times per month, while inclisiran can be given only once every 6 months. Clinicians need to make comprehensive considerations in selecting the appropriate agents based on administration intervals, effects, and cost-effectiveness [47,48]. Third, in the present study, the maximum follow-up period of the included trials was 2.8 years. More trials with longer follow-up are required to examine whether the benefits of PCSK9 inhibitors will emerge over time.

Future research

The findings of the present analysis suggest that more clinical trials are needed to investigate the efficacy and safety of different types of PCSK9 inhibitors on cardiovascular outcomes. We searched the national database of clinical trials (https://www.clinicaltrials.gov/) to identify any ongoing trials. A Phase III clinical trial (NCT04790513) is currently in progress to evaluate the efficacy and safety of LIB003, evolocumab, and alirocumab in patients with cardiovascular disease. In addition, it is thought that PCSK9 inhibition provides a definite cardiovascular benefit by lowering LDL-C levels, but may increase the risk of new-onset diabetes [7]. Longer follow-up could provide much more information on the effectiveness, long-term safety, and tolerability of PCSK9 inhibitors.

Conclusions

Given that both PCSK9 monoclonal antibodies and inclisiran enable patients to achieve the recommended LDL-C target, the findings of this meta-analysis suggest that alirocumab might provide the optimal benefits regarding all-cause mortality with relatively lower SAE risks, and evolocumab might provide the optimal benefits regarding myocardial infarction, for secondary prevention in patients at high risk of cardiovascular events. In the absence of multi-arm RCTs that include treatment regimens with various agents targeting PCSK9, our exploration provides an important and useful guide to inform treatment decisions. Further head-to-head trials with longer follow-up and high methodologic quality are warranted to help inform subsequent guidelines for the management of these patients.
Wide-Range Restructuring of Intermediate Representations in Machine Translation

This paper describes a wide-range restructuring of intermediate representations in machine translation, which is necessary for bridging stylistic gaps between source and target languages and for generating natural target sentences. We propose a practical way of designing machine translation systems, based on the transfer method, that deal with wide-range restructuring. The transfer component should be divided into two separate sub-components: the wide-range restructuring sub-component and the basic transfer sub-component. The first sub-component deals specifically with global reorganization of intermediate representations in order to bridge the stylistic gaps between source and target languages, and the second performs local and straightforward processing, including lexical transfer and basic structural transfer. This approach provides us with an effective basis for improving translation quality by systematically enhancing the transfer rules without sacrificing the clarity and maintainability of the transfer component. It also guarantees that most of the translation process can be based on the augmented Context Free Grammar (CFG) formalism and the so-called compositionality principle, by which we can both systematically expand and maintain linguistic data and design the simplified process control necessary for an efficient machine translation system.

1. INTRODUCTION

Much effort has been devoted to research into, and development of, machine translation since the 1950s (Slocum 1985). However, the quality of the output sentences produced by most machine translation systems is not high enough to have any marked effect on translation productivity. A machine translation system produces a variety of expressions in the target language, including good, fair, and poor expressions. In this paper, we define these distinctions as follows. "Good" sentences can be easily understood and have high readability because of their naturalness, "fair" sentences can be understood but their readability is low, and "poor" sentences cannot be understood without referring to the source sentences. To improve the quality of translation, the following major functions should be implemented:

1. selection of equivalents for words;
2. reordering of words; and
3. improvement of sentence styles.

A machine translation system that does not have Function 3 often produces "good" output in the case of translation between languages in the same linguistic group, if Functions 1 and 2 are appropriately achieved. However, most output will be "fair" or "poor" in the case of translation between languages in different linguistic groups, because of the stylistic gaps between them. We need to enhance Function 3 as well as Functions 1 and 2 in order to change "fair" or "poor" sentences to "good" or "fair" ones. Note that in this paper, style means a preferable grammatical form that successfully conveys a correct meaning. First, let us consider how to select target language equivalents for words in the source language. An appropriate part of speech for a word is first determined by the grammatical constraints provided by the analysis grammar. Nouns and the verb in a simple sentence can then be appropriately translated according to the combinative constraints between the case frame of the verb and the semantic markers of the nouns. This is a well-known mechanism for practical semantic processing in machine translation.
However, this way of selecting equivalents has the limitation that we cannot classify verbs and nouns in sufficient detail, because of ambiguities in the definitions and usage of words. Many researchers in machine translation claim that a much more powerful semantic processing mechanism with a knowledge base is required for this purpose. This is a long-range research project in the field. Second, there seem to be few critical problems in reordering words if structural transfer is appropriately carried out. In a simple sentence, for example, we can usually reorder words correctly on the basis of the case frame of the verb. Thus, the improvement of sentence styles is one of the crucial functions required for further enhancement of the present translation quality. Even when the output is "fair" in quality, it has to be read carefully, because the readability is often low. We expect an improvement in sentence styles to result in an improvement in translation quality from "poor" or "fair" to "good." Improving sentence styles seems to be easier than selecting better equivalents for words, because there are syntactic clues to help us make the styles more natural. This paper focuses on an approach to the improvement of sentence styles by wide-range restructuring of intermediate representations. Wide-range restructuring in this paper means the global restructuring of intermediate representations, usually including the replacement of some class words (i.e., noun, adjective, verb, and adverb). Some papers have mentioned limited restructuring of intermediate representations (Bennett and Slocum 1985; Vauquois and Boitet 1985; Isabelle and Bourbeau 1985; Nagao et al. 1985; McCord 1985; Nomura et al. 1986). For example, LMT (McCord 1985) has a restructuring function after the transfer phase, to form a bridge between the basic styles of English and German. The Mu system (Nagao et al. 1985) has two specific restructuring functions, before and after the transfer phase, mainly to handle exceptional cases. However, few machine translation systems have so far had a comprehensive component for wide-range restructuring (Slocum 1985), mainly because many systems are presently designed to produce output that is at best "fair," and little effort has been devoted to obtaining "good" output or natural sentences. As a matter of fact, wide-range restructuring functions are usually scattered over the analysis, transfer, and generation phases in an ad hoc way. Because of complicated implementations, the systems come to have low maintainability and efficiency. Few papers so far have systematically discussed the crucial stylistic gaps between languages, the importance of wide-range restructuring of intermediate representations to bridge these gaps, and effective mechanisms for restructuring. A restructuring mechanism is necessary even when a machine translation system is based on the semantic-level transfer or pivot method (Carbonell et al. 1981). In this paper, we first discuss stylistic gaps between languages, and the importance of dealing with them effectively in order to generate natural target sentences. We then propose a restructuring mechanism that successfully bridges the stylistic gaps and preserves a high maintainability for the transfer phase of a machine translation system. Last, we discuss the implementation of the wide-range restructuring function.
2. STYLISTIC GAPS BETWEEN LANGUAGES

In discussions on the stylistic gaps between English and Japanese, it is often said that English is a HAVE-type or DO-type language, whereas Japanese is a BE-type or BECOME-type language. This contrast corresponds to the differences in the ways people recognize things and express their ideas about them (Nitta 1986), and hence is considered as a difference of viewpoint. Idioms and metaphors are heavily dependent upon each society and culture, and we sometimes have to reinterpret them to give an appropriate translation. Each language also has its own specific function word constructions, which are used to express specific meanings. These specific constructions cannot be directly translated into other languages. We categorize the major stylistic gaps as follows: (1) stylistic gaps in viewpoint, (2) stylistic gaps in idioms and metaphors, (3) stylistic gaps in specific constructions using function words, and (4) others. We will discuss these gaps in more detail in the case of English-to-Japanese translation by referring to examples extracted from the literature (Bekku 1979; Anzai 1983) and slightly modified for our purpose. In each example, the first sentence is the original English and the second one is an equivalent or a rough equivalent in Japanese-like English, which will help us recognize the stylistic gaps, though some of the rewritten English is not strictly acceptable.

2.1 STYLISTIC GAPS IN VIEWPOINT

The following are some examples of stylistic gaps due to differences of viewpoint.

The meaning of (1-a) is that the room contains two tables. However, an inanimate subject for the verb have is not allowed in Japanese. Japanese usually expresses the same fact as in (1-b), without using an inanimate subject. (2-a) and (2-b) show a case in which the voice is changed to avoid an inanimate subject in a Japanese sentence.

(3-b) Because insects were humming, it seemed to me it was autumn.

This case is similar to example (1-a) in that the subject of (3-a) is inanimate. An event or action, which may be the subject of an English sentence, is usually treated as a cause or reason in a Japanese sentence.

3. Inanimate noun + allow + noun + to-infinitive

(4-a) The support allows you to write IPL procedures.
(4-b) You can write IPL procedures by using the support.

In this case, we can consider the subject as a tool or method, as explicitly rewritten in (4-b).

4. Have + adjective + noun (two-place predicate)

(5-a) The routine has a relatively low usage rate.
(5-b) The usage rate of the routine is relatively low.

The adjective low in the noun phrase low usage rate in (5-a) is removed and used in predicative form in (5-b). Japanese often prefers predicative expressions like this.

5. Adjective + verbal noun + of + noun

(6-a) He is a good speaker of English.

The noun phrase a good speaker is rewritten to form a predicative phrase.

6. Special verb (do, make, perform, etc.) + adjective + verbal noun + of + noun

(7-a) The DOS/VSE SCP is designed to make efficient use of a hardware system.
(7-b) The DOS/VSE SCP is designed to use a hardware system efficiently.

This is also a case in which Japanese prefers a predicative phrase.

This is a case in which a special determiner should be removed from the noun phrase and rewritten as an adverb. If we translate these English sentences literally into Japanese, we will have low readability for the Japanese sentences.
2.2 STYLISTIC GAPS IN IDIOMS AND METAPHORS

Idioms and metaphors should be distinguished from other phrases in a text, because they have implicit and fixed meanings. The following are some examples:

(9-a) A car drinks gasoline.
(9-b) A car requires a lot of gasoline.

(11-a) He burned his bridges.
(11-b) He destroyed his alternative options.

In case (9-a), we will face difficulty in semantic processing if we try to translate the sentence directly. The reason is that the verb drink usually requires an animate subject, whereas the noun car is, in most cases, classified as an inanimate thing. We can translate example (10-a) literally if we want to preserve the humor conveyed by the original sentence. However, this is not often possible because of cultural differences. Example (11-a) is a typical case of something that we cannot translate literally into Japanese.

2.3 STYLISTIC GAPS IN SPECIAL FUNCTION WORD CONSTRUCTIONS

The following are some examples of stylistic gaps due to special English constructions using function words.

(12-a) It is required that you specify the assignment.
(12-b) That you specify the assignment is required.

(13-a) The system operation is so impaired that the IPL procedure has to be repeated.
(13-b1) Because the system operation is impaired very much, the IPL procedure has to be repeated.
(13-b2) The system operation is impaired to the extent that the IPL procedure has to be repeated.

(14-a) The box is too heavy for a child to carry.
(14-b1) Because the box is very heavy, a child cannot carry it.
(14-b2) The box is very heavy to the extent that a child cannot carry it.

There is no direct way to translate the examples given above, because the grammatical functions conveyed by the special constructions using function words are often expressed in a very different way in a target language.

2.4 OTHERS

In addition to the stylistic gaps described above, we often see other stylistic gaps based on the meaning of a word. For example, the verb bridge in English should be translated by "hashi (a noun meaning a bridge) wo (a case particle marking an object) kakeru (a verb meaning 'install')" in Japanese. One English verb corresponds to a noun, a case particle, and a verb in this case. A set of consecutive words may have a fixed meaning: for example, a number of can be considered as many in most cases. Differences in tense, aspect, and modality are also related to stylistic gaps between languages, although we do not discuss these in detail here. So far, we have discussed four types of stylistic gaps. Generally, there are larger stylistic gaps between languages belonging to different groups than between languages in the same group. It is clear that if we can deal adequately with these stylistic gaps, we can further improve translation quality.

3. HOW TO DEAL WITH STYLISTIC GAPS

This section discusses a framework for dealing with the stylistic gaps we noted in the previous section.

3.1 THE COMPOSITIONALITY PRINCIPLE IN MACHINE TRANSLATION

Most machine translation systems (Slocum 1985) aiming at practical use employ the transfer method, which divides the whole process into three phases: analysis of the source language, transfer between intermediate representations, and generation of the target language. The basic concept underlying current machine translation technology is the compositionality principle (Nagao 1986).
The original idea of the principle is that the meaning of a sentence can be assembled from the meaning of each of its constituents and, moreover, that the assembling process can be implemented by assembling the forms or syntax that convey the meanings. Montague grammar is one of the theoretical bases of the principle, and some work applying Montague grammar to machine translation has been reported (Landsbergen 1982; Nishida and Doshita 1983). If the compositionality principle is applied to machine translation, we expect that it will be possible to translate a whole sentence by translating each word individually and then appropriately composing all the translated words. For example, let us consider the following two sentences, which have the same meaning. Let us assume that the above sentences have the syntactic structures shown in Figure 1(a) and (b), based on the grammars shown in (c) and (d), respectively. Note that the parentheses on the right-hand side of the last rule in (d) denote a condition that must be met in applying the rule. These structures are also considered to be the intermediate structures (i.e., the source structure and target structure) in the transfer phase of machine translation. The ideal machine translation based on the compositionality principle ensures that structure (a) is successfully transferred to structure (b) by applying the transfer rules, as shown in Figure 2. In Figure 2, the transfer rules are symbolized for convenience. The left-hand sides of the rules consist of matching patterns that correspond to the grammar of the English sentence, and the right-hand sides consist of target patterns that correspond to the grammar of the target Japanese sentence. The steps of the transfer process using these transfer rules are shown in Figure 3. This transfer process is done entirely in a bottom-up and left-to-right manner by using the transfer rules, and is based on the compositionality principle. The process is simple, easy to control, and easy to implement efficiently.

Let us consider the stylistic gaps mentioned in the previous section. To bridge such gaps, we need to replace some words with new words and perform restructuring widely. It is important to recognize that the words involved in the replacement are class words (i.e., noun, adjective, verb, and adverb) rather than function words (i.e., preposition, auxiliary verb, conjunction, relative pronoun, particle, etc.), as shown in the examples of types 2.1 and 2.2. For example, in cases (6-a) and (6-b), good is replaced by well and speaker is replaced by speaks. In cases (10-a) and (10-b), "are time bombs" is replaced by "gradually harm us." On the other hand, most words involved in the replacement are function words in the examples of type 2.3. For example, in cases (13-a) and (13-b1), so and that are replaced by because and very much. The above-mentioned framework based on the compositionality principle cannot provide appropriate treatment for stylistic gaps of types 2.1 and 2.2, because wide-range structure handling, as well as the replacement of some class words, is necessary instead of the local and bottom-up structure handling that includes some treatment of function words. For type 2.3, the above framework does not suit the treatment of the gaps if the transfer is done at the analysis-tree level and some function words exist in the source structure for the transfer.
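To make the bottom-up, left-to-right transfer process of Figures 2 and 3 concrete, the following sketch applies transfer rules to a nested-tuple tree. The tree encoding, the single structural rule, and the toy lexicon are illustrative inventions, not the paper's actual rule notation; the original system was written in LISP, but Python is used here for compactness.

```python
# Minimal sketch of bottom-up, rule-driven structural transfer.
# Trees are nested tuples (label, child, ...); leaves are plain strings.
from typing import Union

Tree = Union[str, tuple]

LEXICON = {"table": "teeburu", "room": "heya"}   # hypothetical lexical transfer

def transfer(tree: Tree, rules) -> Tree:
    if isinstance(tree, str):                    # leaf: lexical transfer
        return LEXICON.get(tree, tree)
    # Bottom-up: transfer the daughters first, left to right
    children = tuple(transfer(c, rules) for c in tree[1:])
    node = (tree[0],) + children
    for match, build in rules:                   # first applicable rule wins
        if match(node):
            return build(node)
    return node

# One illustrative structural rule: reorder the daughters of a two-daughter NP
# (purely for demonstration of a matching pattern / target pattern pair).
rules = [
    (lambda n: n[0] == "NP" and len(n) == 3,
     lambda n: ("NP", n[2], n[1])),
]

print(transfer(("NP", ("DET", "the"), ("N", "table")), rules))
# -> ('NP', ('N', 'teeburu'), ('DET', 'the'))
```

Because each rule touches only one local node after its daughters are already transferred, the control flow stays the simple single pass described above; this is exactly the property that wide-range restructuring breaks.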
It is not difficult to handle gaps of type 2.4, except for those caused by tense, aspect, and modality, because they can be bridged only by local treatment of constituents instead of wide-range restructuring. For example, if a system finds the consecutive words a number of in a sentence, the system can exceptionally treat it as one word meaning many in the previous framework. It is normally translated by replacing it with a target-language equivalent.

3.2 TWO-STEP TRANSFER METHOD

To deal with stylistic gaps effectively in a system based on the transfer method, we propose the incorporation of a specific sub-component for wide-range restructuring of the intermediate structures in the transfer component, as shown in Figure 4. This gives an example of a system configuration for English-to-Japanese machine translation. The basic transfer consists of lexical transfer and reordering of words. The wide-range restructuring should be done after analysis of the input sentence and before the basic transfer. We take advantage of syntactic clues given in the intermediate representation for effective restructuring after analysis of the input. The wide-range restructuring, which changes the global structure as well as some class words of the sentence, should be performed not after but before the basic transfer, for the following reasons:

1. The restructuring makes the basic transfer easier and it also reduces the transfer rules, because it often contributes to standardization or limitation of English sentence styles, as discussed in more detail in Section 3.3.
2. The restructuring is not affected by transfer errors, which often occur because of the complexity of the transfer process.

By means of this restructuring sub-component, the intermediate representation of the input sentence is transformed or reinterpreted from a source-dependent expression into a target-dependent one. We can define augmented CFGs for analysis and generation in this framework. If we specify the rules and the control of the wide-range restructuring sub-component appropriately, the output structures of both this sub-component and the basic transfer sub-component can be defined by using augmented CFGs that deal with conditions for rule applications. In other words, the basic transfer sub-component can specialize in transfer from one augmented CFG system to another, as illustrated in Figure 2. Because wide-range restructuring, which does not suit the compositionality principle, can be performed entirely in the restructuring sub-component, and because the basic transfer can be simplified and specialized in local and bottom-up treatments of structures based on the augmented CFG formalism, as mentioned above, all the processes of machine translation except wide-range restructuring can be based on the augmented CFG formalism or the compositionality principle. This approach makes the whole system simple, easy to control, and efficient. If a machine translation system uses analysis-tree structures as intermediate structures (Lehmann et al. 1981; Nitta et al. 1982), wide-range restructuring can be introduced appropriately at the surface level. If the system performs deep analysis of the input sentence and creates a semantic representation such as a frame-like structure or a semantic network as an intermediate representation, wide-range restructuring may be required at the deep level. This is true whenever we handle stylistic gaps of types 2.1 and 2.2.
However, gaps of type 2.3 can be handled by analysis, and no wide-range restructuring is required in a system that performs deep analysis of the input sentence.

3.3 ADVANTAGES OF THE TWO-STEP TRANSFER METHOD OVER THE SINGLE-STEP TRANSFER METHOD

Let us discuss the advantage of this approach over the conventional single-step transfer method from the standpoint of maintainability. Technical documents contain many variants of sentence patterns. The examples in Section 2 are regarded as variants from the viewpoint of English-to-Japanese translation. As a matter of fact, several different English sentences in an English technical document can often be translated by the same Japanese sentence. In other words, a wide variety of expression in English can be reduced to some extent in Japanese, because the most important concern in technical documents is that each sentence should convey technical information correctly. Therefore, we may standardize or control the styles of English sentences for the sake of English-to-Japanese translation. The two-step transfer method including wide-range restructuring is an appropriate way to take advantage of this phenomenon. If we encounter a new variant of a sentence pattern in English, we only have to write an English restructuring rule in the case of the two-step transfer method. On the other hand, a whole transfer rule, which is usually harder to write, is needed in the single-step transfer method. If we want to modify a target Japanese sentence that corresponds to some English sentences, we only have to modify the corresponding basic transfer rule, instead of modifying all the transfer rules for these English sentences. Consequently, it is easier to maintain the transfer rules if the system is based on the two-step transfer method, especially in translating technical documents.

4. IMPLEMENTATION OF THE WIDE-RANGE RESTRUCTURING FUNCTION

A prototype English-to-Japanese machine translation system, SHALT (Tsutsumi 1986), is based on the two-step transfer method described in Section 3.2. So far we have developed about 500 wide-range restructuring rules to cope with the stylistic gaps exemplified in Section 2, and we have confirmed the effectiveness of the restructuring through test translation of a few IBM computer manuals. In this section, we discuss the details of the rules for the wide-range restructuring and their applications in SHALT, as an example. SHALT is implemented in LISP, and the English and Japanese intermediate representations are syntactic-analysis tree structures.

4.1 WIDE-RANGE RESTRUCTURING RULES AND THEIR APPLICATIONS

A wide-range restructuring rule consists of a pair of a matching pattern and a target pattern. If an input English tree structure matches a matching pattern, then a target Japanese-like English tree structure is generated according to the specifications in a target pattern. A matching pattern is defined as follows. Note that * allows repetition of specifications. STRUCTURE specifies the tree structure to be checked. If 0 is specified, the whole input structure is treated. If a MATCHING-VARIABLE is specified, its value (i.e., part of a structure), which has already been set by MATCHING-ELEMENTs in an earlier matching process, is the target for checking. A sequence of MATCHING-ELEMENTs checks a sequence of daughter tree structures. If the specified MATCHING-CONDITIONs match a structure, the specified MATCHING-VARIABLE is set to the structure. If a MATCHING-ELEMENT is a mere MATCHING-VARIABLE, any structure or nil can be set for the variable. MATCHING-CONDITION specifies a LISP function and its arguments. LISP functions check parts of speech, terminal symbols, or other information of a structure. All specifications in a matching pattern form AND conditions, except the arguments of LISP functions, which form OR conditions. A target pattern specifies the required output structure by using MATCHING-VARIABLEs where structures are already set and by adding new structures.

Figure 5 shows an example of a wide-range restructuring rule and its application. Figure 5(a) shows the output of English analysis, which is the input for wide-range restructuring. Figure 5(b) shows a wide-range restructuring rule and (c) gives the output of the restructuring. In Figure 5(b), the left-hand side of the restructuring rule is a matching pattern, and the right-hand side of the rule is a target pattern. Numbers preceded by *, such as *1, *2, and *3, denote MATCHING-VARIABLEs. There are four specifications of (STRUCTURE-(MATCHING-ELEMENT*)) in the matching pattern, such as (0-(*1 (*2 (T "it")) *3 ... *7)) and (*6-(*8 (*9 (P ADJ)) ... *11)). A MATCHING-CONDITION (T "it") in the matching pattern denotes that the terminal symbol of the tree should be "it." (P ADJ) denotes that the part of speech of the root node should be ADJ (i.e., adjective). T and P are the LISP function names used to perform these specific checks.
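The rule formalism just described can be transcribed roughly as follows. The helper names T and P mirror the LISP checks mentioned above, while the pattern encoding and the "it is ADJ that S" example (cf. example (12-a)) are simplified stand-ins for illustration, not SHALT's actual implementation.

```python
# Rough Python transcription of the matching-pattern / target-pattern formalism.
# Nodes are tuples (label, payload-or-children...); leaves are (label, string).

def T(node, *symbols):                # terminal-symbol check, OR over arguments
    return isinstance(node, tuple) and len(node) == 2 and node[1] in symbols

def P(node, *parts_of_speech):        # part-of-speech check on the root label
    return isinstance(node, tuple) and node[0] in parts_of_speech

def match_sequence(children, elements, bindings):
    """Match daughter subtrees against (variable, condition) MATCHING-ELEMENTs.
    A condition of None means the variable matches any subtree (AND semantics
    across elements, as in the formalism above)."""
    if len(children) != len(elements):
        return None
    for child, (var, cond) in zip(children, elements):
        if cond is not None and not cond(child):
            return None
        bindings[var] = child          # bind the MATCHING-VARIABLE
    return bindings

# Matching an "it is ADJ that S" structure and promoting the clause to subject,
# i.e. restructuring in the direction of (12-a) -> (12-b).
tree = ("S", ("PRON", "it"), ("V", "is"), ("ADJ", "required"), ("SBAR", "that-clause"))
pattern = [("*1", lambda n: T(n, "it")),
           ("*2", lambda n: T(n, "is")),
           ("*3", lambda n: P(n, "ADJ")),
           ("*4", None)]
bindings = match_sequence(tree[1:], pattern, {})
if bindings:
    restructured = ("S", bindings["*4"], ("V", "is"), bindings["*3"])
    print(restructured)
    # -> ('S', ('SBAR', 'that-clause'), ('V', 'is'), ('ADJ', 'required'))
```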
4.2 FURTHER DISCUSSIONS ON IMPLEMENTATION

Let us discuss example (5-a) in Section 2. The wide-range restructuring rule is as follows. If the main verb is have, the head noun (rate) of the object is a two-place predicate, and there is an adjective (low) that modifies the head noun, then restructure it as shown in (5-b). If the head noun of the object is classified as "ATTRIBUTE," the restructuring is obligatory. Otherwise, translation without this restructuring is not very good, but acceptable. The restructuring rule for case (1-a) in Section 2 is slightly more complicated, because we need richer information, such as (room contains table), so as to restructure (1-a) into the form 'NP1 be in NP2.' If the input is The table has four legs, it will be restructured differently into the form Four legs exist for the table because of the above constraint. This is not standard English, but it is very similar in form to Japanese. We have not yet implemented a way of using semantic information, such as (room contains table), as a constraint, because the desired restructuring can be done on the basis of syntactic restrictions in the field of IBM computer manuals. However, the approach proposed in this paper can be augmented to handle semantic information without any crucial problems, if we prepare a knowledge base.

5. CONCLUSIONS

In this paper, we discuss the importance of treating stylistic gaps between languages and methods of doing so. A comprehensive wide-range restructuring that can cope with stylistic gaps is indispensable for improving the quality of translation, especially between languages from different linguistic groups, such as English and Japanese. We propose a practical way of designing machine translation systems. The transfer component should be divided into two separate sub-components: the wide-range restructuring sub-component and the basic transfer sub-component. Because the first of these deals with global reorganization of the intermediate representations, usually including the replacement of some class words, the second only has to do local, straightforward processing.
This approach makes the transfer component much clearer and more maintainable than the conventional single-step transfer method. It also guarantees that, except for the wide-range restructuring sub-component, all of the translation process can be based on the augmented CFG formalism and the compositionality principle. The ease of controlling the process makes the system efficient, which is crucial for the development of a practical machine translation system. As a future direction, it will be necessary for us to pursue a thorough contrastive study of several languages, in terms of semantics as well as syntax. This will enable us to build more effective rules for restructuring that will further improve the quality of machine translation.
Polychromatic polarization: Boosting the capabilities of the good old petrographic microscope

Polychromatic polarizing microscopy (PPM) is a new optical technique that allows for the inspection of materials with low birefringence, which produces retardance between 1 nm and 300 nm. In this region, where minerals display interference colors in the near-black to gray scale and where observations by conventional microscopy are limited or hampered, PPM produces a full-spectrum color palette in which the hue depends on the orientation of the slow axis. We applied PPM to ordinary 30 µm rock thin sections, with particular interest in the subtle birefringence of garnet due both to non-isotropic growth and to strain induced by external stresses or inclusions. PPM produces striking, colorful images that highlight various types of microstructures that are virtually undetectable by conventional polarizing microscopy. PPM opens new avenues for microstructural analysis of geological materials. The direct detection and imaging of microstructures will provide a fast, non-destructive, and inexpensive alternative (or complement) to time-consuming and more costly scanning electron microscope-based analyses such as electron backscatter diffraction. This powerful imaging method provides a quick and better texturally constrained basis for locating targets for cutting-edge applications such as focused ion beam-transmission electron microscopy or atom probe tomography.

INTRODUCTION

The polarizing microscope is the fundamental analytical tool for any first characterization of geological materials. Invented almost two centuries ago (Davidson, 2010), its fundamental structure and analytical capabilities have not changed much since. Due to such long, established use, geologists may not realize that polarizing microscopy suffers from one major limitation, namely its poor capability to resolve microstructures where minerals have low birefringence (<0.010) and display interference colors in the gray scale. This corresponds to the range of retardance from 0 to ∼300 nm. Many important rock-forming minerals (e.g., feldspars, silica polymorphs, hydrogarnets, andradite, leucite, and apatite) possess such low birefringence intrinsically (Deer et al., 2013). Others, originally cubic, may acquire subtle anisotropy and birefringence due to imposed deformation. One classic example is birefringence haloes around inclusions trapped in diamond (Howell et al., 2010) or garnet (Campomenosi et al., 2020). Techniques to highlight the optical effects of very low birefringence materials include using compensators such as the Bräce-Köhler or the lambda plate. As the retardance is a function of thickness, another way to enhance birefringence effects is to make thicker sections. Although in some cases thick (e.g., 100 µm) sections provide satisfactory results (Cesare et al., 2019), they are seldom used because of the great drop in transparency and sharpness of the subjects imaged. The above shortcomings of conventional optical microscopy, until now accepted as intrinsic and unsolvable, are overcome by a technique called polychromatic polarizing microscopy (PPM). We show how PPM, implemented as a complementary accessory on petrographic microscopes, allows unprecedented imaging of microstructures in very low birefringence minerals using standard 30 µm thin sections.
PPM not only produces a full palette of colors in areas normally dominated by gray, but it also allows fast, qualitative determination of crystallographic and strain orientations.

POLYCHROMATIC POLARIZING MICROSCOPY

PPM was first presented as a means of imaging biological objects with birefringence down to a few nanometers. Details of various optical setups and theoretical bases of PPM are given by Shribak (2015, 2017) and provided in the Supplemental Material. The polychromatic polscope is very similar to the petrographic microscope, but the polarizer is accompanied by a special spectral polarization state generator, and the analyzer is accompanied by an achromatic quarter-wave plate, which forms the circular analyzer (Shribak, 1986). This setup makes it easy to switch from PPM to plane-polarized light (PPL) to cross-polarized light (XPL) visualization. Even at very low retardance, PPM produces the full hue-saturation-brightness (HSB) color spectrum in birefringent materials. Unlike in conventional polarizing microscopy, where interference colors are a measure of the retardance according to the Michel-Levy chart, HSB hues in PPM depend on the orientation of the slow vibration direction with respect to a preset zero direction (conventionally oriented E-W). This implies that hues change continuously during the rotation of the stage, repeating every 180°, and that extinction is never observed. In some respects, PPM recalls the "fabric analyzer" (Wilson et al., 2003), which also uses the orientation of the optical indicatrix to find crystal orientation, but requires an ad hoc instrument and has never been used on minerals with birefringence of <0.009. The functioning peculiarities and potentials of PPM are illustrated by the close-up views of the central part of a frustule of the diatom Arachnoidiscus spp. (Fig. 1). These siliceous skeletons are normally black to dark gray under XPL (Fig. 1B). PPM (Fig. 1C) uses colors to visualize the structure of the frustule. By taking a complementary image with the PPM rotated to 90° (Fig. 1D), a differential image can be computed (Fig. 1E), which suppresses the imperfections and provides better measurements of hues. The range and geometric distribution of hues in the inner costae of the diatom are virtually identical to those of the color scheme in Figure 1F, which displays the experimentally calibrated variation of hue as a function of the angle between the slow direction of the birefringent material and the zero position of PPM (E-W in Fig. 1F). Through such analogy, PPM allows us to infer that (1) each of the 28 inner costae (Ross and Sims, 1972) radiating around a central ring of the diatom is a single crystal; (2) costae may be crystalline and not amorphous as commonly thought (e.g., Javaheri et al., 2015; Aitken et al., 2016); (3) they are crystallographically oriented following a radial pattern; and (4) they are length-slow, as the slow vibration direction is parallel to the elongation of each costa (Fig. 1E). Obtaining such a wealth of information on the basis of two optical images alone highlights the exceptional added value of PPM as a novel imaging technique over conventional polarized microscopy.
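For readers who want to reproduce the differential-image step numerically, a minimal sketch follows. The file names, the plain RGB subtraction, and the linear hue-to-azimuth mapping are all assumptions for illustration; a real calibration would rely on a reference target such as the wedge in Figure 1F.

```python
import numpy as np
from PIL import Image
from matplotlib.colors import rgb_to_hsv

# Load the two PPM frames (0 deg and 90 deg settings); file names are placeholders.
img0 = np.asarray(Image.open("ppm_0deg.png").convert("RGB"), dtype=np.float32)
img90 = np.asarray(Image.open("ppm_90deg.png").convert("RGB"), dtype=np.float32)

# Differential image: signed difference re-centered on mid-gray, which cancels
# imperfections common to both frames (cf. Fig. 1E).
diff = np.clip((img0 - img90) / 2.0 + 127.5, 0.0, 255.0).astype(np.uint8)
Image.fromarray(diff).save("ppm_differential.png")

# Read orientation from hue: hue in [0, 1) is mapped to a slow-axis azimuth in
# [0, 180) degrees. A linear mapping is assumed here; the true hue-to-angle
# relation must come from the calibrated color scheme.
hsv = rgb_to_hsv(diff.astype(np.float32) / 255.0)
azimuth_deg = hsv[..., 0] * 180.0
```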
NON-ISOTROPIC GARNET GROWTH

We tested PPM on common (Fe-Mg-Ca-Mn) tetragonal garnets, which were recently shown to be more common than previously thought (Cesare et al., 2019). They are easily overlooked due to their very low birefringence, except when observed in thick (≥100 µm) sections. On regular 30 µm thin sections of pelites from the eastern Alps, PPM produces striking differential images in areas where XPL would show nothing but apparent isotropy (Figs. 2A-2C). The garnet is unexpectedly optically anisotropic and shows beautiful optical sector zoning, with pairs of opposed sectors characterized by similar hue and therefore by similar optical orientation. It should be kept in mind that PPM is a semiquantitative technique, as the direction of the slow vibration axis provided by the hue in PPM images is not absolute but is the direction of the projection of the axis onto the plane of the thin section, regardless of the actual dip angle of the axis. The boundaries between adjacent sectors range from sharp and straight to diffuse or interpenetrating. Owing to the 30 µm thickness of the samples, these microstructures are much better resolved than when thicker sections are used. PPM was applied (Fig. 2D) to an eclogite from Port Macquarie, Australia (Tamblyn et al., 2019, 2020), that contains garnet compositions in the range expected by Cesare et al. (2019) for non-isotropic garnets. PPM reveals that the garnet crystals are birefringent and beautifully sector-zoned, where different zones may represent growth twins. Being the only technique by which these microstructures can be visualized on regular thin sections, PPM would be a key tool for precisely locating the targets for a focused ion beam (FIB)-based, high-resolution transmission electron microscopy study of possible twinning and its origin. The differential PPM images of Figure 2 also demonstrate, as one would expect, that the optical orientation of sectors is not random. In fact, the fast vibration direction is normal to the growing faces (Fig. 2D). Optical anisotropy of garnet is also manifested as striped (or mottled) areas of crystals, sometimes within each sector, which is therefore not a coherent optical entity. The parallel stripes or bands can be as thin as a few tens of micrometers and often form intersecting sets characterized by two prevailing optical orientations (Fig. 2E). Striped birefringence patterns are well developed (Figs. 2F and 2G) in another eclogite sample from the central Tauern Window, Austria (Warren et al., 2012), where, conversely, sector zoning is not evident. Analysis of PPM hue distribution in one of these zones (Fig. 2G) shows that the optic axes of stripes are systematically arranged along two orientations at a high angle to each other.
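The hue-distribution analysis underlying this statement can be emulated with a few lines of code. The snippet below assumes the azimuth_deg array from the previous sketch and an arbitrary bin width; in practice, pixels with negligible saturation should be masked first, since their hue is meaningless.

```python
import numpy as np
import matplotlib.pyplot as plt

# Histogram the per-pixel azimuth estimate over the 0-180 degree range.
counts, edges = np.histogram(azimuth_deg.ravel(), bins=36, range=(0.0, 180.0))
centers = 0.5 * (edges[:-1] + edges[1:])

plt.bar(centers, counts, width=edges[1] - edges[0])
plt.xlabel("inferred slow-axis azimuth (deg)")
plt.ylabel("pixel count")
plt.title("Hue-derived orientation distribution (illustrative)")
plt.show()
# Two well-separated peaks in this histogram would correspond to the two
# intersecting stripe sets of Fig. 2G.
```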
STRESS-INDUCED BIREFRINGENCE IN GARNET

Another application of PPM concerns the very low birefringence induced by non-isotropic stress fields applied to optically isotropic crystals. Such stress fields may have led to permanent deformation (i.e., plasticity) in the past, or may be the signal of an elastic residual stress that is still acting on a crystal at room conditions, for example around inclusions. Garnet may display crystal plasticity when deformed at high temperature (Prior et al., 2000) and/or high differential stress (Austrheim et al., 2017). Distortion of the crystal lattice may determine an optical anisotropy that is seldom detectable under conventional XPL. Using PPM, we imaged (Figs. 3A-3E) a felsic mylonite from the Musgrave Ranges, central Australia, where garnet underwent both brittle and plastic deformation as determined by electron backscatter diffraction (EBSD) and focused ion beam-transmission electron microscope investigation (Hawemann et al., 2019). The PPM images demonstrate that the complex internal deformation of garnet can also be rendered optically. With diffuse microfracturing along subparallel sets, the variations of hue in the garnet portions bounded by fractures indicate that the entire crystal is affected by crystal plasticity (Fig. 3A). The distribution of hue in the close-up images (Figs. 3B-3D) suggests the presence of a patterned optical orientation of deformed garnet structures. Furthermore, analysis of hue distribution with image processing software like ImageJ (Schneider et al., 2012) highlights the presence of subdomains with sizes in the range of 2-5 µm that display distinct optical orientation with respect to surrounding domains (Fig. 3E). This texture is strikingly similar to that of the subgrains of garnet imaged by EBSD by Austrheim et al. (2017). At present, it is unclear whether the hue distribution in these differential PPM images at high magnification may be affected by optical artifacts and therefore does not correspond to real optical inhomogeneities. Should it, conversely, be verified by further investigation, the possibility of imaging subgrains in garnet optically by PPM would represent a major breakthrough for microstructural analysis. Lattice strain and anomalous birefringence in optically isotropic minerals is also induced by inclusions trapped at high pressure-temperature during metamorphism, due to the contrast of their thermoelastic properties with those of the surrounding host (e.g., Howell et al., 2010). The residual stresses and strains recorded by host-inclusion systems are central to elastic thermobarometry (e.g., Bonazzi et al., 2019). With cooling and exhumation, inclusions typically induce "birefringence halos" in the surrounding garnet host at ambient pressure-temperature conditions. Although these halos are visible under XPL in the form of a cross-shaped extinction pattern (Fig. 3F; Campomenosi et al., 2020), their imaging by PPM reveals important additional features. First of all, the black cross, unavoidable but also of little utility, is not present anymore. Conversely, the entire spectrum of hues covering all possible slow axis orientations is observed around inclusions (Figs. 3G-3I). This provides, without the need to rotate the stage and sample, immediate visualization of the continuous change of orientation of optic axes in the garnet that is now locally anisotropic. Such qualitative visualization can be refined by inspection of the hue distribution, by which, for example, departures from a purely radial pattern due to the shape and intrinsic anisotropy of mineral inclusions can be detected and analyzed (Fig. S1 in the Supplemental Material).

PERSPECTIVES

Although we focused our attention on garnet, the applications of PPM in the geosciences extend to all materials that possess very low retardance. For example, the unprecedented visualization of microstructures of Arachnoidiscus spp. (Fig. 1) demonstrates the potential that PPM has for the study of siliceous microfossils. The best applications of PPM are probably yet to be discovered. We foresee that PPM will become a fundamental tool for studying microstructures in crystals that, from the viewpoint of the optical properties, are nominally isotropic or intrinsically birefringent (Burnett et al., 2001) or show low birefringence, such as, for example, leucite, feldspars, and silica polymorphs (Fig. 4).
The imaging capabilities of PPM, especially in differential mode, disclose a wealth of microstructural features that until now were optically inaccessible. PPM can be accomplished using a standard petrographic microscope and conventional thin sections, thus making PPM a fast and extremely cost-effective technique. Among the newest and most important utilities of PPM is the easy identification of the optical orientation of a (portion of a) crystal by means of its hue, such as, for example, the chalcedony fibers in Figures 4C and 4D. Although this is a semiquantitative method that cannot replace quantitative approaches such as the universal stage or EBSD, the applications on garnet described above show that PPM is the perfect tool, and far more precise than conventional polarizing microscopy, for identifying and localizing targets for advanced analyses like TEM or atom probe tomography.
The influence of bamboo-packed configuration to mixing characteristics in a fixed-bed reactor

Fixed-bed reactors are commonly used as bioreactors for various applications, including chemicals production and organic wastewater treatment. Bioreactors are fitted with packing materials for attaching microorganisms. Packing materials should have a high surface area and enable sufficient fluid flow in the reactor. Natural materials, e.g. rocks and fibres, are often used as packing materials. Commercially, packing materials are also produced from polymers, with the advantage of customizable shapes. The objective of this research was to study the mixing pattern in a packed-bed reactor using bamboo as the packing material. Bamboo was selected for its pipe-like and porous form, as well as its abundant availability in Indonesia. The cut bamboo sticks were installed in a reactor in different configurations, namely vertical, horizontal, and random. Textile dye was used as a tracer. Our results show that the vertical configuration gave the least resistance to liquid flow, yet the random configuration was the best configuration during the mixing process.

Introduction

Reactor type and design are determining factors in chemical and biological processes. Fixed-bed reactors are high-performance reactors where the reactions occur on the packing (support) material. Fixed-bed reactors are commonly used as bioreactors for various applications, including chemicals production and organic wastewater treatment [1]. Fixed-bed reactors use materials such as plastics, rocks, or ceramics as the support material and locus of microbial attachment [2]. The performance of a fixed-bed reactor depends on the availability of surface area; the higher the surface area, the more microorganisms can attach and form a biofilm. Even though a higher surface area can be achieved by reducing the size of the packing materials, smaller materials can increase the risk of pressure drop in the reactor. The ideal packing material should have enough surface area, a high porosity level, be stable at various pH conditions, not be easily degraded or oxidized, and not be toxic [3]. On the other hand, the support materials should be easy to process, low cost, and preferably available as local material. Toda et al. [4] compared vertical and horizontal configurations of rectangular tapered packed-bed reactors. They found that the horizontal packed-bed reactor produced a higher yield of alcohol from glucose fermentation compared with the vertical packed-bed reactor. The horizontal packed-bed reactor showed high sedimentation and accumulation of free cells, which did not occur in the vertical packed-bed reactor. Reactor performance is also influenced by the fluid flow inside the reactor. In an up-flow fixed-bed reactor, the fluid flows from the bottom to the outlet at the upper part of the reactor. This reactor type is often used to reduce the high concentration of organic content in agro-industrial waste [5]. This wastewater must be treated properly before being discharged into the environment. Bamboo grows easily and abundantly in Indonesia. The tubular form and the structural strength of bamboo make it an alternative packed-bed material. Bamboo has been used as packing material for treating domestic, leather industry, milk processing, and tapioca wastewater [6-10]. Organic matter removal on bamboo media takes place through two mechanisms: absorption and decomposition by microbes [10,11].
Both of these mechanisms are supported by the specific surface area of bamboo, which can reach 2100 m²/m³, comparable to some synthetic media of polypropylene and polyethylene [7,10]. As packing material, bamboo has the advantage of high lignocellulosic content. Experiments with various acids, alkalis, and solvents showed that bamboo fibre was insoluble in 10% sulphuric acid, 15% hydrochloric acid, 17% nitric acid, 5% sodium hydroxide, ethyl acetate, carbon tetrachloride, and 13% sodium hypochlorite [12]. For application in bioreactors, such extreme conditions rarely occur. A study of the degradation of bamboo as filter material in an anaerobic process treating wastewater from cassava starch production showed no degradation of the bamboo pieces at the end of the experiment; the remaining lignin and cellulose should not be subjected to further degradation [13]. The flow inside a fixed-bed reactor is often assumed to be homogeneous. However, the structure and configuration of the packing materials have significant influences on the flow [14]. For bamboo, this includes the hollow cylinder dimension and shape that influence the pressure inside the reactor. The configuration of cut bamboo as a support material must be designed to minimize disturbance of the fluid flow while still enabling an efficient mixing process. In this research, the fluid flow and mixing process in a lab-scale fixed-bed reactor were investigated. Three bamboo configurations, namely random, vertical, and horizontal, were compared using water as the working fluid. Textile dye was used as the tracer.

Reactor preparation

To enable observation of fluid flow in the reactor, the experiments were performed in a transparent acrylic reactor with 18.5 cm diameter and 50 cm height (figure 1(a)). A plate was placed at 10 cm from the bottom to support the bamboo. The empty volume of the reactor (without bamboo) was 12.6 L, while the volume between the bottom and top plates was 6.6 L. Bamboo with a diameter of 3-4.5 cm was obtained from a local vendor. The bamboo pieces were cut into 4.5 cm lengths. Bamboo pieces were placed in the reactor in random, vertical, or horizontal configuration (figures 1(b)-(d)). The numbers of pieces were 84 for random, 105 for vertical, and 86 for horizontal configurations. The bamboo was soaked for seven days before being used for the experiment. Compared with the bamboo before soaking, the soaked bamboo absorbed 0.9 L of water during the soaking process. Water was used as the working fluid for the experiment. Red textile dye at a concentration of 0.005% (w/v) was used as the tracer.

Experimental setup

The experimental setup is presented in figure 2. The reactor was first filled with water without tracer. Colored water as the tracer was flowed into the feeding tank and let overflow to keep the water level in the feeding tank constant. The colored water subsequently flowed by gravity from the feeding tank to the reactor and mixed with the water inside the reactor. The tracer needed about 60 minutes to replace the water in the reactor until they were completely mixed.

Analysis

The measured parameters during this experiment were flow rate and tracer concentration. The input flow rate was constant due to the use of a pump. The output flow rate from the reactor was calculated by measuring the accumulated volume of fluid at the outlet with a measuring cylinder. Flow acceleration was analysed by comparing the flow rate gradients between the three configurations during the unsteady state. To determine the flow regime inside the reactor, the Reynolds number (Re) was calculated as follows [15]:

Re = ρvD/μ    (1)

where ρ is the fluid density, v the flow velocity, D the characteristic (reactor) diameter, and μ the dynamic viscosity.
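As a sanity check of the reported flow regime, the sketch below evaluates equation (1) for this reactor geometry, assuming the standard pipe-flow definition with the reactor diameter as the characteristic length. The flow rate plugged in is a hypothetical value chosen only because it lands inside the Re range reported below, not a value taken from the experiment.

```python
import math

RHO = 998.0   # water density near room temperature, kg/m^3
MU = 1.0e-3   # dynamic viscosity of water, Pa*s
D = 0.185     # reactor diameter, m (18.5 cm)

def reynolds(flow_rate_l_per_min: float) -> float:
    """Re = rho * v * D / mu, with v the superficial velocity in the column."""
    q = flow_rate_l_per_min / 1000.0 / 60.0      # volumetric flow, m^3/s
    area = math.pi * (D / 2.0) ** 2              # cross-sectional area, m^2
    v = q / area                                 # superficial velocity, m/s
    return RHO * v * D / MU

# A hypothetical feed of 1.2 L/min gives Re of roughly 137, inside the reported
# 126-141 range and well below the laminar-turbulent threshold for pipe flow.
print(f"Re = {reynolds(1.2):.0f}")
```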
Tracer concentration was measured with a spectrophotometer at 496 nm. The change of relative concentration between the outlet and inlet of the reactor was used as an indication of the mixing process inside the reactor.

Fluid flow profile

The flow rate from the reactor was measured, and the results for the three bamboo configurations, i.e. random, vertical, and horizontal, are presented in figure 3. An experiment with a reactor containing no bamboo was used as a control. Due to the limitations of the experimental setup, the flow rate inside the reactor could not be measured. Assuming there was no significant difference between the output flow rate and the flow rate inside the reactor, the Reynolds numbers for all experiments were in the range of 126-141, which means the flow inside the reactor was laminar [15]. The presence of bamboo should create local turbulence; however, the overall flow was still laminar. In this experiment, the ratio between bamboo length and diameter was 1:1 to 1.5:1. A higher ratio (longer bamboo cut) may cause a higher flow resistance in the random and horizontal configurations. On the other hand, a higher length:diameter ratio resulted in a lower flow resistance in the vertical configuration [16]. In our laboratory, a lab-scale bioreactor with the same dimensions using cut bamboo with length:diameter of 2:1 in vertical configuration resulted in no significant flow reduction between inlet and outlet (unpublished result).

Mixing profile

The mixing characteristic of the fluid in the reactors could be approached by comparing the color intensity of the colored water at the outlet with that at the inlet. Bamboo could influence the mixing process in the reactors. Figure 5 shows the fluctuation of tracer concentration, presented as concentration relative to the inlet. The experiment without bamboo reached a steady state after 27 minutes, after which the steady-state concentration was still fluctuating at 46±5%. The experiment with the vertical configuration took 33 minutes to reach a steady-state concentration of 49±3%. The concentration gradients for both experiments were quite similar at 1.7 (figure 6). The random configuration showed a distinct transition from unsteady state to steady state; a steady-state concentration of 57±2% was reached after 27 minutes. A similar steady-state concentration of 58±2% occurred in the horizontal configuration. However, the time to reach the steady state was different. Compared to this experiment, a higher length:diameter ratio of the bamboo cut may decrease the Reynolds number in the random and horizontal configurations [17], resulting in less mixing. On the other hand, a higher length:diameter ratio may increase the Reynolds number in the vertical configuration, resulting in better mixing [16]. The number of bamboo pieces in the vertical configuration (105) was higher than in the horizontal (86) and random (84) configurations. The dye seemed to attach to the surface of the bamboo, as observed in the change of the bamboo's color. Figure 5 shows that the tracer concentration in the vertical configuration was lower than in the horizontal and random configurations. With the higher surface area in the vertical configuration, more dye in the liquid could attach compared with the other configurations. This attachment could reduce the color intensity at the reactor outlet. A study conducted by Swaine and Daugulis [18] confirmed the difficulty of characterizing liquid flow in such reactors with certainty. The aspect ratio of the system, packing size, reactor size, and the selection of a suitable tracer could all influence the liquid flow results. On the other hand, the tensile stress in the direction parallel to the bamboo fibres was about 18 times higher than in the direction perpendicular to the fibres. This characteristic can be used for the application of bamboo cuttings in larger-scale anaerobic reactors [19].
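To make the mixing analysis concrete, here is a minimal sketch of how the relative tracer concentration and the onset of steady state could be computed from absorbance readings at 496 nm, assuming absorbance is proportional to dye concentration (Beer-Lambert). The absorbance series and tolerance are illustrative placeholders, not the measured data.

```python
import numpy as np

a_inlet = 0.82                                   # hypothetical inlet absorbance
a_outlet = np.array([0.05, 0.18, 0.30, 0.40,     # hypothetical per-sample
                     0.45, 0.47, 0.46, 0.47])    # outlet absorbances over time

rel_conc = a_outlet / a_inlet * 100.0            # relative concentration, %

def steady_state_index(series, window=3, tol=2.0):
    """First index after which the series stays within +/- tol of its tail mean."""
    tail = series[-window:].mean()
    for i, v in enumerate(series):
        if np.all(np.abs(series[i:] - tail) <= tol):
            return i
    return len(series) - 1

i = steady_state_index(rel_conc)
print(f"steady state from sample {i}, at {rel_conc[-3:].mean():.0f}% of inlet")
```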
The aspect ratio of the system, packing size, reactor size, and selection of a suitable tracer could all influence the observed liquid flow. On the other hand, the tensile strength parallel to the bamboo fibre direction was about 18 times higher than in the direction perpendicular to the fibre. This characteristic can inform the application of bamboo cuttings in larger-scale anaerobic reactors [19].
Conclusions Bamboo configuration in a packed/fixed-bed system influences the flow rate and mixing process in the reactor. The vertical configuration showed the least resistance to liquid flow in the reactor, resulting in the fastest flow rate. The random configuration performed best in the mixing process.
Modified Pectoralis Major Tendon Transfer for Reanimation of Elbow Flexion as a Salvage Procedure in Complete Brachial Plexus Injury: A Case Report
Abstract Traumatic brachial plexus injuries rarely recover spontaneously, and if the window period for neurotisation has elapsed, the only option for restoration of function lies in a salvage procedure. Many such salvage procedures have been described in the literature, with variable functional results. We report the case of a 16-year-old boy who presented after unsuccessful treatment for a complete brachial plexus injury; we performed a pectoralis major tendon transfer to attain elbow flexion. Postoperatively, the elbow was splinted in flexion at 100°. After 4 weeks of immobilization the splint was removed and the patient could actively flex his elbow from 30° to 100°. Key Words brachial plexus injury, salvage procedure, pectoralis major tendon transfer
INTRODUCTION The management algorithm for restoring elbow flexion in non-recovering post-traumatic brachial plexus injury (BPI) begins with neurosurgical procedures in the early stages (<6 months after injury) and salvage procedures thereafter [1]. These salvage procedures utilize either local or shoulder muscles, or free muscle transfer [2]. One of the commonly used shoulder muscles for this procedure is the pectoralis major (PM) muscle. Previous use of the PM muscle can be broadly categorised into distal muscle transposition techniques and proximal tendon transfer techniques [3]. Here we report a different technique that also utilises the PM muscle for reanimation of elbow function in a neglected post-traumatic BPI. This method is a modification of previously described techniques.
CASE REPORT A 16-year-old Malay male was referred to our hand clinic for further management of a BPI to the right upper limb following a motor vehicle accident 3 months prior to presentation. Examination revealed a complete right BPI. He had normal motor power in the rhomboids and serratus anterior muscles. There was no evidence of scarring around the affected neck and shoulder region, nor any clavicle or sternoclavicular joint disruption on clinical and radiological assessment. All the joints in the right upper limb were supple. We scheduled the patient for neurotisation of the right spinal accessory nerve (SAN) to the musculocutaneous nerve branch of the biceps (MBB) using a sural nerve graft. During surgery, stimulation of the MBB produced biceps contraction, indicating potential for nerve recovery; hence, the neurotisation was abandoned. We prescribed biceps brachii muscle re-education using electrical muscle stimulation administered by a physiotherapist. Two months later (5 months post-injury), the patient was reviewed in the clinic and assessment revealed a complete lack of biceps tone or active contraction and no other sensory or motor recovery. He was again scheduled for the neurotisation, but failed to present for the procedure. Instead, he presented for follow-up 5 months later (10 months post-injury) and requested the previously planned neurotisation. We chose to abandon neurotisation in favour of muscle or tendon transfer as a salvage procedure to achieve elbow flexion. The PM muscle was assessed to be at Medical Research Council (MRC) grade 4, up from MRC grade 0 at the previous examination, due to some recovery of his right pectoral nerves. His glenohumeral joint was relatively stable.
We proceeded to perform a right-sided pectoralis major tendon transfer to attain elbow flexion at 11 months post-brachial plexus injury. Using a deltopectoral approach, the tendinous insertion of the PM was identified and detached. The musculotendinous unit was mobilised distally by dissecting the clavipectoral fascia without detaching either origin of the PM muscle. The PM tendon was subsequently sutured to the distal myotendinous junction of the biceps muscle using the Krackow technique, over-tensioned at 110° of elbow flexion and forearm supination (Figure 1). After wound closure, the elbow was splinted at 100° of flexion and forearm supination with a plaster slab, collar and cuff. The slab was maintained for 4 weeks, during which isometric exercises were performed. At the 4-week follow-up appointment, the patient was able to actively flex the right elbow from 30° to 100° while attempting to adduct and flex his right shoulder; MRC grade was 3. Physiotherapy-assisted passive range of motion (ROM) exercises were commenced and graduated to active ROM exercises 2 weeks later. Once painless active ROM was possible, rehabilitation focussed on muscle strengthening to improve the MRC grade.
DISCUSSION The aim of late reconstruction is to restore elbow flexion using a tendon or muscle transfer procedure, restoring strength and a functional ROM of 30-130° without excessive pronation. Previously described surgical options include the Steindler procedure, latissimus dorsi muscle transfer, pectoralis major muscle transfer, pectoralis minor muscle transfer, triceps muscle transfer, sternocleidomastoid muscle transfer and free muscle transfer (e.g., gracilis muscle transfer) [4]. The primary factor guiding our choice of procedure was the type and extent of the BPI. In the present case, this limited our options to muscle groups with motor power of MRC grade 4 or higher. Other considerations included muscle excursion, alignment, cosmesis and preoperative ROM. For the current case, the only ideal donor was the PM muscle. As described by Heirner et al., there are two categories of PM muscle transfer techniques. The first is the distal muscle transposition technique, in which the PM origin is transposed and tenodesis is performed to the biceps brachii insertion. This category can be further divided into unipolar or bipolar and partial or complete distal transfer. The second is the proximal tendon transfer technique, in which the insertion of the PM tendon on the humerus is detached and tenodesis to the biceps brachii tendon insertion is accomplished using an interposed tensor fascia lata graft. This technique also involves detachment of the clavicular origin of the PM to gain excursion [3]. Our technique is a modification of the latter. We adopted this technique because our intra-operative findings revealed that the sternocostal portion of the PM was atrophied. This enabled the clavicular portion to be mobilised with sufficient excursion without detachment. The technique avoids the need for harvesting a tendon graft and the accompanying donor-site morbidity. It also reduces the amount of required dissection, as there was no need for detachment of the PM muscle origin or exposure of the radial tuberosity (the insertion of the biceps brachii). The major disadvantage, though, was the subcutaneous bowstringing effect, which was cosmetically unfavourable.
Once soft tissue healing is optimal, we plan to reassess the patient's shoulder stability on the affected side, as the previous dynamic stabilization by the PM was sacrificed. If stability is compromised, shoulder fusion will be offered as an option to the patient. This would further enhance the strength of elbow flexion. The long-term plan is to re-establish finger and wrist flexion using a gracilis free muscle transfer, provided the patient first achieves a stable shoulder and functional elbow flexion.
CONCLUSION Salvage procedures for post-traumatic complete brachial plexus injuries are performed with the aim of restoring some function to a limb that has none. The surgical priority is to re-establish functional elbow flexion first [5]. We used a modification of existing pectoralis major muscle transfer techniques that offers a new alternative for those who give precedence to function over cosmesis. As this was our first such case, only a series of cases will enable us to sufficiently assess outcomes for this technique and its ideal rehabilitative protocol.
Contributory conditions for unexpected COVID-19 cases and nosocomial COVID-19 infection cases identified from systematic investigation in a French University Hospital
Introduction Nosocomial cases (NC) of COVID-19 infection are a challenge for hospitals. We report the results of a seven-month prospective cohort study investigating COVID-19 patients to assess unexpected cases (UC; no COVID-19 precautionary measures applied since admission) and NC. Patients and methods Investigation by an infection control team of 844 patients with COVID-19 infection hospitalized for more than 24 hours (cases). Results A total of 301 UC were identified (31% after contact tracing), with a total of 129 contact patients and 27 secondary cases for 59 of them. In geriatric wards, 50% of cases were UC. NC represented 18% of cases (37% in geriatric wards), mainly identified after contact tracing of wandering cases. Conclusion A rapid infection control response is essential to contain nosocomial transmission, along with detailed contact tracing and a screening policy. Dealing with wandering elderly patients remains challenging for HCWs.
Introduction The severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) pandemic continues to place a significant burden on health services worldwide. Nosocomial cases (NC) of COVID-19 infection are a challenge for hospitals, where patients are at high risk of severe infection [1]. Quick identification of any situation at risk of epidemic spread within hospitals is key to preventing cross-transmission. COVID-19 patients hospitalized without specific COVID-19 barrier measures represent one of these situations, because these unexpected cases (UC) can lead to contamination of contact patients or healthcare workers (HCWs). Furthermore, tackling nosocomial acquisition is necessary to adjust infection control measures in order to ensure patient and HCW safety [2,3]. Because understanding the risk of spread due to UC or nosocomial SARS-CoV-2 acquisition is fundamental to implementing infection control strategies, we conducted a prospective cohort study to describe UC, contact patients, and NC, as well as the circumstances of these situations, in order to identify specificities and epidemiological situations at risk of epidemic spread within hospitals.
Methods This prospective cohort study took place at Bordeaux University Hospital (France), which has more than 3,000 beds. We prospectively included all patients with COVID-19 infection diagnosed by RT-PCR and hospitalized for more than 24 hours from August 3, 2020 to February 28, 2021 (i.e., cases). Patient data were collected by an infection control team (ICT) from electronic medical records: age, gender, hospitalization unit, reason for hospitalization, onset of symptoms, and context of diagnosis (symptoms, before surgery or transfer, contact tracing). The implemented infection control program included dedicated COVID-19 wards; medical masks for HCWs, patients and visitors; restriction of visitors; and screening of patients with RT-PCR on admission and after seven days of hospitalization. Suspected or confirmed cases were isolated in a single room under contact and droplet precautions. Furthermore, we stepped up the use of personal protective equipment (PPE) when caring for all patients, irrespective of their COVID-19 status: eye protection, FFP2 respirator masks and aprons for performing aerosol-generating procedures; aprons for contact with patients or their environment. Every COVID-19 diagnosis was notified to the ICT for investigation.
Upon COVID-19 case notification, the ICT reviewed all records to identify: (1) UC, when COVID-19 precautionary measures had not been applied since admission; (2) contact patients (sharing the same room or other exposure to UC, notably wandering elderly patients) and secondary cases during hospitalization among contact patients; (3) NC, depending on the time between admission and the date of diagnosis: probably nosocomial (from 8 to 14 days) or certainly nosocomial (beyond 15 days) [4]. The origin of COVID-19 was considered community-acquired between 0 and 2 days, and undefined between 3 and 7 days after admission. The cumulative incidence of cases was the number of cases divided by the number of patients hospitalized for more than 24 hours (global and per month) during the study period. The incidence of NC (i.e., probably and certainly nosocomial) was the number of cases who acquired nosocomial COVID-19 divided by the number of patients hospitalized for more than 24 hours during the study period.
Study population A total of 844 patients with COVID-19 were hospitalized for more than 24 hours during the study period (Table 1). The ICT investigated a median of 28 cases per week (range: 1-49). During the study period, a total of 431,103 patients were hospitalized for more than 24 hours; thus, the global incidence of cases was 1.9 per 1,000 admissions and varied from 1.3 to 2.2 cases per 1,000 admissions per month. Unexpected cases Unexpected cases were identified in 36% (n = 301, 95%CI [33.7; 38.3]) of our cases, with a monthly variation from 3% to 50%. Patients with UC were more frequently women and older, and UC concerned more than 50% of cases in surgery and geriatrics units (Table 2). The global cumulative incidence of NC was 0.3 per 1,000 admissions and varied from 0.04 to 0.59 cases per 1,000 admissions per month.
Discussion To control transmission of SARS-CoV-2, we used a bundle of infection control measures for early detection, isolation, and systematic notification of cases for investigation. The aim of the present study was to gain accurate knowledge of specificities and epidemiological situations at risk of epidemic spread within hospitals in order to have best-fit policies. We identified frequent UC for which isolation precautions needed to be adjusted, which probably made it possible to limit secondary cases. Among these UC, few had secondary cases identified, owing to rapid identification with systematic screening and to our policy of stepping up PPE indications for all patients. To date, there is very little evidence available on the effectiveness of interventions, excluding PPE, to prevent the spread of SARS-CoV-2 in hospital settings [5]. The proportion of NC was high in our study population and varied by ward specialty. However, comparison with the literature is complex due to various screening strategies and no universally accepted definition of NC. The proportion of hospital-onset COVID-19 infections among all hospitalized confirmed COVID-19 patients reported in seven studies ranged from 0% to 15% [2]. The lower proportion of NC in surgery wards could be explained by surgery being postponed when COVID-19 screening was positive before admission. Circumstances for these risk situations more frequently concerned geriatric patients, particularly those exposed to wandering patients.
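As a minimal illustration of the time-based classification and the incidence arithmetic described in the Methods, the sketch below encodes the thresholds above; the function name is ours, and the only inputs are figures reported in the text.

```python
# Minimal sketch of the time-based classification of COVID-19 case origin
# described in the Methods (days between admission and diagnosis), plus the
# cumulative-incidence arithmetic; figures are those reported in the text.

def classify_case_origin(days_from_admission: int) -> str:
    """Classify case origin from the admission-to-diagnosis delay (days)."""
    if days_from_admission <= 2:
        return "community-acquired"   # 0-2 days
    if days_from_admission <= 7:
        return "undefined"            # 3-7 days
    if days_from_admission <= 14:
        return "probably nosocomial"  # 8-14 days
    return "certainly nosocomial"     # 15 days or beyond

cases, admissions = 844, 431_103
print(classify_case_origin(10))  # probably nosocomial
print(f"{1000 * cases / admissions:.2f} per 1,000 admissions")  # ~1.96
```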
Management of wandering patients is challenging for HCWs because it is impossible both to make them strictly respect barrier measures to protect other patients (wearing a mask, limiting wandering, limiting physical contact with other patients) and to limit their freedom of movement through physical restraint, due to ethical questions [6]. Of note, a majority of NC were detected by screening for reasons other than symptoms, highlighting the importance of effective and sustainable screening strategies to prevent cross-transmission from asymptomatic carriers [7]. We also noted the limits of screening strategies: RT-PCR false-negative results on admission can lead HCWs to lift COVID-19 precautionary measures for symptomatic patients [8]. Our study was conducted on an exhaustive, large cohort of hospitalized COVID-19 cases. However, this was an observational study without follow-up of outpatients; thus, nosocomial acquisition could be underestimated. Because comparable data are relatively rare in the literature, we were not able to compare our results with those of other hospitals. As a rapid infection control response is essential to contain the risk of COVID-19 nosocomial transmission, a screening policy for the early detection of cases and detailed contact tracing of patients with positive RT-PCR are necessary. Dealing with wandering elderly patients also remains challenging for HCWs in this COVID-19 context.
Anesthetic Approach to Giant Ovary Cyst in the Adolescent
Ovarian cysts do not usually have any presenting findings and rarely reach giant sizes. Abdominal mass and pain are the most common symptoms [4]. Masses reaching a large size may lead to difficulties in anesthesia management. Difficult intubation and potentially life-threatening complications of the cardiovascular and respiratory systems may develop. Large ovarian masses may exert pressure on large vessels and adjacent organs, leading to pathology and ascites formation. After the giant tumor is removed, the rapid fall in thoracic pressure and the re-expansion of the lungs may cause pulmonary edema. Aspiration of fluid during excision of a giant mass may give rise to severe hypotension or inferior vena cava syndrome [5,6]. Figure 1: Images of the giant ovarian cyst.
Case A 15-year-old female patient was referred to the pediatric surgery outpatient clinic with complaints of abdominal bloating and abdominal pain. Routine tests, abdominal ultrasonography, and CT revealed a cystic mass extending from the pelvic area to the xiphoid, measuring 20x30 cm, thought to be of mesenteric or ovarian origin. The patient underwent the operation and was monitored with electrocardiogram (ECG), oxygen saturation (SpO2), end-tidal capnography and non-invasive blood pressure, and peripheral vessel cannulation was carried out with a 22 G catheter. Following adequate preoxygenation, anesthesia was induced with 2 mg/kg propofol, 2 mcg/kg fentanyl (Talinat, Vem, Istanbul) and 0.6 mg/kg rocuronium (Esmeron, Schering-Plough, Istanbul). The patient was intubated with a tube of 7.0 mm inner diameter and was ventilated at low pressure, given the risk that the giant mass would exert pressure on large vessels. In order to prevent the hypotension that may develop following mass excision, 10 ml/kg fluid resuscitation was performed. Urine output and blood loss were monitored. A mass weighing 4,900 g was removed during the operation (Figure 1). The patient was hemodynamically stable during the operation and was extubated without any problems.
Discussion Respiratory and circulatory management is especially difficult with giant ovarian masses. The mass leads to pathology by exerting pressure on large vessels and adjacent organs. There are risks of difficult intubation, aspiration due to mass pressure, and massive bleeding. With the re-expansion of the lungs following excision of the mass, pulmonary edema may develop. Due to these risks, preoperative preparation is necessary in order to prevent the negative impact of the tumor mass on the circulatory and respiratory systems [7,8]. Large abdominal tumors may impair respiratory function by causing the diaphragm to rise and the chest cavity to narrow.
Following the administration of a muscle relaxant, the compliance between the lungs and diaphragm is impaired, making respiratory management even more difficult. High airway pressure may also lead to lung injury [9]. Excision of giant masses may lead to bleeding, hypotension, and electrolyte disturbances, as well as morbidity and other serious problems. Pressure on the vessels and positive pressure ventilation may decrease venous return. In association with the suppression of sympathetic activity by general anesthesia, symptomatic inferior vena cava syndrome and hypoxemia may develop [10]. Findings of this syndrome include decreased exercise tolerance, abdominal distension, hypotension, oliguria, and increased jugular pressure. This syndrome may sometimes be compensated for by suitable intravascular volume and homeostatic mechanisms via the sympathetic system; when these compensation mechanisms are impaired, the symptoms of decreased venous return become more marked [11,12]. Through this compensation mechanism, a balance is produced between blood pressure, cardiac output, and peripheral vasoconstriction. Sympathetic blockade by central neuraxial blocks may lead to severe hypotension. Therefore, spinal and epidural anesthesia should be avoided, since they may render this protective mechanism inefficient. However, a few cases have been reported in which epidural anesthesia was employed for cyst decompression without causing circulatory depression or pulmonary edema. Given these potential complications, each case should be carefully evaluated by the anesthesia department in the preoperative period, the anesthesia method to be used should be determined, and postoperative intensive care conditions should be prepared according to the condition of the patient. Blood and blood products should be reserved due to the probability of bleeding and coagulation disorders. ECG, SpO2, blood pressure, urine output, and bleeding should be continuously monitored. If necessary, central catheterization, arterial cannulation, and blood gas monitoring should be carried out. In the postoperative period, frequent hemogram and electrolyte evaluations are recommended.
Conclusion Giant ovarian cyst excision may lead to life-threatening problems due to serious respiratory, cardiovascular and circulatory disorders. Therefore, hemodynamic monitoring, ventilation monitoring and fluid balance management should be properly carried out.
Objective monitoring of functional recovery after total knee and hip arthroplasty using sensor-derived gait measures
Background Inertial sensors hold the promise to objectively measure functional recovery after total knee (TKA) and hip arthroplasty (THA), but their value in addition to patient-reported outcome measures (PROMs) has yet to be demonstrated. This study investigated recovery of gait after TKA and THA using inertial sensors, and compared results to recovery of self-reported scores of pain and function. Methods PROMs and gait parameters were assessed before and at two and fifteen months after TKA (n = 24) and THA (n = 24). Gait parameters were compared with healthy individuals (n = 27) of similar age. Gait data were collected using inertial sensors on the feet, lower back, and trunk. Participants walked for two minutes back and forth over a 6 m walkway with 180° turns. PROMs were obtained using the Knee Injury and Osteoarthritis Outcome Score and Hip Disability and Osteoarthritis Outcome Score. Results Gait parameters recovered to the level of healthy controls after both TKA and THA. Early improvements were found in gait-related trunk kinematics, while spatiotemporal gait parameters mainly improved between two and fifteen months after TKA and THA. Compared to the large and early improvements found in PROMs, these gait parameters showed a different trajectory, with a marked discordance between the outcomes of both methods at two months post-operatively. Conclusion Sensor-derived gait parameters were responsive to TKA and THA, showing different recovery trajectories for spatiotemporal gait parameters and gait-related trunk kinematics. Fifteen months after TKA and THA, there were no remaining gait differences with respect to healthy controls. Given the discordance in recovery trajectories between gait parameters and PROMs, sensor-derived gait parameters seem to carry relevant information for the evaluation of physical function that is not captured by self-reported scores.
INTRODUCTION Walking is essential for many activities of daily living, and good walking capacity is key for participation in society. Previous reports have identified walking speed as the 'sixth vital sign', given its correlation with essential health parameters, including quality of life (Schmid et al., 2007), risk of future hospitalization (Montero-Odasso et al., 2005), and mortality (Hardy et al., 2007). In individuals with end-stage osteoarthritis (OA) of the knee and hip, walking capacity is reduced (Thomas, Pagura & Kennedy, 2003), thereby leading to decreased physical functioning and a lower quality of life (Neogi, 2013). As the final step in the treatment of severe knee and hip OA, total joint arthroplasty can be performed in order to resolve OA-related symptoms (e.g., pain, stiffness, instability) and improve physical functioning. Although total knee arthroplasty (TKA) and total hip arthroplasty (THA) are very successful and cost-effective procedures (Ethgen et al., 2004), a subset of patients is dissatisfied with treatment outcome (Gunaratne et al., 2017; Anakwe, Jenkins & Moran, 2011; Nilsdotter, Toksvig-Larsen & Roos, 2009). In addition to patients with identified complications, this includes patients who had an uneventful procedure but did not achieve their expected level of functional recovery (Gunaratne et al., 2017).
Early identification of individuals at risk of limited functional recovery is crucial in order to enable clinicians to intervene timely, and may help to readjust patient expectations (Tolk et al., 2021). However, it has been challenging to identify these patients. In part, this is due to a lack of outcome measures of physical functioning with good psychometric properties (Hossain et al., 2015). Current diagnostics (e.g., radiographs, physical exam, self-reported outcomes) are limited to static or non-weightbearing situations, or are not necessarily reflective of someone's actual performance during daily life activities (Bolink et al., 2016; Fransen et al., 2019). Moreover, patient-reported outcomes (PROMs) are inherently subjective, largely influenced by pain, and suffer from early ceiling effects (Stevens-Lapsley, Schenkman & Dayton, 2011). Although PROMs often contain subscales related to limitations in activities of daily life, such as the KOOS/HOOS-ADL or WOMAC function score, these outcomes seem to be more reliant on a patient's own reflections on their capacity rather than their actual performance (Fransen et al., 2019). Hence, there is a need for objective data that can bridge this gap in clinical assessment. As an alternative to these subjective scores, performance-based tests have been proposed to objectively capture physical function. For example, evaluation of sit-to-stand transfers, walking short distances, and stair negotiation has been endorsed by the OARSI as core activities for individuals with knee and hip OA (Dobson et al., 2013). While these tests are well-suited to quickly obtain a global picture of a patient's physical function, they are limited to a single outcome measure, being the time to perform the task or activity, completed distance, or number of repetitions. These tests provide no information about compensations or underlying biomechanics relevant to the performance, and thus may lack important details. Wearable inertial sensors are promising tools to instrument performance-based tests in order to obtain more detailed insights into physical functioning. These inertial sensors are easy to use, have been proven to be valid and reliable (Kobsar et al., 2020a), do not require lengthy procedures or specialized laboratories, and can be used in clinical settings or even remotely in the home environment (Fransen et al., 2021). Not surprisingly, inertial sensors have gained interest over the past few years to objectively monitor changes in physical function after total knee and hip arthroplasty (Small et al., 2019; Kobsar et al., 2020b). In particular, the focus has been on studying gait recovery (Small et al., 2019; Kobsar et al., 2020b), potentially due to the fact that gait parameters are predictive of limitations in other activities of daily living (Potter, Evans & Duncan, 1995) and gait improvements are an important goal for patients after TKA and THA (Scott et al., 2012). In the same settings, turning can also be evaluated (Boekesteijn et al., 2021), which has been suggested to be even more sensitive to sensorimotor impairments than straight-ahead gait (Mancini et al., 2016). However, before such technologies can be clinically adopted, it is important that the derived outcome measures fulfill the following requirements: they must (1) be sensitive to pre-operative impairment, (2) be responsive to interventions aimed at improving mobility, and (3) provide clinically relevant information about physical functioning.
Multiple gait and turning parameters derived from inertial sensors have been shown to be sensitive to mobility impairment in end-stage knee and hip OA (Boekesteijn et al., 2021). The next step is to evaluate the responsiveness of these parameters to unilateral TKA and THA, and to assess whether post-operative function recovers to the level of healthy individuals. While recovery of gait has previously been investigated using inertial sensors at different timepoints after TKA (Fransen et al., 2019; Fransen et al., 2021; Bolink, Grimm & Heyligers, 2015; Senden et al., 2011; Jolles et al., 2012; Kluge et al., 2018; Youn et al., 2020) and THA (Bolink et al., 2016; Reininga et al., 2013; Nelms et al., 2020; Wada et al., 2019), a comprehensive study is lacking that maps the recovery trajectory (including turning capacity) at multiple timepoints matching routine follow-up after TKA and THA. In addition, there is a lack of clarity on whether gait can be assumed to be 'normal' one year after joint replacement (Naili et al., 2017; Bahl et al., 2018; Milner, 2009). Finally, little is known about how gait recovery compares to self-reported recovery of physical function (e.g., PROMs). Therefore, the aims of this study were threefold: (1) to investigate gait recovery at two and fifteen months after TKA and THA using inertial sensors, (2) to compare gait 15 months after TKA and THA with data from healthy participants, and (3) to compare recovery trajectories between objective gait parameters and self-reported scores of physical functioning.
Participants Individuals with end-stage OA scheduled for TKA (n = 24) or THA (n = 24) at the Sint Maartenskliniek participated in this study. A group of healthy controls (HC; n = 27) within the same age range of 50 to 75 years old was recruited from the community for reference purposes. Healthy participants had no pain in the lower extremities, nor did they have a clinical diagnosis of knee or hip OA. All participants had to be able to walk for more than two minutes without the use of any assistive device. Exclusion criteria were: (1) another joint replacement (including revisions) within a year following surgery, or symptomatic OA in a weight-bearing joint other than the joint scheduled for surgery, (2) BMI >40 kg/m2, and (3) any other musculoskeletal or neurological impairment interfering with gait or balance. Participants who received any other joint replacement to the lower extremities, or had revision surgery within the fifteen-month follow-up period, were labeled as lost to follow-up. In these cases, data that had been collected until the time of the second surgery were still used for analysis. Written informed consent was obtained from all participants prior to testing. This study was exempt from ethical review by the CMO Arnhem/Nijmegen (2018-4452) as it was not subject to the Medical Research Involving Human Subjects Act (WMO). All study procedures were conducted in accordance with the Declaration of Helsinki.
Power calculation Sample sizes were based on the smallest difference that we aimed to detect in this study, which was the difference in gait parameters between individuals 15 months after arthroplasty and HC. Effect sizes for this comparison were informed by studies from Senden et al. (2011) and Kluge et al. (2018). When using a standardized mean difference for stride length of 1.1, a power of 80%, and a significance level of 0.05, 22 participants were required per group. To account for potential drop-outs, 24 individuals were recruited for each study group.
Surgical procedure TKA was performed using the medial parapatellar approach. All individuals scheduled for TKA received the Genesis II posterior stabilized knee prosthesis (Smith & Nephew, Memphis, TN, USA). The patella was resurfaced in 58% of the patients. THA was performed using the posterolateral approach. Specific types of hip implants differed among individuals scheduled for THA and are listed in File S1. In total, TKA was performed by seven different surgeons in this study, whereas THA was performed by ten different surgeons. All patients followed an enhanced recovery protocol with mobilization on the day of surgery and hospital discharge within two days. All patients were referred to out-of-hospital physical therapy, which was focused on optimizing functionality, mobility, muscle power, coordination, stability, and walking improvement. Although physical therapy protocols were not standardized, patients usually continued physical therapy for 6-12 months, until their functional goals had been reached.
Demographic and clinical assessment Severity of radiological OA was determined using Kellgren and Lawrence (KL) grades (Kellgren & Lawrence, 1957) as scored by JS and VB. Baseline anthropometric characteristics (e.g., body mass, height, and BMI) were obtained during the pre-operative screening visit. In addition, PROMs were assessed using the Knee Injury and Osteoarthritis Outcome Score (KOOS) for TKA (de Groot et al., 2008) and the Hip Disability and Osteoarthritis Outcome Score (HOOS) (de Groot et al., 2007) for THA patients. More specifically, the HOOS and KOOS subscales "Pain" and "Activities of Daily Living (ADL)" were used to represent pain and physical function. PROMs and gait were assessed pre-operatively (on the same day as the pre-operative screening visit) and at two and fifteen months follow-up. Follow-up measurements were initially set to take place at one year, but measurements were delayed by three months due to the COVID-19 pandemic. Timepoints of follow-up were chosen to match routine follow-up after TKA and THA in the Netherlands, and roughly reflect the moments when patients can walk independently without an assistive device (e.g., 2 months) and when full recovery has been achieved (e.g., 1 year). For HC, gait was investigated on only one occasion.
Gait protocol Experimental procedures of the gait assessments were similar to the methods described in Boekesteijn et al. (2021). Four inertial sensors (Opal V2, APDM Inc., Portland, OR) were attached to the dorsum of both feet, the waist (sacrolumbar level), and the sternum. Participants walked back and forth along a six-meter trajectory making 180° turns for a total duration of 2 min (Fig. 1). Gait tests were performed at a comfortable, self-selected speed.
Data analysis Raw inertial data were processed using the validated Mobility Lab v2 software (Morris et al., 2019). Turning steps were separated from straight walking based on the gyroscope data of the lumbar sensor (El-Gohary et al., 2014). Gait parameters were calculated for each stride during steady-state walking phases, excluding the two steps preceding and following a turn. Parameters were summarized as the mean value of all valid strides or turns. Based on non-redundancy and the size of the difference between individuals with end-stage knee and hip OA and HC as found previously (Boekesteijn et al., 2021), the following outcomes were extracted (Fig. 1):
(1) gait speed, (2) stride length, (3) cadence, (4) step time asymmetry, (5) stride time variability, (6) peak turning velocity, (7) lumbar sagittal range of motion, (8) lumbar coronal range of motion, and (9) trunk coronal range of motion. Parameters were only evaluated for the TKA or THA group if they were previously found to be sensitive to mobility impairment in knee or hip OA (Boekesteijn et al., 2021). For this reason, step time asymmetry, lumbar sagittal range of motion, and lumbar coronal range of motion were not evaluated in the TKA group.
Statistical analysis Recovery trajectories of gait parameters and KOOS/HOOS scores were visualized at the group level by the mean and 95% confidence intervals (CI). Linear mixed models with gait parameters and KOOS or HOOS scores as dependent variables, time as two independent dummy variables (e.g., T2 and T15), and subject ID as a random effect factor were constructed to investigate the effect of time on gait and KOOS/HOOS scores for TKA and THA separately. The addition of random slopes was evaluated, but these were not included in the final model for reasons of parsimony, as they did not contribute to a better model fit. Gait parameters of the TKA and THA groups were compared with HC at 15-month follow-up using an independent samples t-test, or a non-parametric Mann-Whitney U test in case data were not normally distributed. Inferences of statistical significance were based on p < 0.05. Since multiple outcome parameters were used for the same construct (e.g., gait), we controlled the family-wise error rate using the Hommel procedure (Hommel, 1988), by adjusting the p-values for the number of gait parameters involved in each comparison. To assess discrepancies between gait and self-reported scores of physical function, we compared trajectories between gait speed, which was found to be most sensitive to gait impairment in knee and hip OA (Boekesteijn et al., 2021), and KOOS/HOOS-ADL scores. Meaningful improvements were defined as a change in gait speed >0.10 m/s (Bohannon & Glenney, 2014) and a change in KOOS/HOOS ADL score >20 points (Lyman et al., 2018). Data were processed in Python 3.8.3 and statistical analyses were conducted in RStudio 3.6.1 using the lme4 package (version 1.1-26) (Bates et al., 2015).
Participant characteristics The study groups did not differ significantly in age, sex, height, or BMI (Table 1). Compared to HC, body mass was significantly higher in individuals scheduled for TKA and THA. All individuals scheduled for TKA or THA had moderate to severe OA (KL grades 3 or 4). In total, we had missing data for eleven participants. Three participants had a complication within the study window. For details regarding missing data and complications, see File S2.
Recovery of gait after arthroplasty Two months after surgery, gait speed, stride length, and cadence were not significantly different from baseline, both after TKA and THA (Table 2). There were no changes in step time asymmetry within the first two months after THA (Table 2), nor were there changes in stride time variability after TKA and THA at this timepoint (Table 2). Between two and fifteen months, large improvements in gait speed, cadence, and stride length were observed after both TKA and THA (Table 2; Figs. 2A-2C). For gait speed, the gain between two and fifteen months was 0.22 m/s (95% CI [0.15-0.29]) after TKA and 0.14 m/s (95% CI [0.06-0.20]) after THA.
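As an aside on the multiplicity control described in the Statistical analysis subsection above, the sketch below shows a Hommel adjustment over a set of gait-parameter p-values using statsmodels; the p-values are hypothetical and only illustrate the mechanics of the procedure.

```python
# Minimal sketch of the family-wise error control described above:
# Hommel adjustment of p-values across the gait parameters in one comparison.
# The p-values below are hypothetical, for illustration only.

from statsmodels.stats.multitest import multipletests

gait_parameters = ["gait speed", "stride length", "cadence",
                   "stride time variability", "peak turning velocity",
                   "trunk coronal RoM"]
raw_p = [0.001, 0.004, 0.020, 0.300, 0.045, 0.150]  # hypothetical

reject, p_corr, _, _ = multipletests(raw_p, alpha=0.05, method="hommel")

for name, p, pc, r in zip(gait_parameters, raw_p, p_corr, reject):
    print(f"{name:25s} p={p:.3f}  p_corr={pc:.3f}  significant={r}")
```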
Peak turning velocity did not change significantly (mean diff: 17.4 deg/s, 95% CI [1.7-33.0], P_corr = 0.105) between two and fifteen months after TKA. There were no significant improvements in turning velocity between two and fifteen months after THA (Table 2). Step time asymmetry did not change between two and fifteen months after THA. There were no changes in stride time variability or trunk coronal RoM between two and fifteen months after TKA and THA (Table 2). Individuals after THA showed an increase of 1.4 degrees (95% CI [0.6-2.1]) in lumbar coronal RoM between two and fifteen months. Finally, none of the gait parameters were significantly different from HC at fifteen months after TKA and THA (Table 3; Figs. 2A-2I). (Table 3 notes: HC, healthy control; TKA, total knee arthroplasty; THA, total hip arthroplasty; RoM, range of motion; P_corr, Hommel-adjusted p-value. Non-normally distributed data are presented in italics and are summarized as median (IQR) with median difference (95% CI). Test statistics represent either the t-value (normal data) or U (non-normal data).)
Changes on PROMs after arthroplasty Two months after TKA, individuals improved on all KOOS subscales, except for 'Symptoms' (Table 4). For all other subscales, self-reported scores showed large improvements (>20 points), with some individuals already reaching (sub)maximal scores (≥90 points) within the first two months (Figs. 2J and 2K). Further improvements were found for all KOOS subscales from two to fifteen months follow-up (Table 4). As for the HOOS, all subscales improved from baseline to two months after THA, as well as from two to fifteen months follow-up, with the largest magnitude of effects taking place in the first two months (Table 4).
Relation between recovery trajectories of gait parameters and PROMs When comparing recovery trajectories of self-reported scores with gait parameters, substantial differences were observed (Fig. 2). Where KOOS and HOOS scores showed large improvements over almost all subscales in the first two months after surgery (Table 4), gait parameters generally improved between 2 and 15 months, with the exception of trunk-related gait parameters. More specifically, discrepancies between HOOS/KOOS-ADL scores and spatiotemporal parameters were present at two months after surgery. For gait speed specifically, there were no significant changes between baseline and two months after TKA and THA, while HOOS/KOOS-ADL improved by 42 points and 21 points, respectively. To illustrate, two months after surgery, 10/23 individuals after TKA reported meaningful improvements in ADL scores, while only 4/23 showed a meaningful improvement in gait speed. Similarly, after THA, 20/23 individuals reported meaningful improvements in ADL scores at 2 months, with 10/23 individuals showing meaningful improvements in gait speed.
DISCUSSION This study evaluated the use of inertial sensors to monitor functional recovery after TKA and THA. In concordance with our previous work showing that sensor-derived gait parameters are sensitive to knee and hip OA (Boekesteijn et al., 2021), this study showed that these parameters were also responsive to TKA and THA at two and fifteen months after surgery, and recovered to the same level as HC fifteen months after surgery. In addition, discrepancies between recovery trajectories of spatiotemporal gait parameters and HOOS/KOOS scores were observed, particularly at two months post-operatively.
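To make the meaningful-improvement tallies above concrete, here is a minimal sketch of the thresholds from the Methods (gait speed change >0.10 m/s; KOOS/HOOS-ADL change >20 points); the example values and function name are ours, chosen for illustration.

```python
# Minimal sketch of the meaningful-improvement thresholds from the Methods:
# gait speed change > 0.10 m/s and KOOS/HOOS-ADL change > 20 points.
# Example values are hypothetical.

MCID_GAIT_SPEED = 0.10  # m/s (Bohannon & Glenney, 2014)
MCID_ADL = 20           # points (Lyman et al., 2018)

def meaningful_improvement(gait_speed_change: float, adl_change: float) -> dict:
    """Flag whether each outcome exceeded its meaningful-change threshold."""
    return {
        "gait_speed": gait_speed_change > MCID_GAIT_SPEED,
        "adl_score": adl_change > MCID_ADL,
    }

# A patient reporting a large ADL gain without a gait-speed gain:
print(meaningful_improvement(0.04, 42))
# -> {'gait_speed': False, 'adl_score': True}
```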
Recovery trajectory of gait after TKA and THA There were limited improvements in spatiotemporal gait parameters two months after TKA and THA, which is in agreement with previous studies (Senden et al., 2011; Bahl et al., 2018). However, the observed faster turning in the absence of higher gait speed two months after THA is interesting, and may suggest that turning is more sensitive to short-term improvements in physical function after THA than gait speed. In contrast to these basic spatiotemporal parameters, normalization of trunk movement was found already two months after TKA and THA. Pre-operatively, individuals with knee OA may increase lateral trunk lean as a strategy to reduce knee joint loading and/or pain (Mündermann, Dyrby & Andriacchi, 2005; Hunt et al., 2008; Linley et al., 2010), which is no longer required two months after TKA. Increased lumbar RoM in the sagittal plane, in turn, may serve as a pre-operative compensation for individuals with hip OA to overcome pain and hip joint stiffness (Hurwitz et al., 1997; Lenaerts et al., 2009). Taken together, these results suggest that while two months is too early for meaningful recovery of spatiotemporal gait parameters, pre-operative compensations of the trunk and pelvis already disappear within the first two months after TKA and THA. Large and clinically relevant improvements were observed in spatiotemporal parameters between two and fifteen months after TKA and THA. This is in agreement with literature investigating gait with inertial sensors one year after TKA (Fransen et al., 2019; Bolink, Grimm & Heyligers, 2015; Kluge et al., 2018) and THA (Bolink et al., 2016; Wada et al., 2019). Recovery of muscle strength (e.g., quadriceps and hip abductors), which coincides with this period (Mizner, Petterson & Snyder-Mackler, 2005; Ismailidis et al., 2021), may underlie these improvements in walking capacity. As for trunk kinematics, individuals after both TKA and THA showed an increase in lumbar coronal RoM from two to fifteen months after surgery, which may relate to the restored ability of the hip abductors to control frontal plane pelvic movement (Bolink et al., 2016; Reininga et al., 2012). Compensations like lateral trunk lean, which limit pelvic RoM, are then no longer required (Bolink et al., 2015). When combining these results with those of gait recovery at two months, it can thus be concluded that a wide range of sensor-derived gait metrics is responsive to TKA and THA, with spatiotemporal parameters and trunk kinematics each showing a distinctive recovery trajectory. None of the gait parameters were different from HC mean values at fifteen months after TKA and THA. This is in contrast with some earlier studies reporting remaining gait differences between HC and individuals one year after TKA (Kluge et al., 2018; Naili et al., 2017; Outerleys et al., 2021) or THA (Bahl et al., 2018). Although one year after arthroplasty is generally considered the endpoint of recovery, these differences between studies might be attributed to the longer follow-up time in our study. This seems like a reasonable explanation given that improvements in gait were larger in our study compared to these earlier studies (Kluge et al., 2018; Naili et al., 2017; Bahl et al., 2018).
Our findings underscore the success of TKA and THA in improving physical functioning, and indicate that normal spatiotemporal gait parameters and normal trunk kinematics may be achieved 15 months after TKA and THA. Whether other aspects of gait, including lower-extremity kinematics and kinetics, also recover to the level of healthy controls remains to be elucidated. Despite our findings of full recovery after TKA and THA, the current literature suggests that more advanced parameters, including lower-extremity kinematics and kinetics, may still reveal deficits in gait one year after surgery (Naili et al., 2017; Bahl et al., 2018; Outerleys et al., 2021).
Relationship between PROMs and objective gait measures Objective gait parameters showed a different recovery trajectory than subjective reports of physical function and pain. Scores on the KOOS and HOOS greatly improved within the first two months, while spatiotemporal gait parameters mainly improved between two and fifteen months after surgery. Similar discrepancies between PROMs, gait, and performance-based tests have previously been recognized in the literature (Bolink et al., 2016; Stevens-Lapsley, Schenkman & Dayton, 2011; Naili et al., 2017; Dayton et al., 2016; Luna et al., 2017; Mizner et al., 2011). For example, inverse recovery trajectories (i.e., early improvements in PROMs compared to worsening of performance-based outcomes) have been observed between KOOS/HOOS ADL scores and performance-based outcomes, including the 6 min walk test, stair climbing test, and timed up and go test, during the first month of recovery after TKA and THA (Stevens-Lapsley, Schenkman & Dayton, 2011; Dayton et al., 2016; Luna et al., 2017; Mizner et al., 2011). For sensor-derived gait parameters specifically, poor agreement with PROM scores has been found after TKA and THA (Bolink et al., 2016; Bolink, Grimm & Heyligers, 2015). On a similar note, Fransen et al. (2021) found that, although perceived walking ability and self-reported physical function improved, there were no improvements in the quality or quantity of daily life gait three months after surgery. The current study adds that the discordance between gait parameters and self-reported function scores is most prominent at two months after surgery, with the exception of parameters related to trunk motion. The general consensus is that physical function subscales of PROMs assess a different domain than performance-based tests and gait analysis (Fransen et al., 2019). This discrepancy may first be related to a strong relation of physical function subscales with pain (Stevens-Lapsley, Schenkman & Dayton, 2011), as was also apparent from the similarity between the recovery trajectories of the HOOS/KOOS Pain and ADL subscales in our study. One potential explanation for this is that improvements in pain directly translate to a more positive reflection on daily life performance, and that patients considered pain the main limiting factor in their daily life activities. Second, these self-reported scores ask about experienced difficulty during a wide range of activities, rather than how a patient executes a specific activity, which is inherently different from what these gait parameters measure. Finally, there is evidence that objective parameters of physical function are more sensitive to remaining functional deficits after TKA than PROMs (Naili et al., 2017), which may be attributed to early ceiling effects of PROMs.
Since improving mobility, specifically walking, is an important goal of joint replacement (Lange et al., 2017), these sensor-derived parameters may thus add a relevant dimension to the evaluation of physical functioning, although their clinical value still has to be demonstrated.
Limitations and future directions This study has a number of limitations which merit attention. First, we measured gait recovery in a well-defined cohort of patients with unilateral osteoarthritis without pain complaints in any other joint or previous joint replacement. While this was relevant for the aims of the current study, it limits the generalizability of our findings. Second, evaluation of physical function was limited to gait and turning, while other daily life activities, including sit-to-stand transfers and stair climbing, are also relevant for physical functioning after TKA and THA (Dobson et al., 2013). Third, gait parameters in this study were limited to spatiotemporal parameters and gait-related trunk kinematics. Other parameters, such as knee and hip kinematics, that can be derived from a different set-up of inertial sensors may provide additional information about gait recovery after TKA and THA, especially in light of remaining gait deficits (Bahl et al., 2018). While the current study touches upon the potential value of objective measurement of physical function, the actual value of clinical implementation of gait tests cannot be derived from our study results. Future studies with larger samples and a more diverse population are required to investigate the applicability of objective gait assessment systems to identify poor responders. Another valuable direction would be to explore whether such data can be used to adjust patient expectations during clinical visits and to further tailor post-operative care. Finally, there is a need for studies employing inertial sensors for remote monitoring during daily life, which may not only enable more efficient (digital) healthcare pathways in the future, but may also contribute data with greater ecological validity (Van Ancum et al., 2019; Takayanagi et al., 2019).
CONCLUSIONS This study showed that objective gait measures derived from inertial sensors are responsive to TKA and THA. Not only speed-related parameters, but also turning and trunk motion provide important information about functional status before and at two and fifteen months after joint replacement. There were no remaining gait differences between individuals after TKA or THA and healthy participants at fifteen months. Recovery trajectories of objective gait data were different from those of the KOOS and HOOS ADL subscales, with a marked discordance at two months after surgery. Altogether, these results strengthen the premise that sensor-derived gait metrics may provide meaningful information about recovery of physical functioning after TKA and THA that is not captured by self-reported ADL or pain scores.
ADDITIONAL INFORMATION AND DECLARATIONS Funding The Innovation Fund of the Sint Maartenskliniek sponsored this study. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript. Grant Disclosures The following grant information was disclosed by the authors: Innovation Fund of the Sint Maartenskliniek.
Bayesian dynamic modeling of time series of dengue disease case counts
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the model's short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov Chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The results showed that the best model includes first-order random walk time-varying coefficients for the calendar trend and first-order random walk time-varying coefficients for the meteorological variables. Besides the computational challenges, interpreting the results requires a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- and two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of the dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
Introduction Dengue is an arboviral disease caused by a Flavivirus, leading to high morbidity in children and adults in tropical countries of Asia and Latin America [1]. There are four genetically distinct but antigenically related dengue viruses (i.e., different serotypes) named DEN-1, DEN-2, DEN-3, and DEN-4. All serotypes can cause a spectrum of illness ranging from inapparent infection or mild fever to a potentially fatal syndrome characterized by hemorrhage, fever and shock [2]. The infective female Aedes aegypti mosquito is the main vector involved in transmitting the viruses causing dengue. The mosquito acquires the virus when it feeds on the blood of an infected human. Several studies show that climate is associated with mosquito ecology, the infectious agents mosquitoes carry, and the arboviral transmission of dengue disease [3] [4] [5]. Naish et al. (2014) [3] reviewed the studies associating climatic factors and dengue transmission, concluding that higher temperatures affect the rate of larval development, shorten the time to emergence of adult mosquitoes, increase the biting behavior of mosquitoes, and accelerate virus replication within the mosquitoes. Meanwhile, the combined effect of temperature and relative humidity impacts mosquito feeding behavior, vector survival, and the probability of the vector becoming infected and able to transmit dengue.
Epidemiological research on dengue incidence is based on passive surveillance data from case reports [5] [6]. Racloz et al. (2012) [5] reviewed early warning modelling in dengue disease, concluding that epidemiological modeling is constrained by limited data sources. The authors encouraged the collection of information at the spatial and temporal level on climatic and socio-environmental variables to develop models with stronger predictive capabilities, while Runge-Ranzinger et al. (2014) [6] concluded that passive surveillance provides the baseline for outbreak alert, which should be strengthened through the definition of appropriate alert thresholds.
Dynamic generalized linear models (DGLMs) are extensions of dynamic linear models [30] [31], based on two sets of equations: a measurement or observation equation and the transition or state equations. The observation equation establishes a link between observations and unobserved variables, and the transition equations describe the evolution of the state variables. DGLMs allow the inclusion of components modeling seasonality, trend, cyclicity and covariates [31]. The classic models for calendar trend are the first-order random walk model, the local linear trend model (first-order random walk plus trend) and the second-order random walk [32]. Modeling seasonality and cyclicity is accomplished through dummy variables or trigonometric series defined in the transition equations, and covariates are included with constant or time-varying coefficients [32]. DGLM parameter estimation has followed different approaches. Linear Bayes estimation with conjugate updating [30] [31], or an iteratively weighted Kalman filter and smoother accompanied by the expectation-maximization (EM) algorithm for the estimation of unknown hyperparameters [32], was applied by Chiogna and Gaetan [33] to explore the association between pollution covariates and respiratory diseases. Shepard et al. [34] applied likelihood-based inference for non-Gaussian state space parameters, based on importance sampling. DGLMs estimated by Markov Chain Monte Carlo (MCMC) simulations have been explored by Gamerman [35], Ferreira and Gamerman [27] (modeling dengue disease and meningitis with covariates and seasonal terms), Schmidt and Pereira [28], and Alves et al. [36], who included covariates with constant coefficients for time accompanied by covariates modeled by transfer functions. Malhão et al. [29] implemented DGLMs for time series of dengue cases, capturing temporal dependencies not explained by covariates, and modeling dengue over-mortality. Colombia is one of the countries with the highest incidence of dengue disease in the tropics, and it is testing dengue control by vaccination [37], a topic of interest among the research community [38]. The country possesses climatic, environmental and socio-geographic conditions favoring the growth and development of the dengue vector. The Aedes aegypti mosquito is found across more than 80% of the territory, which has altitudes between 1000 m and 2200 m above sea level, and Aedes albopictus (a forest and urban dengue vector) has also been reported [39]. Bucaramanga is among the Colombian cities with the highest annual dengue incidence for the 2008-2015 period. In 2010 and 2012 the city experienced incidence rates of 1515 and 279.93 cases per 100,000 people, respectively, while for the same years the incidence rates for the country were 657 and 221.9 cases per 100,000, respectively [39] [40]. The Aedes aegypti mosquito has been reported as the dengue vector in the city of Bucaramanga.
While vectorial surveillance studies did not exist in 2008-2015 to quantify the presence of vectors, their abundance, occurrence, distribution and other epidemiological parameters at monthly or weekly temporal scales for Bucaramanga, information on climatic variables such as environmental temperature, rainfall, solar radiation, and relative humidity is available from several sources at these temporal scales. These data offer opportunities to analyze the relation between time series of dengue cases and climatic variables, as Rúa-Uribe et al. (2013) [8] show for another Colombian city. The aim of this study is to model the association between time series of dengue case counts and meteorological variables, in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models, during the period January 2008 to August 2015. Additionally, we evaluate the model's performance in short-term prediction of dengue cases.
Data
Bucaramanga is a medium-sized city in Colombia, at 959 meters above sea level, with a population of 527,913 people (projected population, 2015), at the coordinates 7°07′07″N, 73°06′58″W. We collected dengue case counts for 2008-2015 in metropolitan Bucaramanga from the Surveillance National System of Public Health (SIVIGILA). The total dengue case counts (probable and confirmed cases of dengue and severe dengue plus dengue mortality) by epidemiological week (EW) were computed in the interval between the first EW of January 2008 and the last EW of August 2015, for a total of 396 EW. For the meteorological variables (MV), daily maximum temperature (°C), daily total rainfall (mm), daily maximum solar radiation (Watts/m²) and daily maximum relative humidity (%) were obtained from three stations of the Defense Corporation of the Bucaramanga Plateau (CDMB). Daily maximum temperature (°C) and daily total rainfall (mm/m²) were obtained from the Institute of Hydrology, Meteorology and Environmental Studies of Colombia (IDEAM) for two meteorological stations. Daily values for every variable were averaged by EW and by station, and then the weekly averages of all stations were averaged, obtaining one value per MV and EW.
Hierarchical dynamic Poisson models
We fitted Bayesian hierarchical dynamic Poisson models to dengue case counts. Let y_t be the case count for dengue in EW t (t = 1, ···, T and T = 396), and assume y_t ~ Poisson(λ_t) (Eq 1). The logarithm of the mean λ_t is modeled with two options: log(λ_t) = α + Σ_{j=1..J} β_j x_{t−1,j} (Eq 2) or log(λ_t) = α_t + Σ_{j=1..J} b_{t,j} x_{t−1,j} (Eq 3). The first option is the inclusion of a constant coefficient α for the calendar trend, where α is Normal with mean 0 and variance 10, which allows flexibility for the exploration of the parameter space. The second option is the inclusion of time-varying coefficients α_t for the calendar trend, evolving as a random walk α_t = α_{t−1} + w_t, w_t ~ Normal(0, 1/τ_α); for the Normal(3,0.2) prior, the mean of 3 for α_1 and α_2 on the exponential scale is close to the observed dengue case counts at time points 1 and 2, and 0.2 is a precision (variance of 5) that allows flexibility for these parameters. τ_α is the precision parameter with a Gamma(1,0.1) hyperprior, which represents a noninformative Gamma prior centered at 10 with variance of 100. In Eqs 2 and 3, the x_{t−1,j} (j = 1, ···, J and J = 4) are the mean-centered MVs: temperature (j = 1), rainfall (j = 2), solar radiation (j = 3) and relative humidity (j = 4). The β_j are constant coefficients for the lag-one MVs, and the b_{t,j} are time-varying coefficients for the lag-one MVs. Normal priors with mean 0 and variance 10 were assigned to the constant coefficients β for the covariates.
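To make the specification concrete, the following is a minimal sketch of the time-varying-coefficient model (Eq 3) written in Python with PyMC. The paper itself used WinBUGS 1.4, so this is an illustrative port rather than the authors' code: the function name and sampler settings are our own choices, it anticipates the RW1 priors for the b_{t,j} described in the next paragraph, and the missing-covariate imputation is omitted for brevity.

```python
import numpy as np
import pymc as pm

def fit_rw1_dglm(y, X, draws=1000, tune=2000, chains=3):
    """Dynamic Poisson model: log(lambda_t) = alpha_t + sum_j b_{t,j} x_{t-1,j},
    with first-order random-walk (RW1) priors on alpha_t and on each b_{t,j}."""
    T, J = X.shape
    with pm.Model() as model:
        # Calendar trend: RW1 with precision tau_alpha ~ Gamma(1, 0.1),
        # starting value ~ Normal(3, precision 0.2), as in the text.
        tau_a = pm.Gamma("tau_alpha", alpha=1.0, beta=0.1)
        alpha = pm.GaussianRandomWalk(
            "alpha", sigma=tau_a**-0.5,
            init_dist=pm.Normal.dist(3.0, sigma=0.2**-0.5), steps=T - 1)
        # One RW1 per covariate: precision ~ Gamma(1, 0.001) to smooth b_{t,j},
        # starting value ~ Normal(0, precision 0.1).
        bs = []
        for j in range(J):
            tau_b = pm.Gamma(f"tau_b{j}", alpha=1.0, beta=0.001)
            bs.append(pm.GaussianRandomWalk(
                f"b{j}", sigma=tau_b**-0.5,
                init_dist=pm.Normal.dist(0.0, sigma=0.1**-0.5), steps=T - 1))
        b = pm.math.stack(bs)                      # shape (J, T)
        log_lam = alpha + (b * X.T).sum(axis=0)    # X is (T, J), mean-centered
        pm.Poisson("y", mu=pm.math.exp(log_lam), observed=y)
        idata = pm.sample(draws=draws, tune=tune, chains=chains)
    return idata
```

Note that each Gamma precision is converted to the innovation standard deviation (tau**-0.5) expected by the random-walk prior, mirroring the precision parameterization used in WinBUGS.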
The time-varying coefficients for the lag-one covariates received first-order random walk (RW1) Normal priors, b_{t,j} = b_{t−1,j} + v_{t,j} with v_{t,j} ~ Normal(0, 1/τ_{bj}); for the Normal(0,0.1) prior, we let b_{1,j} start centered at zero, with a precision of 0.1 (variance of 10), allowing a large space for exploring the parameter. Gamma(1,0.001) prior distributions (a Gamma centered at 1000 with variance of 1,000,000) are assigned to the precision parameters τ_{bj}. The reason for this prior is that we constrain the variance of the b_{t,j} to be very small, smoothing the trend of the time-varying coefficients and allowing us to visualize the smoothed trend of the covariate effects. We modeled missing data in the covariates by imputing the empty values, assuming a Normal(μ_{t−1}, τ_j) prior for t = 1, ···, T (T = 396), where μ_{t−1} is the lag-one-week value of the centered meteorological variable and τ_j is a precision parameter with a Gamma(0.1,0.1) prior for temperature, rainfall, solar radiation, and relative humidity; this Gamma prior is informative, centered at 1 with variance of 10, slightly constraining the imputed values of the covariates to have a small variance, without restricting them to high-variance values. Models were fitted applying MCMC using the WinBUGS 1.4 software [41], with 3 chains, 50,000 iterations in total, 46,000 burn-in iterations and a thinning of 4, obtaining a final sample of 1000 iterations per chain. Convergence was assessed by the Gelman-Rubin diagnostic [42] and visual inspection of the simulation chains. Model selection was accomplished using the deviance information criterion (DIC) [43]. When DIC measures are used for model selection, models with a small mean deviance D̄, a small effective number of parameters p_D and a small DIC are selected for inference. After fitting all models and selecting the final model for inferences, we were interested in evaluating the short-term prediction performance of the selected final model. We obtained predictions at several time points during the study period T = 396. We selected estimation periods 1 to t, where t was in increments of 20 EWs, starting at the 20th EW of the study period and ending at the 380th EW; we thus obtained 19 upper bounds for the estimation period 1 to t. Then we fitted models for periods 1 to p, where p = t + k (k = 1, ···, 4), and the k are prediction periods (one, two, three or four weeks ahead). We used the same conditions defined above for the MCMC simulations. Samples from the posterior predictive distribution for the prediction periods k were obtained, and the mean and 95% credible intervals (CIs) for the cases of dengue were calculated. To evaluate the prediction performance of the final model, we calculated the mean absolute percentage error (MAPE) per MCMC iteration between the predicted cases of dengue y_k^pred and the observed case counts y_k at the prediction periods k, MAPE = (1/k) Σ_k |(y_k^pred − y_k)/y_k|. We present the median MAPE of the posterior predictive distribution for all the estimation periods t, for one, two, three and four weeks ahead, as a measure of short-term model performance for predicting dengue case counts.
Exploratory data analysis
The total number of cases of dengue disease for the study period was 26,755. The weekly case count averaged 67.6, with a median of 52 (range 7 to 247). There were three dengue disease outbreaks, in 2010, 2013 and 2014, with small case counts in 2011 and 2012 (Fig 1). The partial autocorrelation function for the time series of dengue case counts (Fig 1) suggests a first- or second-order autoregressive process.
Maximum weekly temperature averaged 27°C, with a minimum of 23.6°C, a maximum of 30.4°C, and 18 missing values. Mean and median values of weekly rainfall were 2.7 mm/m² and 3.6 mm/m², respectively, with a minimum of 0, a maximum of 24.8 mm/m², and 11 missing values. Weekly maximum solar radiation averaged 946.5 Watts/m², with a median of 940.9 Watts/m², a minimum of 733.5 Watts/m², a maximum of 1279 Watts/m², and 66 missing values. Maximum weekly relative humidity averaged 94.2%, with a minimum of 79.2%, a maximum of 99.5%, and 63 missing values. While the time series for temperature and relative humidity display an upward trend over the 396 EWs, solar radiation decreases, and precipitation shows highly volatile behavior. Dengue disease case counts are positively correlated with temperature, and negatively correlated with solar radiation. There is no apparent association between dengue case counts and precipitation or relative humidity. In Fig 3, linear correlations between the meteorological variables and dengue case counts show a positive and moderate correlation with temperature and negative and moderate linear correlations with relative humidity, solar radiation and rainfall. Relative humidity and solar radiation display high positive correlations with their own lag-1 and lag-2 values, followed by temperature and rainfall. Rainfall, relative humidity and solar radiation are positively and moderately correlated, while rainfall and temperature show a negative and moderate correlation. Finally, we highlight the negative and low correlation between solar radiation and temperature.
Dynamic Poisson models
In this section, we begin by presenting the results from the models without covariates (only a constant coefficient (CC) (α) or RW1 or RW2 time-varying coefficients (TVCs) (α_t) for the calendar trend). We define calendar trend as the pattern observed in the model's parameters over the EWs in the entire study period (2008-2015), not the trends observed over any given epidemiological year. We then present the results from models including CC (β_j) for the covariates, and CC (α) or RW1 or RW2 TVCs (α_t) for calendar trend. Finally, we exhibit the results from models including RW1 TVCs (b_{t,j}) for the covariates with CC (α) or RW1 or RW2 TVCs (α_t) for calendar trend.
Models without covariates. For the models without covariates, the deviance and DIC for the model with CC (α) for calendar trend are 15,959.8 and 15,960.8, respectively. For the models with RW1 or RW2 TVCs (α_t) for trend, the respective deviance and DIC are 2716.4 and 2901.1 for the RW1 model, and 2901.5 and 2990.0 for the RW2 model. We conclude that the model with CC (α) for trend shows a worse fit than the models with RW1 or RW2 TVCs (α_t) for calendar trend. The models with RW1 or RW2 TVCs (α_t) for calendar trend have similar DIC, while the model with RW1 TVCs (α_t) for calendar trend offers the best fit (small deviance).
Models with CC (β_j) for the covariates. Table 1 presents the DIC selection measures from the simple (single-covariate) Poisson regression models with CC (β_j) for the covariates, and CC (α) or RW1 or RW2 TVCs (α_t) for calendar trend. First, for every meteorological variable, the model with CC (α) for calendar trend and CC (β_j) for the covariates corresponds to the simple Poisson regression, while the models with RW1 or RW2 TVCs (α_t) for trend and CC (β_j) for the covariates are the simple dynamic Poisson regression.
Second, the simple Poisson regression models display a worse fit than the simple dynamic Poisson regression models, evidenced by high DIC and deviance values. Third, the fit of the simple dynamic Poisson models with CC (β_j) for the covariates and RW1 TVCs (α_t) for calendar trend is better than that of models with RW2 TVCs (α_t) for calendar trend. Table 2 displays parameter estimates of the CC (β_j) for the covariates, from the models with CC (α) or RW1 or RW2 TVCs (α_t) for calendar trend in Table 1. Parameter estimates for the CC are 0.207 for temperature (95% CI: 0.197, 0.217); -0.309 for solar radiation (95% CI: -0.324, -0.294); and -0.026 for rainfall (95% CI: -0.030, -0.022), from models with CC (α) for calendar trend, suggesting a strong association between these variables and the weekly case counts of dengue. There is no statistical association between cases of dengue disease and relative humidity (0.026, 95% CI: -0.029, 0.031). These parameters correspond to the simple Poisson regression model. Although models with CC (α) for calendar trend show a strong statistical association between covariates and dengue, the point estimates and 95% CIs from models with RW1 or RW2 TVCs (α_t) for trend show a weak association between cases of dengue and the meteorological variables, while these models present the best fit (small DIC and deviance).
Models with RW1 TVCs (b_{t,j}) for the covariates. Next, we fitted models with CC (α) or RW1 or RW2 TVCs (α_t) for calendar trend, with RW1 TVCs (b_{t,j}) for the lag-one covariates. Information criteria for these simple dynamic Poisson regression models with TVCs (b_{t,j}) for the covariates are presented in Table 3. For temperature, the DICs for the models with CC (α) or RW2 TVCs (α_t) for calendar trend are higher than for the model with RW1 TVCs (α_t) for calendar trend. The DICs for rainfall display similar results to temperature, i.e., the DICs for the models with CC (α) or RW2 TVCs (α_t) for trend are higher than for the model with RW1 TVCs (α_t) for calendar trend. For solar radiation, the DIC for the model with RW2 TVCs (α_t) for calendar trend is smaller than for the models with RW1 TVCs (α_t) and CC (α) for calendar trend. Lastly, the model with RW1 TVCs for relative humidity plus CC (α) for calendar trend has the smallest DIC for this covariate (DIC = 2490.9), but the effective number of parameters (p_D) is negative (p_D = -539.1), which makes this model a poor option. The DICs from the models with RW1 or RW2 TVCs (α_t) for calendar trend do not present negative p_D. The smallest DIC is for the model with RW1 TVCs (α_t) for calendar trend.
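Both quantities used for model comparison above, and the MAPE applied below to the forecasts, are easy to reproduce from MCMC output. The following is a minimal sketch, assuming Python with NumPy/SciPy rather than the WinBUGS internals actually used; the function names are illustrative.

```python
import numpy as np
from scipy.stats import poisson

def dic_poisson(y, lam_draws):
    """DIC = Dbar + pD for a Poisson model, with D(lam) = -2 log p(y | lam).
    lam_draws has shape (n_draws, T): one vector of Poisson means per draw."""
    dev = -2.0 * poisson.logpmf(y, lam_draws).sum(axis=1)   # deviance per draw
    d_bar = dev.mean()                                      # posterior mean deviance
    d_hat = -2.0 * poisson.logpmf(y, lam_draws.mean(axis=0)).sum()
    p_d = d_bar - d_hat                                     # effective n. of parameters
    return d_bar + p_d, d_bar, p_d

def median_mape(pred_draws, y_obs):
    """Median over MCMC draws of (1/k) * sum |(y_pred - y)/y| over k forecast weeks."""
    rel_err = np.abs((pred_draws - y_obs) / y_obs)          # shape (n_draws, k)
    return np.median(rel_err.mean(axis=1))
```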
At this stage of the analysis, we identified the models with RW1 TVCs (α_t) for calendar trend plus RW1 TVCs (b_{t,j}) for the covariates as the models offering the best fit (smallest deviance and DIC). Then, in addition to the simple dynamic Poisson regression models with TVCs (b_{t,j}) for the covariates, we fitted multiple (multiple-variable) dynamic Poisson models, presenting the information criteria in Table 4. The DIC measures for all the models with RW1 TVCs (α_t) for trend plus RW1 TVCs (b_{t,j}) for the meteorological variables range from 2831.4 to 2897.6 (Table 3). The model with RW1 TVCs for solar radiation and relative humidity (b_{t,SR} + b_{t,RH}) presents the smallest DIC (DIC = 2831.4) and effective number of parameters (p_D = 133.5), followed by the model including all the MVs in the predictors (b_{t,T} + b_{t,RF} + b_{t,SR} + b_{t,RH}) (DIC = 2847.2), which presents the smallest deviance. We selected this saturated model for inference instead of the model with solar radiation and relative humidity, because the model with the lowest DIC is also the model with the most imputed variables (solar radiation and relative humidity). We include the WinBUGS code for the selected model in S1 File, and convergence diagnostic measures in S1 Appendix for the model parameters in Table 4. Finally, from the model with TVCs for all the meteorological variables (b_{t,T} + b_{t,RF} + b_{t,SR} + b_{t,RH}) in Table 3, we plot the time-varying parameter estimates (mean and 95% CIs) in Fig 4. The TVCs for temperature and solar radiation present higher variability than the coefficients for relative humidity and rainfall. Point estimates for temperature start at values higher than zero, in contrast with relative humidity, solar radiation and rainfall, which begin almost at zero.
Short-term prediction of dengue case counts. We use the model with RW1 TVCs (α_t) for calendar trend plus TVCs (b_{t,j}) for the covariates, log(λ_t) = α_t + Σ_{j=1..4} b_{t,j} x_{t−1,j} (j = 1, temperature; j = 2, rainfall; j = 3, solar radiation; j = 4, relative humidity), to obtain forecasts at several time points during the study period 1 to T (T = 396). Table 5 presents the MAPE between the predicted mean and the observed dengue case counts for short-term prediction periods of one, two, three and four weeks, estimated at selected EWs after the first EW of 2008, from the model selected for inferences. A quick inspection reveals that the highest MAPEs correspond to the EWs associated with the outbreaks in 2010, 2013 and 2014. Fig 6 shows the MAPE results presented in Table 5. In the figure, we added a horizontal line at 25% to aid the inspection of the MAPEs. We conclude that for most periods the MAPEs are under 25%, meaning that if we fitted the model for different estimation periods over the course of the study (January 2008 to August 2015) we could estimate the observed dengue case count one or two weeks ahead with an error of no more than 25%.
Discussion
In this report, DGLMs are employed to model time series of dengue disease case counts and meteorological variables. The DGLMs for the data at hand included two components: the first subtracts the temporal pattern, and the second models the covariate effect. We observed weak time-varying associations between cases of dengue disease and solar radiation and temperature. Time-varying associations mean that the dengue case counts are associated with solar radiation and temperature changes over time, where some intervals show a positive association, while in other intervals the association is negative. DGLMs are a straightforward way to deal with count data, without the need to transform or alter the response variable, accounting for covariates with natural time-varying behavior. For parameter estimation, we applied MCMC using WinBUGS 1.4, providing the flexibility to include constant and time-varying coefficients for calendar trend and covariates. There are few examples of studies including time-varying coefficients. Lee and Shaddick (2008) [44] fit DGLMs to pollution data and respiratory diseases, based on the block sampling algorithm from Knorr-Held (1999) [45]. Ruiz-Cardenas et al.
(2012) [46] employed the Integrated Nested Laplace Approximation (INLA) to illustrate the fit of simulated and real time series of counts, using augmented data with the inclusion of time-varying coefficients for calendar trend and covariates. Our findings can be summarized as follows: in the models without covariates, the best model was the one with RW1 TVCs (α_t) for trend. Within the models with CC (β_j) for covariates, we found the worst fit in models with CC (α) for trend, which display a strong association (95% CIs not including zero) between weekly cases of dengue and temperature, solar radiation and rainfall, but not relative humidity. However, models with RW1 or RW2 TVCs (α_t) for calendar trend had a good fit, revealing a weak association between dengue and the covariates. These findings are important because simple and multiple Poisson regression models with constant coefficients for the covariates are statistical methods commonly employed to model counts of infectious diseases like dengue [4]. For example, Hii et al. [16] modeled dengue and weather variables, applying a Poisson multiple regression model with piecewise linear spline functions for the covariates and constant coefficient terms to model autoregression, seasonality and trend. They validated the model by forecasting cases of dengue from week 1 of 2011 up to week 16 of 2012 using weather data alone. In the class of models with RW1 TVCs (b_{t,j}) for the covariates, the best model corresponds to the simple dynamic Poisson model with RW1 TVCs (α_t) for calendar trend. After fitting the simple dynamic regression models, we fitted multiple dynamic regression models with several combinations of TVCs (b_{t,j}) for the covariates, and we selected the model including all the meteorological variables. Our final model delineates the time-varying association between the covariates and cases of dengue, although the inspection of the mean estimates and 95% CIs of the RW1 TVCs (b_{t,j}) for the covariates shows a weak association. In the literature associating dengue and weather variables, many of the modeling strategies show a strong association (evidenced by low p-values) between dengue and meteorological variables, with different lag periods. As an example, Xu et al. [19] established an association between absolute humidity (relative humidity adjusted by temperature) and dengue cases using a Poisson distributed-lag non-linear model, with cubic splines for the covariates and accounting for autoregression with constant coefficients for the lag-one and lag-two response. We also evaluated the short-term predictive performance of the selected model, concluding that it enables relatively accurate (<25% error) prediction of weekly dengue case counts at one or two weeks ahead, although the predictions are strongly influenced by volatility in the weeks preceding the prediction periods, with high volatility associated with high MAPE in the predictions, as occurred at the peaks of the 2010, 2013 and 2014 outbreaks in Bucaramanga. Before finishing our discussion, we acknowledge some study limitations. The dengue case counts used in the study corresponded to the probable and confirmed cases reported to the official public health surveillance system in Colombia. The weekly dengue data was the sum of the dengue and severe dengue cases per EW. Romero-Vega et al. (2014) [47] concluded that the expansion factor (the factor by which the reported cases should be multiplied to adjust for underreporting) of dengue was 7.6 for 2013, which is high.
This implies that efforts to decrease underreporting must be undertaken to improve data quality for the entire surveillance system. It would be difficult to quantify the impact of underreporting on our conclusions, but still, the methods we used remain valid for adjusted time series of dengue. The covariate data (time series of temperature, rainfall, solar radiation and relative humidity) were a composition of several time series at daily and hourly temporal scales from several meteorological stations at different locations in the city. We summarized the data, averaging them across the different temporal scales and stations, and consequently losing some information. However, at some point the analyst must decide how to summarize the information into input variables for a modeling exercise. If the temporal scale is reduced (from weekly to daily data), the dengue case counts will be lower, and the Poisson models presented in the study could fit the data much better than Normal models. One of this study's referees remarked on the absence of vector data in the study. We explored several sources of vector data in the city, but we did not find any data at the temporal scale of the study. We recognize that the inclusion of data on the distribution, presence and ecology of the vector would improve the conclusions of the study, but this is an opportunity to show that dengue in Colombia, and particularly in Bucaramanga, is a neglected disease, despite its huge impact on the population and the allocation of resources for dengue research (Villabona-Arenas et al., 2016) [38]. One interesting experience in ongoing vectorial surveillance is in the city of Medellín, Colombia. Rúa-Uribe (2016) [48] reported that the Health Office of this city designed an entomological surveillance system using mosquito larval traps. We hope that the results of this interaction between the public sector and the research community will be disseminated across the country, and that similar surveillance systems will be applied in all Colombian cities affected by arboviral diseases. In the meantime, for the city of Bucaramanga, we applied a dynamic Poisson model with time-varying coefficients for the covariates and calendar trend, which helps to establish the association between climatic factors and dengue case counts at a small temporal scale, providing a prediction model within the bounds of the limitations presented in the study. Forecasting models are commonly deployed in the dengue research literature. Earnest et al. [10] compared the forecasting ability of the ARIMA model and the two-component Knorr-Held model (a seasonal and epidemic Bayesian hierarchical time series model) to predict out-of-sample cases of dengue. They found similar predictive ability (comparable MAPE values) for the Bayesian K-H model and the ARIMA model. Forecasting models of dengue disease usually account for the cyclical or seasonal behavior of the time series at hand. Earnest et al. [10] and Hii et al. [16] included seasonal trend by means of sinusoidal terms with a trigonometric series structure. At a previous stage, we included seasonal terms, but we removed them from the models, allowing the time-varying coefficients for calendar trend alone to account for dengue incidence trends. We establish the short-term predictive performance of a model with time-varying coefficients (α_t) for calendar trend and time-varying coefficients (b_{t,j}) for meteorological covariates.
We found a moderate predictive ability of the model to forecast cases of dengue disease at one or two weeks ahead, which could be used by public health authorities interested in employing predictive models to support dengue surveillance and control in Colombia. In the future, we will explore the study models on datasets from other cities of Colombia, because the environmental and physical conditions are generally similar across many cities and municipalities. The models presented in the study are not limited to climatic variables. They can also include data from vectorial studies, socioeconomic variables and more, if these are available at weekly or monthly temporal scales. In conclusion, we found that dynamic generalized linear models can forecast dengue cases at one or two weeks ahead in Bucaramanga, based on temperature, rainfall, solar radiation and relative humidity, and the models allow us to explore the association between weekly cases of dengue and these covariates over time.
Greenhouse Gases, Carbonyls, and Volatile Organic Compounds Surface Flux Emissions at Three Final Waste Disposal Sites Located in the Metropolitan Area of Costa Rica The surface flux emissions for volatile organic compounds (VOCs) (alcohols and aromatic species), priority carbonyls and greenhouse gases were measured at three different final disposal sites for urban solid waste located in the metropolitan area of Costa Rica, between July and October 2014. The emission fluxes were determined using the static sampling chamber technique coupled to two different adsorption tubes: active charcoal (Supelco, ORBO 32) to capture BTEX and alcohols, and 2,4-DNPH coated silica gel (SKC, 226-119) for carbonyls. As for the VOCs, the BTEX, alcohols, and carbonyls total fluxes were in the range of 3 to 258, 1 to 318 and 0.4 to 8.5 mg/(m²·d), respectively. The magnitudes per site were in the following order: La Carpio > El Huaso > Rio Azul. Ethanol and BTEX presented a high correlation in all the cases, possibly because they share the same sources or formation mechanisms. The emission flux spatial distributions among the sites were very variable and dependent on the location of the active cells and their age. Only La Carpio showed a more homogeneous distribution, due to its intermediate age.
Introduction
In Costa Rica, according to the statistics reported by the Health Secretary, most municipal solid waste (MSW) is disposed of daily in landfills and controlled open disposal sites. This high percentage is possible since 84% of the country's households have a waste collection service; as for the rest, 5% bury their wastes, 10% burn them, and the remaining percentage improperly dispose of them in water bodies [1]. At both the cantonal and national levels, MSW collection is a permanent activity where there is a continuing tendency to use landfills and dumps for final waste disposal and treatment. The operating costs are around $40/ton, which is higher than in Mexico ($11-12/ton), Colombia ($8/ton) and Chile ($13-22/ton). Despite that, the MSW generation rates increase every year, with the negative consequence of reducing the useful life of the disposal sites [2].
According to the national greenhouse gas emissions inventory [3], during 2012, 70.20 Gg of methane were released from solid wastes treated in landfills, of which only 16.44 Gg were recovered. Given this scenario, the development of GHG mitigation strategies in the solid waste sector is of great importance, since in the past GHG inventory this category accounted for 4.4% of the total emissions, which exceeds the world average of 3.6% [4].
The composition of the gas produced in landfills, as well as its generation rate, depends both on the characteristics of the solid wastes and on various environmental factors, among which we can mention the presence of oxygen in the landfill, temperature, and moisture content [7]. The anaerobic biodegradation of the organic matter contained in the solid waste includes three successive stages: acidogenesis, acetogenesis and methanogenesis, to which a residue stabilization phase is added [8]. During the first two stages, the hydrocarbon matrices contained in the organic residue decompose, generating monomers and fermenting alcohols, which produce volatile fatty acids and esters. The continuation of this reaction leads to the formation of acetic acid, hydrogen and carbon dioxide. These compounds are then consumed in methane production during the methanogenesis phase [9] [10]. The main families of VOCs emitted are alcohols, aldehydes and ketones, chlorinated hydrocarbons, terpenes, aromatic compounds, alkanes and alkenes. This formation or production of VOCs results from associated or competitive side reactions, i.e., the monomerization of polymers in organic matter, the reorganization of organic matter during humification, or the separation of compounds initially present in the residue [11].
The gases generated as a product of the waste decomposition expand and accumulate internally, leading to an increase in volume and pressure. Since the pressure inside the landfill is higher than the atmospheric pressure, natural convection tends to be the main mechanism governing the rate of gas emissions in landfills [12]. Several methodologies have been developed to determine the surface flux emissions generated in landfills, which use both direct and indirect methods of measurement applied continuously or discretely [13].
The direct flux measurement system is based on the static chamber, allowing only one-time measurements. In this technique, a chamber is used to enclose a small surface area at a defined sampling site, while a controlled sweeping zero-air flow is introduced at a rate which exceeds the gas release rate from the covered surface. This sweeping zero air mixes with the landfill gases coming from the surface and transports these gases through an outlet port, which is connected directly to automatic analyzers or adsorption tubes for the gases of interest [14].
In the present work, the fugitive emission fluxes of VOCs, methane and carbon dioxide generated in three final waste disposal sites located in the Metropolitan Area of Costa Rica were measured in order to analyze their temporal and spatial variability.
Sampling Sites Description
The study took place in three urban solid waste disposal sites located in the metropolitan area of Costa Rica, during 2014. Two of the sites are considered to be fully functional landfills, and both are still active. They receive solid wastes from residential, commercial and industrial areas of the major cities of Costa Rica, following a sanitary landfill design for waste treatment. The first site is La Carpio (open since 2002), located at 9.96N -84.15W, with an elevation of 994 masl, ambient temperature ranging 17°C-24°C, 2000 mm annual precipitation, and a waste input of around 1250 tons per day. The second site is El Huaso (open since 2005), located at 9.85N -84.06W, with an elevation of 1240 masl, ambient temperature ranging 24°C-25°C, 2000 mm annual precipitation, and a waste input of around 1200 tons per day. The third site is Rio Azul, located at 9.89N -84.03W, with an elevation of 1189 masl, ambient temperature ranging 17°C-24°C, 2500 mm annual precipitation, and a waste input of around 1200 tons per day. The latter is the oldest and started as an open dump site (1965), which later turned into semi-landfill management (2002). It did not have geomembrane lining or an internal tubing network for biogas collection, and it was closed in 2007. At each site, the waste-covered area was estimated from the project plans to calculate a sampling grid based on the Mexican Standard NMX-AA-132-SCFI-2006, regarding soil sampling for contaminated soils. The working front and uncovered solid waste areas were excluded from the sampling grid. The total sampling points were 19 for La Carpio, 10 for El Huaso and 20 for Rio Azul. The sites were visited twice: the first campaign in July, during the morning, and the second one in October 2014, in the afternoon. July is a transition month between summer and the rainy season, while October is mostly rainy season. A higher ambient air temperature occurs in the afternoon compared to the early hours of the morning for those times of the year.
Sampling Flux Devices
The sampling devices consisted of plastic flux chambers adapted to sample each group of target compounds. For methane sampling, a static chamber was built using a 30 L plastic cylindrical recipient provided with an internal fan (for homogeneous mixing), a temperature sensor and a sampling port located in the middle of the body structure. A glass syringe was used to take 12 ml samples and store them in 10 ml vacutainer glass tubes, at 0, 5, 10 and 15 minutes. The collected tubes for each sampling point were placed in a sealed plastic bag and kept in a cold box with ice until their arrival at the laboratory, and moved to a fridge at 4°C before the analysis.
For the carbon dioxide flux measurement a similar chamber was used, but with two sampling ports connected to a closed recirculation system through an infrared sensor made by LI-COR (LI-7200). This set-up allowed real-time measurements of carbon dioxide at 10 Hz, during 15 to 30 minutes, depending on the flux magnitude, to avoid sensor saturation.
For the VOC fluxes, a different chamber setup was used. The system consisted of a 30 L chamber with two ports: the first one located on the lower side and attached to an air scrubbing system to inject a zero carrier gas through the chamber; the second one on the upper part, connected to a sampling medium and a portable vacuum pump (Sensidyne). This system was used with two different sampling tubes: one with active charcoal (Supelco, ORBO 32) to capture BTEX and alcohols, and a second with 2,4-DNPH coated silica gel (SKC, 226-119) for carbonyl sampling. The tubes were wrapped in aluminum foil to protect them from sunlight during and after the sampling. The pump flow rate was 0.5 liters per minute, which was also about the same as the fresh air injection rate, for 1 to 3 hours. The collected sorption tubes were transported at 4°C to the laboratory and moved to the fridge before the analysis.
Laboratory Analysis
For the methane analysis, each lot of four tubes was analyzed in an Agilent 7890A gas chromatograph using a flame ionization detector. The analysis conditions were: injection volume 300 μl, splitless inlet temperature 200°C, detector temperature 300°C, helium carrier gas flow 12 ml·min⁻¹, capillary column PLOT-Q and oven temperature 35°C. The samples were quantified by interpolation on a calibration curve made by dilution using a certified-concentration methane cylinder (Air Liquide). The standards were prepared in vacutainer tubes to replicate the sample storage conditions. The concentration results from the four tubes were plotted against the sampling time, and a regression line was fitted. The slope of the fitted line represents the emission rate for methane. This change in volumetric concentration was converted to a mass flux by using the ideal gas law. The methane flux, F (g·m⁻²·d⁻¹), is calculated as in Equation (1): F = (ΔC/Δt)·(P·V·M)/(R·T·A), where ΔC/Δt is the slope of the fitted line, P is pressure, V is the chamber volume, M is the molar mass of methane (16 g/mol), A is the surface area covered by the chamber, T is the chamber temperature (kelvin), and R is the gas constant [14].
The activated charcoal tubes were broken to remove and divide the sampling media (front and back sides). Each part was extracted in 2 ml of carbon disulfide and analyzed for BTEX and alcohols using an Agilent 7890A gas chromatograph with a flame ionization detector. An HP-WAX capillary column was used with different analysis conditions, according to NIOSH methods 1400 and 1600.
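As a numerical illustration of Equation (1), the sketch below fits the regression line to the four vacutainer concentrations and converts the slope into a daily mass flux. It assumes Python/NumPy, and the chamber dimensions and concentrations are illustrative values, not measurements from this study.

```python
import numpy as np

R = 8.314        # gas constant, J mol^-1 K^-1
M_CH4 = 16.0     # molar mass of methane, g mol^-1

def methane_flux(t_min, c_ppmv, pressure_pa, temp_k, volume_m3, area_m2):
    """CH4 flux (g m^-2 d^-1): slope of concentration vs. time (ppmv/min)
    converted to a mass flux with the ideal gas law, as in Equation (1)."""
    slope = np.polyfit(t_min, c_ppmv, 1)[0]                        # ppmv per minute
    mol_per_min = slope * 1e-6 * pressure_pa * volume_m3 / (R * temp_k)
    return mol_per_min * M_CH4 * 60 * 24 / area_m2

# Four samples taken at 0, 5, 10 and 15 minutes (illustrative numbers):
f = methane_flux([0, 5, 10, 15], [2.1, 9.8, 17.5, 25.6],
                 pressure_pa=90_000, temp_k=298.0,
                 volume_m3=0.030, area_m2=0.07)                    # ~0.56 g m^-2 d^-1
```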
The carbonyl tubes were also opened, and the 2,4-DNPH coated silica was divided into front and back portions. They were extracted in 3 ml of acetonitrile (HPLC grade) and analyzed by liquid chromatography (ICS-3000) with ultraviolet-visible detection. The analysis conditions were the same as described by method USEPA TO-11A. Finally, the VOC emission rates were calculated using the species concentrations (C), chamber volume (V), surface area (A), and sampling time (t), according to Equation (2): F = C·V/(A·t).
Quality Controls
The laboratory analysis methods were validated according to the guidelines established by EUROCHEM (2014) [15]. An estimated detection limit for the flux measurements was calculated for each parameter, as presented in Table 1. For the sampling campaigns, a field blank was prepared each day to check for possible external contamination during transportation or storage. The blanks were tested, with no significant contamination found for any carbonyl or VOC. The performance of the entire analytical system was checked by analyzing duplicates and tubes with known concentrations. Concentrations measured in duplicate samples were in good agreement, with a relative standard deviation of less than 15%. The validity of the sampling was checked by comparing the mass of analyte quantified in the front and the back side of the capture tube. If the front/back ratio was below ten, the sample was discarded and repeated. A normality test was performed to determine the data distribution and the further statistical treatment to be applied. The Shapiro-Wilk method and Q-Q plots were used to check the data distribution for each site. Both the visual and the statistical method indicated non-normal flux distributions for all the measured parameters (p-values ≤ 0.05). Some cases, like methane (Figure 1), exhibited non-normality in the statistical test but a semi-normal tendency in the plots. For a more conservative data analysis, further statistical tests were of the non-parametric type.
Surface Flux Emissions
Table 2 presents the average surface flux emissions per site; the alcohols determined include ethanol, 2-propanol and tert-butanol, and the carbonyls include the fifteen priority compounds established by USEPA. From the results, the greenhouse gas surface fluxes were the highest of all, followed by BTEX, alcohols, and carbonyls. This is the expected behavior when solid wastes are buried, due to the anaerobic conditions that generate mostly methane and, to some extent, carbon dioxide. The magnitude of the VOC emissions depends on the soil gas permeation, cell conformation, leachate residence time and waste depth [16].
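Given the variables listed with it, Equation (2) admits the dimensionally consistent reading F = C·V/(A·t); the sketch below implements that reading under our own assumption, with an illustrative helper name and example values.

```python
def voc_flux(conc_mg_m3, volume_m3, area_m2, hours):
    """VOC flux (mg m^-2 d^-1) per Equation (2), read here as F = C*V/(A*t):
    species concentration C referred to the chamber volume V, over the covered
    area A and the sampling time t (converted from hours to days)."""
    return conc_mg_m3 * volume_m3 / (area_m2 * hours / 24.0)

# e.g., 1.2 mg/m^3 of ethanol, 30 L chamber, 0.07 m^2, 2 h of sampling:
f_ethanol = voc_flux(1.2, 0.030, 0.07, 2.0)   # ~6.2 mg m^-2 d^-1
```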
From Figure 2, methane fluxes showed higher data dispersion for El Huaso, followed by Rio Azul and La Carpio. These results are explained by important differences in the spatial surface distribution of the fluxes, which is very specific to the dynamics present at each sampling site. The methane flux magnitude was found to be El Huaso > La Carpio > Rio Azul, despite the fact that La Carpio should have the conditions for higher fluxes due to its age and waste input. However, this landfill is the only one that has a biogas-extracting tubing network provided with a central unit that generates a vacuum in the whole system. This design probably causes less methane loss through the soil layers, which results in lower fluxes compared to El Huaso. In the latter, a tubing network for biogas extraction also exists, but there is no centralized unit to collect and burn the gas. Instead, each major biogas extraction well has a natural draft burner, manually ignited each time the flame extinguishes. This non-pressurized system could allow a higher surface flux of methane to the atmosphere because the biogas is not forced to go through the pipes. As for Rio Azul, this site presented the lowest average value since it is an old, already closed waste disposal site with no geomembrane or inner biogas extraction tubing. It is very likely that most of the carbon stock is already depleted, at least the fast-degradable fraction.
Figure 3 shows the flux comparison for carbon dioxide at the same three sites. Similarly to methane, the emissions were higher for El Huaso, followed by La Carpio and Rio Azul. The main difference with the methane results is a higher data dispersion at all the sites, especially at La Carpio, which points to a greater spatial variability.
As for the VOCs, the BTEX, alcohols, and carbonyls total fluxes were in the range of 3 to 258, 1 to 318 and 0.4 to 8.5 mg/(m²·d), respectively. The magnitudes per site were in the following order: La Carpio > El Huaso > Rio Azul. These VOCs come from the solid waste composition and decomposition processes, where the soluble ones tend to go with the landfill leachate. Most of them are intermediaries in methane and carbon dioxide formation [17]. La Carpio is the landfill that receives the highest solid waste input per day and moves more cubic meters of soil than any other. This could explain why La Carpio has the greater VOC soil emissions; the solid waste composition could also play a vital role, but this was unknown at the time of this project. However, major composition differences are not expected between La Carpio and El Huaso, since both receive urban solid wastes from the same metropolitan area of Costa Rica. Rio Azul presented the lowest VOC flux, mainly because it was closed many years ago, so most of the organic matter is decomposed. El Huaso and Rio Azul showed fluxes in the following magnitude order per group: BTEX > Alcohols > Carbonyls, unlike La Carpio, where the order is: Alcohols > BTEX > Carbonyls. In the BTEX group, the highest flux was for ethylbenzene, found in La Carpio and El Huaso. For alcohols, ethanol was the most important for both active landfills' emissions, which is related to it being a by-product of the acidogenesis stage during the anaerobic biodegradation of the buried solid wastes [18]. In the case of carbonyl fluxes, acetone and acetaldehyde were the greatest among all. Both can be formed in significant quantities during the organic matter decomposition processes. However, they also can be present in industrial
and domestic wastes because of the use of several chemical products and solvents containing these compounds. For example, acetaldehyde is often employed in the food industry as an additive, and it is also produced from natural sources such as fruit and alcohol fermentation [19].
The benzene/toluene ratios for the emission fluxes were calculated for the sampling sites, with the following results: 0.24 ± 0.04, 0.33 ± 0.08 and 0.35 ± 0.10 for Rio Azul, La Carpio and El Huaso, respectively. For landfill biogas and areas near waste disposal sites, the reported values are between 0.1 and 0.3. These ratios depend on the type of residues buried in the waste disposal site, e.g., degreasers, paints, industrial solvents and cleaning products [20] [21].
A Spearman correlation analysis was performed with the results of each site to determine any meaningful relationships between the analyzed parameters. Figures 4-6 show the correlation matrices as heat maps, with the significant coefficients (p < 0.05) indicated without a cross. A strong correlation (>0.75) was found between the components of each VOC group, since they belong to the same compound family and probably share a common origin. Medium coefficients (around 0.5) were observed between ethanol and BTEX, which suggests a relationship during the biodegradation or disposal of the solid wastes. This behavior was observed at all the sampling sites.
Temporal Variations
The results obtained in each campaign were compared for each location to establish any difference due to the time of the year and the hour when the measurements took place. In the first campaign, the sampling activities were done during the early hours of the day (7 to 9 am), and in the second, after noon (12 to 2 pm). The Mann-Whitney U test was applied to each group of parameters with α = 0.05. The results of the statistical test show no significant differences between the two campaigns.
Spatial Distribution
For better spatial visualization and analysis of the data, an interpolation map was made for the methane fluxes at each sampling site. Inverse distance weighting (IDW) was the interpolation method selected because of its simplicity and the few data assumptions it requires. Figure 7 shows the interpolation maps for Rio Azul, La Carpio and El Huaso.
Rio Azul presented higher methane fluxes from the center to the southeast side of the waste disposal area. This particular location matches the most recent part, where solid wastes were buried 15 years ago, and it was also the only cell that was properly managed as a landfill, from 2002 to 2007. El Huaso is a younger landfill facility with two working cells, where only one was being used for waste disposal; the other was closed due to the conclusion of the first stage.
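For reference, IDW estimates each grid cell as a weighted average of the measured fluxes, with weights equal to the inverse of the distance raised to a power p (commonly p = 2). A minimal sketch, assuming Python/NumPy and a hypothetical function name (the study's maps were presumably produced in a GIS package):

```python
import numpy as np

def idw(xy_known, z_known, xy_query, power=2.0):
    """Inverse distance weighting: z(q) = sum_i w_i z_i / sum_i w_i,
    with w_i = 1 / d(q, x_i)^power."""
    xy_known, xy_query = np.asarray(xy_known, float), np.asarray(xy_query, float)
    d = np.linalg.norm(xy_query[:, None, :] - xy_known[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # guard: query point coincides with a sample
    w = d ** -power
    return (w * np.asarray(z_known, float)).sum(axis=1) / w.sum(axis=1)

# e.g., interpolate fluxes measured at 3 points onto 2 grid cells (meters):
z = idw([[0, 0], [50, 0], [0, 50]], [0.6, 1.4, 0.9], [[10, 10], [40, 5]])
```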
Figure 1. Normality plot for the methane flux data measured in the three final waste disposal sites.
Figure 4. Heatmap Spearman correlation matrix for surface emission fluxes measured at El Huaso.
Figure 5. Heatmap Spearman correlation matrix for surface emission fluxes measured at La Carpio.
Figure 6. Heatmap Spearman correlation matrix for surface emission fluxes measured at Rio Azul.
Figure 7. Interpolation map for the methane emission fluxes measured in the final waste disposal sites (panels: Rio Azul, La Carpio, El Huaso).
Table 1. Detection limits for GHG, VOCs and carbonyls determination in the surface flux emissions.
Table 2. Average surface flux emissions for GHG, BTEX, alcohols, and carbonyls in three final waste disposal sites in Costa Rica. DL: detection limit; NA: no value.
New Strategies to Improve Co-Management in Enclosed Coastal Seas and Wetlands Subjected to Complex Environments: Socio-Economic Analysis Applied to an International Recovery Success Case Study after an Environmental Crisis Enclosed coastal seas and wetlands are areas of high ecological value with singular fauna and flora, but several cases of environmental catastrophes in recent decades can easily be referenced in the international literature. The management of these natural territories is complex in developed countries, since they are usually subjected to intense human activity with a varied catalog of activities and anthropizing features that alter the balance of the ecosystem. In this article, the concept of the Socio-Ecological System (SES), used to diagnose and achieve sustainable cohabitation between human anthropization and natural values and based on the tool of GIS participatory mapping, is proposed as an innovative approach for the management and recovery of these complex areas. The article develops a comprehensive general methodology of spatial GIS diagnosis, planning, and co-management implementation between public and private stakeholders, combined with economic tools such as the Willingness to Pay (WTP) and the Cost Transfer Sector (CTS). This innovative approach is applied to the Mar Menor lagoon, an international and successful case study of environmental recovery on the Spanish Mediterranean coast. The coastal lagoon suffered an unprecedented eutrophication crisis in 2015, but it managed to recover in the summer of 2018 without the need to implement major structural measures. In this case study, several solutions to redress the current impacts are developed through a participatory process based on GIS mapping. Lastly, the discussion reflects on the concept of self-resilience of an ecosystem, based on the unexpectedly positive ending of the environmental crisis in the lagoon.
The Concept of Management for the Recovery of Natural Areas after an Intense Process of Anthropization
Enclosed coastal seas and wetlands are traditionally areas of high ecological importance with valuable fauna and flora [1][2][3]. They provide significant ecosystem services for biodiversity such as food, recycling and removal of dangerous chemicals, climate regulation, culture and landscape, and more [4][5][6]. Because of these factors, their natural and geographical conditions are also usually interesting for the development of human activities such as tourism, agriculture, industry, and more [7][8][9]. This sometimes makes managing these maritime-coastal territories very complicated, which results in well-known examples of environmental catastrophes caused by the impact of human anthropization in developed countries [10][11][12][13]. The recovery of these natural spaces after periods of long exposure to anthropogenic impacts or the emergence of specific environmental crises is usually very difficult.
As a first step, it is necessary to determine exactly the focus of the environmental problem, or which elements cause the anthropic transformation process, and to diagnose its clear origin [14]. This task may not prove easy. The origin of the detected phenomenon is not always easy to determine, or it does not respond to a single agent but is the sum of a combination of multiple factors [15,16]. Afterward, there is the problem of implementing the necessary measures to put an end to the current impacts. This issue may give rise to numerous, rather complex scenarios and usually creates several conflicts of interest among the affected stakeholders.
On the one hand, we have the problem of the economic cost involved in the task of recovering the affected natural area, given that the economic cost of reversing the process and restoring the area to its previous status is usually far greater than the cost of altering it [11,17]. On the other hand, the usual controversy arises from determining who should be responsible for carrying out this restoration process [18,19]. On many occasions, the fact that the administrations take on this process is very controversial because, although it does guarantee to a large extent a correct action, it fails to respect the "polluter pays" principle [20][21][22][23]. At other times, the social cost of completely eliminating the focus of anthropization is not politically acceptable, since it implies ending economic activities that provide many jobs or affects an activity with strong local roots [13,24].
This context usually requires the implementation of complex management systems for the recovery process, which must be sustainable over time and able to retain cohabitation with pre-existing human activities [25][26][27]. The formulas and mechanisms of these processes are not easy to standardize and require periodic updates in the research field, since the level of complexity of the impact on our society in developed countries is growing [28,29]. Moreover, the rapid incorporation of developing countries into the global production process and mass consumption has multiplied the number of existing cases, which makes scientific research a major issue for the future [30][31][32].
The scientific literature provides interesting cases of major environmental disasters, such as the Salton Sea (US) or the Thau lagoon (France), that have required great recovery plans. The management for the recovery of the 974 km² Salton Sea lake, which was a tourist picture-postcard location in the 1950s with sandy beaches and thousands of migratory birds, has proved very controversial. Its ecosystem is currently severely damaged, mainly due to the effects of agricultural activities, which leave it saturated with salts and pesticides [33]. The US authorities, through the Salton Sea Land Act of 1999, planned great works over 75 years seeking the lake's rehabilitation [34]. Nevertheless, the development of such a long-term proposal, the need for very important public investment during periods of economic crisis, and the absence of involvement of many of the private stakeholders in the process mean that, 20 years after the beginning of the plan, appreciable results are difficult to demonstrate.
The Thau coastal lagoon located in France represents another interesting case. This natural area of 70 km² of surface has been subjected to a process of diffuse anthropization for decades as a result of a varied catalog of human activities (mass tourism, agriculture, fishing, marinas, motor boating, etc.). In recent years, these activities have led to various imbalances in the ecosystem of the lagoon, such as oligotrophication and the emergence of picocyanobacteria and a toxic dinoflagellate [35], contaminated sediment [12], and algal blooms [36]. The difficulty in determining the exact causes that generated these alterations in the quality of the water prompted the development of the DITTY EU project in 2003 [37]. A decision support system (DSS) was developed for the lagoon, which gave end-users different scenarios according to financial, socio-economic, and environmental constraints. The different scenarios were then ranked according to the requirements of the end-users [38]. However, 15 years after the start of the project, alterations in the waters of the lagoon continue to exist. Furthermore, the causes of the agents generating those imbalances continue to be heterogeneous and sometimes even unexpected. The latest crisis forced the lagoon's closure in March 2017 due to an increase in the rates of coliforms above the levels authorized for human health. However, the cause (which was initially attributed to problems in urban sewage systems) was the defecation of the high population of birds in the lagoon, which denotes the level of complexity in the comprehensive management of the lagoon as a whole ecosystem [39].
These two examples, as well as several others [40][41][42], clearly illustrate the complex management problems existing in the environmental recovery of these natural areas, and the need to deepen research in this field to achieve satisfactory results in processes of this type that frequently need to be undertaken in such areas [43]. Despite the growing scientific interest in incorporating economic variables and the social perspective into environmental recovery processes [44][45][46], there are currently important gaps at the research level in this field. In addition to the usual difficulty in defining the scope of action and the responsibility that public and private stakeholders should assume in the process [47], we must also consider the increasing complexity of finding an optimal management framework [48]. In this context, the use of a socio-ecological framework implementing economic approaches and GIS methodologies for diagnosing and managing complex natural areas exposed to intense processes of diffuse or specific anthropization can prove very interesting [2,49,50]. The breakthrough achieved by GIS tools in recent years can bring new high-value approaches to the existing integrated management strategies in the fields of diagnostics [14,51], participation of stakeholders, and implementation of measures for the recovery of these areas facing environmental crises. This article presents a comprehensive socio-ecological framework implemented with GIS and cost-value methodologies for the Mar Menor lagoon in Spain, together with its latest results, which have become a case of some success in 2018 in the aftermath of an environmental crisis of international relevance in 2015.
The Mar Menor Case Study

The Mar Menor is a salt lagoon of 170 km² located on the Spanish Mediterranean coast (Figure 1). It is separated from the Mediterranean Sea by an old dune strip, which allows the exchange of water only through five natural channels called golas [52]. This configuration gives it a hypersaline character that has generated a valuable ecosystem with crystal-clear waters. This enables abundant marine fauna and flora to flourish, with singular species such as Pinna nobilis (the largest bivalve mollusk in the Mediterranean Sea) or autochthonous variants of the seahorse (Hippocampus). Human activity (present since the Roman era with small mining activities, fishing, or salt mines) has increased since the mid-20th century with the arrival of mass tourism. This has intensely urbanized a great deal of the coastal perimeter (which has gone from being 4.5% urbanized in 1956 to the current level of 76.6%, while the population grew from 15,000 inhabitants to the 600,000 reached with tourists in the summer), and recreational navigation developed with the construction of 10 marinas and the introduction of numerous motor boats [53]. From the 1980s, the construction of the water transfer network in Spain between the rivers Tajo and Segura contributed to a strong development of intensive agriculture around the lagoon [54]. From the 1990s, the Mar Menor became a natural, highly protected territory, being catalogued as a Special Protection Area (SPA), Special Area of Conservation (SAC), and Marine Protected Area (MPA) by the European network Natura 2000 and included as an international RAMSAR wetland, in addition to the development of other local and regional environmental protection figures.

In this period, anthropic activity along the perimeter of the lagoon began to be progressively restricted in issues such as the construction of houses, marinas, earthworks on beaches, etc., because these elements became the main issues of controversy and claims from social platforms and environmental groups. The only pending threat still to be resolved appeared to be the land drags arriving through the wadis from the nearby areas during flooding and their subsequent sedimentation at the bottom of the lagoon. However, the exceptional nature of a serious impact of these events, associated with the torrential rains of the Mediterranean climate during the autumn and winter, and the fact that they did not affect summer tourism, meant that this question was not considered a priority at the social and scientific level. From that time, we also saw an exponential growth in the number of jellyfish of the species Cotylorhiza tuberculata, whose proliferation reached its annual maximum in the summer with populations exceeding 100 million specimens [49]. This question did, however, generate greater social relevance, which forced local administrations to collect thousands of tons of these jellyfish periodically in order not to harm tourism (see Supplementary Materials).
An intense phenomenon of eutrophication bloomed in the lagoon in the summer of 2015, transforming the traditionally crystal-clear waters into a greener color. The turbidity of the water meant that the level of visibility, which had always surpassed at least two meters in the lagoon, did not even reach 10 centimeters (see Supplementary Materials). This loss in the transparency of the water prevented the sunlight from reaching the seabed, whose vegetation cover almost completely died in just one year. The phenomenon, which was accompanied by the disappearance of the jellyfish population, caused major social alarm. The situation, aside from the environmental issues, generated heavy economic losses for tourism through the loss of the blue flags on all the beaches since 2016.

In this context of environmental crisis, a project for the diagnosis and implementation of solutions was launched in 2015 through the European mechanism for financing an Integrated Territorial Investment (ITI) [18]. The present research has been carried out within the structure of this project. The investigation may be of great interest for researchers in the field of environmental recovery processes, since it has developed an integrated framework for diagnosis and management in which all the involved stakeholders participated. In this sense, an innovative socio-economic perspective for evaluating the viability of the recovery process is proposed. The analysis introduces the unusual point of view of a GIS participatory mapping approach to determine the stakeholders' degree of responsibility and participation in the problems and their solutions, which mixes the GIS approach with socio-economic concepts in the field of managing natural protected areas, such as the Willingness To Pay (WTP) and the Cost Transfer between Sectors (CTS). This enables a public-private partnership (PPP) framework to be established for managing the solutions in an optimized way.

The following sections will develop the methodological aspects carried out in the project. Then, the results obtained for the case study will be explained. Lastly, the successful and unexpected situation will be addressed in the discussion section.

Methodology

The lagoon recovery project is proposed in two major phases (Figure 2) to implement the process and gain the commitment of the stakeholders to establish a framework of sustainable cohabitation in the future. The methodology of the two phases will be described through two subsections. The philosophy of the project is not based on a sanctioning or punitive purpose, but on an approach that enables a sustainable framework of cohabitation between the existing activities and the natural values of the lagoon to be established. In this way, the stakeholders' commitment to the solutions adopted is guaranteed, as well as the maintenance of a new framework of co-responsibility that prevents the current situation from recurring in the future. However, this does not mean that the solutions adopted do not reflect, in a fair and balanced manner, the degree of responsibility for the current environmental crisis of each of the agents involved.
The first phase develops an integrated diagnosis that must be able to involve all stakeholders and determine, in a hierarchical manner, all the issues that affect the current situation of the lagoon. The second phase focuses on the proposal of measures and their implementation in the short, medium, and long term in a coordinated manner, through the establishment of a governance framework for the Mar Menor. Both phases are, in turn, developed under two approaches that run in parallel: a more operational approach and a strategic one. The backbone of this biphasic procedure is the so-called GIS participatory mapping process. This concept allows us to diagnose, analyze, and propose measures based on the objective and detailed spatial information provided by the GIS indicators. The framework facilitates reaching agreements and a technical-social consensus among the different stakeholders. This process leads to the phase of implementation and maintenance of measures through a system of public-private collaboration. This co-management system guarantees the minimum public investment for a comprehensive recovery of the lagoon, as well as the private investment necessary to sustainably maintain the measures. In this way, the execution of the measures is optimized, making the stakeholders co-responsible for the tasks for which they are most specialized. The process of implementing these different stages will be described in detail below.
Integrated Diagnosis

In the first place, the so-called Socio-Ecological System of the Mar Menor (hereinafter SESMM) must be configured to develop a decision panel formed by the legitimated stakeholders. Given that the situation of the Mar Menor goes beyond mere environmental problems, it is necessary to establish a framework that approaches the situation from a multidisciplinary focus in which it is possible to involve all stakeholders. Stakeholders have been selected following a DPSIR (Driving forces, Pressure, State, Impact, and Response) model [55] (Figure 3).

During the first stage (Driving Forces), a preliminary analysis has been carried out through which the most significant elements of the problem and the possible information prescribers have been determined. This analysis has allowed us, during the second stage (Pressures), to determine the stakeholders to be included in the SESMM through numerous interviews with the prescribers identified in the previous stage. The weighting of each one of the stakeholders has been obtained in the following stage (State) by measuring the degree of cross-references linked to the involvement with the causes of the problems, the possible solutions, the management responsibility, and the capacity of being objective evaluators and interested evaluators, according to the information from the interviews and following Formulas (1) and (2):

φ_i = λ_1·α_i + λ_2·β_i + λ_3·δ_i + λ_4·γ_i + λ_5·ε_i, (1)

Ψ_i = φ_i / Σ_k φ_k, with Σ_i Ψ_i = 1, (2)

where φ_i is the function that evaluates the number of references of prescribers for a stakeholder i in relation to its responsibility in the causes α of the problem, in its solutions β, in its management δ at a competence level, its capacity as an objective evaluator γ, and its capacity as an interested evaluator ε; λ_j is the importance factor of each of these variables and is a design parameter of the analysis; and Ψ_i is the function that evaluates the relationship between the number of references of this stakeholder and the number of total references.
Stakeholders must be involved in the four stages of the DPSIR model to be legitimated to participate in the recovery process. The development of a model based on interrelated thematic features, and not a finalist evaluation based on watertight analyses, allows us to prioritize the importance of the agents that must participate in the process. When configuring the decision panel of the SESMM, it will be necessary to take into account that a stakeholder may appear several times for the different topics analyzed in the DPSIR model. In this way, the relative weight of each of the stakeholders will be different, since each of them has a different weighting in the decision process.

Once the stakeholders have been established, it is important to guide the debate properly to obtain homogeneous results that allow the establishment of a comprehensive diagnosis. To implement a truly integrated diagnosis, all the stakeholders (or at least a sufficiently large majority consensus) must reach agreement on issues such as the scope of action of the problem, the degree of interrelationship of each of the stakeholders with the current situation, and the role of each of the agents in the future process. Agreements will be based on consensus, by establishing a qualified majority function Φ based on Formula (3), whose approval criterion is a design variable adjusted to the specific process.
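As an illustrative sketch only (assuming that each member's vote is weighted by its relative importance Ψ_i; the exact aggregation is adjusted to each specific process), Formula (3) can be written as:

Φ_j = Σ_{i=1}^{n} Ψ_i·I_i(j), with I_i(j) ∈ {0, 1},

so that decision j is adopted when Φ_j reaches the qualified-majority threshold agreed for the stage in question (e.g., the 80% level used in the diagnosis phase below),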
where Φ_j is the qualified majority function for each decision j performed by the n members of the SESMM through an approval function I that is a design variable modeled for each study. Four discussion groups, including members from the four main categories of selected stakeholders (social, scientific, administration, and business categories), comprise the decision panel. The analysis is conducted following the organizational criteria of Table A1. For the development of the process, the concept of GIS participatory mapping (GPM) is implemented as an innovative approach in the field of the diagnosis of issues in natural areas. This tool allows us to spatially assess the relationships and crossed links between environmental and anthropic issues by using GIS indicators. The analysis should provide detailed spatial data to reach agreements among the stakeholders about the spatial scope of action to implement the recovery of the lagoon and to hierarchize the areas of preferential work within it.

Proposal of Measures, Implementation, and Management

The phase of the proposal and implementation of measures was developed based on the results obtained during the integrated diagnostic stage. This phase has been carried out following two different approaches developed in parallel. On the one hand, a strategic approach seeks to establish a new governance framework that enables the sustainable coexistence of human activities with the natural values of the lagoon, to avoid new environmental crises. This framework must lay the foundations of the new regulatory context to ensure that the measures that will be implemented during the recovery process are maintained and that their management is balanced. On the other hand, an operational approach seeks to generate various programs that should face the issues detected during the diagnosis stage. These programs must develop and agglutinate the different executive actions for the recovery of the lagoon.

The solutions are proposed by stakeholders and grouped into various operational programs to differentiate those that can be implemented in the short, medium, or long term. In this sense, it is important to combine measures from the three groups in a balanced way. On the one hand, administrations may find it difficult to address an excessive number of short-term measures economically, which may generate tensions with the affected stakeholders. Conversely, an excessive number of very ambitious but long-term measures can convey to citizens the feeling that nothing is being done, and it can expose the recovery plan's execution to future political swings in the administration or to the impact of inevitable economic cycles. The measures proposed were previously validated by the SESMM by means of a majority function Φ and, subsequently, selected by a priority function χ that takes into account the importance, urgency, and motricity criteria for each proposal, according to Formula (4).
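As with Formula (3), the following is an illustrative sketch rather than the definitive expression (the symbols r_ij, s_ij, and m_ij are introduced here only to make the structure explicit, standing for member i's scores of proposal j for importance, urgency, and motricity):

χ_j = Σ_{i=1}^{n} Ψ_i·φ_ij, with φ_ij = ρ·r_ij + σ·s_ij + ω·m_ij,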
where χ_j is the priority function applied to each solution j selected after evaluation, which implements the weighting and correcting factors Ψ and φ applied to the vote of the n members of the SESMM. The weightings given to each member of the SESMM's vote are obtained through a function that implements three characteristics of each proposal: importance ρ, urgency σ, and motricity ω (by motricity we refer to the ability of a proposal to generate positive inertias in other solutions proposed for this or other problems). All three factors are specific design parameters of the analysis. The operational programs must take into account which stakeholders should manage their execution by using an Ω matrix selective process of implementation for optimal management (MSPIOM). The matrix will select the stakeholders responsible for the process by taking into account factors such as the degree of responsibility of the agent that generated the problem, which stakeholder is the most qualified for the implementation and correct maintenance of the operational programs, or what each stakeholder's competence at the administrative and legal level is. The list of possible stakeholders' participation will be crossed with different models of public-private configuration of the process by configuring the Ω_ij matrix. These models will calibrate private participation at the different stages of the lagoon recovery process (i.e., main/secondary/short-term/long-term investments, management/maintenance/exploitation costs, or other features, all of which are design parameters of the specific case study).
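As a minimal sketch of how such a matrix can be organized (the [0,1] participation encoding is an assumption made here for illustration; in the case study the entries aggregate several of the design variables listed above):

Ω = [Ω_ij], i = 1, …, N stakeholders, j = 1, …, M public-private configuration models (four in the case study below), with Ω_ij ∈ [0,1],

where each entry encodes the share of investment, management, or maintenance participation requested from stakeholder i under configuration j; each row thus summarizes the commitments of one stakeholder across the scenarios, and each column describes one complete public-private configuration of the recovery process.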
As indicated above, none of these questions is raised from a punitive or sanctioning point of view during the process; rather, the approach is entirely positive, to facilitate the participation and commitment of the stakeholders to the project. In this context, it must be borne in mind that, when defining possible responsibilities for the environmental crisis generated, there may be responsibilities both for actions by private agents and for inaction or lack of supervision by the public administrations. This legal determination may be very controversial (and not easy to establish when so many stakeholders are involved) and does not lie within the scope of this project. Clearly, the determinations of the project do not preclude any legal responsibilities that may derive from the environmental crisis generated, which must be settled in the pertinent judicial proceedings.

The selection of measures is made through the participatory process by applying the GIS mapping concept to clearly define the scope of each of them. This participatory process uses the same weighting factors that were finalized in the diagnostic phase, taking into account the level of importance of each of the stakeholders in the overall solution. In this case, at the level of resource allocation, the project is based on the philosophy of optimizing the financing and maintenance of the solutions proposed to recover the lagoon under a criterion of justice and responsibility (the "polluter pays" principle [56]), but, at the same time, from a realistic approach. This second approach should optimize the efficiency of managing the process and ensure a sustainable future cohabitation between the whole ecosystem and the pre-existing economic activities in the area (or at least a reasonable maintenance of them). To achieve this target, we calibrate economically the level of commitment of the stakeholders involved in the Ω_ij matrix. This process is performed by using Willingness To Pay (hereinafter WTP) models for benefiting environmental ecosystems and natural tourism resources, which are based on Newton et al. [57] and Haab & McConnell [58,59]. Both non-parametric (with Turnbull estimation [60]) and parametric (with logit/probit estimation [61]) approaches to evaluating WTP values will be used. We calculate non-parametric Turnbull lower and upper bound estimates of the mean WTP for the necessary investment for the lagoon's recovery and its sustainable maintenance. We also model the probability of answering "yes" to the WTP question as a function of the coverage level of the payment and stakeholder characteristics. The mathematical details of the formulation implemented to compose both estimates can be seen in Appendix B.
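As a compact reference for the two estimators just described (these are the standard forms following Haab & McConnell [58,59]; the exact implementation is detailed in Appendix B): given bid levels t_1 < … < t_M and F_j, the monotonized sample proportion of "no" responses at bid t_j, the Turnbull lower-bound estimate of the mean WTP is

E_LB[WTP] = Σ_{j=0}^{M} t_j·(F_{j+1} − F_j), with t_0 = 0 and F_{M+1} = 1,

while the parametric approach models the probability of a "yes" answer to bid t_i as

Pr(yes_i) = G(x_i′β − γ·t_i),

with G the standard normal (probit) or logistic (logit) distribution function and x_i the vector of stakeholder characteristics.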
In this context, we can also find, for instance, that actions generating a benefit to one sector may be causing damage to another sector, both of which can be monetized. In this sense, to propose the economic cause-effect interrelations between sectors of the global SESMM in a balanced way, Cost Transfer between Sectors (hereinafter CTS) models are implemented in the WTP evaluation. The model is calculated with Formula (10), based on the valuation data of Velasco et al. [62]:

κ_{1→2} = κ(ΔB_1, ΔL_2, WTP_1), (10)

where κ_{1→2} is the suitable CTS from sector 1 to sector 2, which takes into account the estimated increase in the profit ΔB_1 of the activity of sector 1 due to the detrimental effect on sector 2, the estimated increase in losses ΔL_2 of the activity in sector 2 as a result of activity 1, and the estimated mean willingness to pay WTP_1 of sector 1 to adapt its activity. The κ function is a design parameter adaptable to the specific context of the multiparametric analysis. The whole process can be summarized in the scheme of Figure 4.

Results

Following the criteria established in the methodology section, a three-phased process (integrated diagnosis, solutions selection, and management implementation) was developed, with the following results.

Integrated SESMM Diagnosis

The SESMM has been configured from the DPSIR model described in the previous section. This element includes all the legitimated stakeholders of the process, considering their proportional weight in the stated process by categories. The SESMM represents the decision panel through which the different stages of the global process will be completed by using the GIS participatory mapping framework. The resulting configuration of the SESMM, granting the following factors of importance to the implication in the causes (λ_1 = 0.25), solutions (λ_2 = 0.31), competence management (λ_3 = 0.27), objective evaluation (λ_4 = 0.23), and interested evaluation (λ_5 = 0.19), can be seen in Table 1. The members included in the four categories are distributed randomly among the four groups described in Table 1 to configure the diagnosis decision panel of the SESMM. It must be taken into account that the values of Ψ_i and the number of members are not numerically correlated. The Ψ_i values respond only to the calculation according to Formulas (6) and (7), while the evaluation of the number of members belonging to the solution panel is given by the qualitative interpretation of compliance with the conditions from Section 2.1 to constitute a legitimized stakeholder. Even so, we must not forget that each weighting coefficient applies to each one of the different stakeholders. Therefore, the decisions regarding the diagnosis and proposal of solutions will in any case be affected by these correction coefficients, regardless of the number of stakeholders participating in the SESMM.

To develop the strategic and operational diagnoses described in Table 1, the multi-parametric analysis has been carried out using the GIS participatory mapping approach. In this stage, various agreements have been reached among the stakeholders at the strategic level to first determine the nature of the problem and what the desired future scenario should be. To this end, convergence proposals have been agreed upon that, in the absence of hard-to-reach unanimities, generate qualified majorities of the SESMM (in this case, the function Φ implies reaching 80% of individually weighted agreement within each of the four categories and the four groups for approval). At the strategic level, a global diagnosis has been agreed upon that assumes the main role of intensive agriculture and its contributions (surface and underground) of nitrates to the lagoon as the fundamental source of the environmental crisis that emerged in 2015 as a result of an intense process of eutrophication. Even so, the multidisciplinary nature of the process of diffuse anthropization of the Mar Menor and its surroundings, in which many other agents intervene, is also recognized. To reverse the current situation, agreement was reached regarding the need for a governance framework that regulates pre-existing activities so that they are sustainable, but from a positive approach
(i.e., from the perspective of a non-punitive regulatory framework that controls current activities while additionally implementing the necessary investments and measures to eliminate or reduce their impact, so that they remain economically viable). It is important to remember that this new regulatory framework must be able to overcome the existing inadequacies of the current regulation in the territory, which suffered an important environmental crisis despite the Mar Menor area being highly protected by various environmental figures of the Natura 2000 Network.

At the operational level, the scope of action of this new regulatory framework and the preferential areas of action where the necessary measures and investments will be implemented have been agreed upon. For this, a GIS analysis was developed from a multi-parametric approach at the administrative, hydrological, geological, hydrogeological, land-use, and flood-risk levels. The criteria agreed upon to achieve both results through the GIS participatory mapping process have had various levels of qualified consensus. At a geographical level, it has been observed that the scope of administrative action cannot be limited to the municipal scope, since several municipalities are affected, and it is not considered necessary to go beyond the regional scope. In addition, the municipalities affected are not only the coastal ones, since at the hydrological level the area of influence of the lagoon extends to several interior municipalities. This issue is repeated at the geological, flood-risk, and hydrogeological levels, with the last of these being the biggest determinant of the three. Hydrological and hydrogeological analyses both reveal a strong correlation with the phenomenon of nitrate contribution to the lagoon from intensive agriculture, even though a consensus has not been reached on which of these routes is the predominant one.

Regarding the objective established at the operational level of defining preferential action areas, four different areas have been differentiated through the GIS participatory mapping process. The lagoon itself is undoubtedly the critical zone. However, it is an area that already has a maximum level of environmental protection thanks to the Natura 2000 Network, which, however, failed to prevent the current environmental crisis. It is, therefore, necessary to act in the three annexed zones (coastal perimeter, Mediterranean marine area, and the annexed agricultural area of influence called Campo de Cartagena), establishing a regulatory framework of land use that protects the lagoon and implementing the measures that enable the situation generated in 2015 to be reverted. The consensus agreed upon for the strategic and operational diagnosis is summarized in Table 2 and Figure 5.

Selection of Operational Measures and Strategic Governance Framework

Once the main objectives and the framework of the diagnostic phase had been established, the proposal phase of the measures was developed, both at a strategic and an operational level. In the operational field, the proposals validated by the SESMM through a majority function Φ (qualified majority of 50% required for each category group) are represented in Figure 6, separating short, medium, and long-term proposals. For the selection of the solutions among the validated proposals, the function χ was applied to the proposal with the highest weighted score in each group, with the following correction factors: importance ρ = 0.40, urgency σ = 0.35, and motricity ω = 0.25.
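As a purely hypothetical numerical illustration of the sketch of Formula (4): a member scoring a proposal at 0.8 for importance, 0.6 for urgency, and 0.4 for motricity would contribute an individual score of

φ_ij = 0.40·0.8 + 0.35·0.6 + 0.25·0.4 = 0.63,

which is then weighted by that member's Ψ_i and summed across the SESMM to obtain the priority χ_j of the proposal.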
The four categories of the previous section have been grouped into two groups in order to more clearly differentiate the behavior of the stakeholders in the proposal of measures in Figure 6. The first group, which we would call social stakeholders, basically includes those that formed the first and fourth categories of the previous section (business groups, agrarian associations, environmental groups, citizens' platforms, etc.). The second group is of a more technical character and includes the public administrations as well as the scientific institutions. This does not imply that the first group does not contain people with a technical background, but rather that the interests that legitimize the presence of these groups in the SESMM are more social in nature, compared with the interests of the second group, which are more technical.

Figure 6. Actions selected by the SESMM (in red), including the average value granted by the "social" and "scientific" stakeholders (upper and bottom line of each box), and the maximum and minimum values obtained for each of the proposals. Note: black box when the average value of social/business stakeholders is higher than that of scientific/administrative stakeholders; white box otherwise.
The operative actions that attract the greatest consensus among the stakeholders are the need to implement a plan of "zero discharges" to the lagoon (L2 in Figure 6), the implementation of urban planning actions to reduce the impacts of flooding (M3), the renovation of coastal infrastructures in a way that generates less impact on the sedimentary dynamics (L5), and the control of the underground flow of nitrates to the lagoon (S1). Other actions, such as the construction of storm tanks to reduce the contribution of phosphates and fats from the washing of streets and urban areas (M4), the execution of natural barriers in private agricultural areas to decrease sediment contributions during floods (S5), or the construction of the so-called "green filters" (M7, large areas of lagooning for the natural filtering of surface runoff), obtained a significantly lower consensus. Among the actions not selected because they did not fulfill the requirements established in function Φ, we can highlight the expansion and dredging of the communication channels between the Mar Menor and the Mediterranean Sea to facilitate the renewal of the waters of the lagoon (L1), due to the risk involved for the balance of the ecosystem. Similarly, imposing a tourist tax destined for the conservation of the lagoon (S2) was rejected because of the negative impact it would have on tourism (like the conservation taxes S3 and S4, which were also deemed inadequate measures). We must bear in mind that some of these actions, such as the so-called "Zero Discharge Plan", involve short, medium, and long-term actions of a structural and developmental nature (for example, the construction of a large collector that gathers all the contributions from agriculture and takes them to an authorized discharge point for treatment (S6) must be complemented by a highly branched secondary network that reaches all the large plots of intensive nitrogen agriculture, included in M4, M5, and L8). The optimal implementation, management, and cost sharing of these solutions, and their analysis as main or secondary investments, will be addressed in the following section using the WTP method approach.

It is interesting to observe how the social and political stakeholders fundamentally opt for more aggressive, simple, and short-term solutions, while the latter group is more oriented to proposals that are structural, complex, and focused on the medium and long term. This question will be addressed more thoroughly in the discussion section. In reference to this question, we must also bear in mind that the separation and differentiation of the proposals into the short, medium, and long term refers exclusively to their implementation. In this sense, a solution established as short-term does not necessarily mean that its results will be observed in the short term (although it will clearly be easier for results to be observed if the execution period of a solution is shorter).

At the strategic level, as concluded in the diagnostic phase, it was agreed that a new global governance framework should be developed by creating an entity that ensures the coordination between the different administrations involved in the SESMM and the social participation. This new organizational structure should also count on the scientific support of the different stakeholders included in the SESMM. This new structure will be responsible for overseeing the implementation and maintenance of the strategic and operational measures.
Moreover, in line with the determinations of the diagnostic phase, a new regulatory framework has been implemented for each of the four preferential work areas determined in the final scope of action. This specific regulatory framework will be developed at the technical detail level by the new inter-administrative entity, with the social participation of the SESMM members and the technical support of the Mar Menor scientific committee. However, the guidelines of this new regulatory framework have been agreed upon within the SESMM by implementing the GIS participatory mapping process. In this way, a subsequent conflict at the socio-political level is avoided, and the risk of litigation at the legal level by particular stakeholders is limited (both factors could subsequently delay the implementation of the regulatory framework and, therefore, the effectiveness of the comprehensive recovery plan for the lagoon). For the approval of the guidelines, Φ functions of a qualified majority have been used (60% agreement for each of the four groups in Table 2). As the most significant example of the results obtained in the implementation of the regulatory framework in the preferential work areas, the guidelines of the regulatory framework of the lagoon's agricultural area of influence are included as supplementary material. Through the GIS participatory mapping process, three zones with different levels of restriction on agricultural use have been established: a first, closest zone with a high level of restriction on agricultural use, a second with medium restriction levels, and a third, further away, where merely precautionary measures are imposed on agricultural activity to avoid the surface and underground nitrate contributions. These measures are not only focused on the agricultural sector. They also seek, for example, to stop the erosion that facilitates the surface flows of earth that settle in the lagoon after floods, to control the arrival of heavy metals coming from the disused mines located to the south of the lagoon, and to regulate urban planning development. The result of the GIS participatory mapping process for this part of the regulatory framework can be seen spatially summarized in Figure 7.
Figure 7. Regulation framework established for the agricultural area annexed to the lagoon with three levels of regulation: area 1 with a high level of restrictions on agricultural use (zone in green), area 2 with medium restrictions (blue zone), and area 3 with low restrictions (yellow zone).

Implementation and Management of the Process Applying MSPIOM, WTP, and CTS Methods

To determine who should implement, manage, and maintain the different solutions selected in the previous section, the parametric and non-parametric MSPIOM process described in the methodology section has been applied. We have introduced four different models in the simulation scenarios, taking into account five variables (Appendix C). The models propose two extreme scenarios (investment and management/maintenance of solutions fundamentally public or private) and two mixed scenarios (one with greater public participation and another with greater private participation). Within the extreme models, it must be noted that 100% purely public or private investment and management/maintenance scenarios have not been considered, since these situations would not be possible in practice due to legal or technical issues.

The sensitivity of the stakeholders included in Ω to participating in the different investment and management models for the recovery of the lagoon has been evaluated through the parametric and non-parametric estimation of Willingness To Pay. For this purpose, the level of incidence on each one of the sets of short, medium, and long-term solutions selected in the previous section has been previously analyzed at a statistical level. In this part of the study, in order to have a critical mass at the statistical level, stakeholders were asked to provide a series of opinion data from their associates (since most of them were representatives of associations) through surveys.
To estimate the upper and lower thresholds of private participation in the cost of recovery, the mean WTP value of the set of Ω was evaluated by the non-parametric Turnbull method. Subsequently, the results obtained were also corrected by implementing the sensitivity level of the stakeholders to introduce the CTS variable. On the other hand, to assess the sensitivity of WTP to the different management configurations proposed, we model the probability of a "yes" response using parametric probit and logit specifications. The four models were tested based on two combinations of covariates, namely the selected solutions with more economic implications and the different groups of stakeholders. The results obtained are summarized in Table 3.

If we observe the results for the different models and the possible combinations of stakeholder participation, taking into account their impact on the solutions and their predisposition to participate, we can extract interesting considerations. It is evident that, in the vast majority of cases, there is a significant aversion on the part of private stakeholders to participating in the main investments to be made in the short term. However, these same stakeholders also show an important willingness to participate in the management and maintenance of the solutions implemented, and a much lower aversion to contributing to the implementation of more secondary investments.

From the point of view of the different types of stakeholders, we can observe that, among those economically involved, it is fundamentally the agricultural stakeholders who present the greatest predisposition to participate in the process. This is quite reasonable given their important link with the causes of the main problem, but also because of the economic impact that a more restrictive regulation could have on the economic viability of their agricultural activity. There is also a strong refusal to recognize and implement factors of rebalancing between sectors by cost-benefit transfer, with only a certain predisposition on the part of the agricultural sector to recognize some damage to the tourism sector. This context is consistent with the strong refusal to implement direct or indirect permanent taxes for the conservation of the lagoon. As expected, a greater predisposition can also be observed among the agents not affected or less economically involved in the matter (social agents or the general public) and among those most affected (business agents), which highlights the importance of demarcating this aspect in this type of participatory process in order to shape realistic long-term recovery strategies.

In the context of efficiency in management, it should be noted that the most extreme models (1 and 4) are those that globally obtain the worst scores. It is also interesting to observe that it is more efficient for the structuring actions of the process to remain within a public management framework and, thus, avoid possible conflicts of interest between those responsible for implementing the solutions and the causes of the problems observed.

Therefore, if we focus on efficiency criteria and the predisposition to participate of the different stakeholders, it is clear that the optimal solution environment would combine an initial investment in the main actions (main infrastructures of the Zero Discharge Plan, plan against flooding, underground run-off collection network, etc.)
of a mostly or completely public character with a secondary investment (surface drainage networks in agricultural spaces, pipelines to authorized discharge points, coastal infrastructures less aggressive with the sedimentary dynamics, etc.) having a majority private component, together with a mixed public-private management and maintenance framework for the main investments and a mostly private one for the secondary investments.

Discussion and Conclusions

The work carried out demonstrates the importance of three common issues in environmental crises in territories subjected to processes of diffuse anthropization. In the first place, it is important to correctly delineate the cause or causes that lead to the environmental problems and to identify the stakeholders related to them. As has been seen, the origin of the problems may not necessarily be in direct contact with the affected environment, or even geographically close to it. In addition, there may not be a single focus of the problems; rather, we can find what we refer to as a phenomenon of "diffuse anthropization" (human impact on the environment in which there is no clear cause-effect relationship). This question is increasingly addressed in recent scientific works in other areas [37,63–65]. In the present case, it is evident that intensive agriculture, with its contribution of nitrates to the lagoon, was the main agent of the eutrophication phenomenon that led to the 2015 environmental crisis. However, the problems of the Mar Menor go beyond the surface nitrate inputs of agriculture.

In this context, we find the second issue that must usually be addressed in this type of situation: the proposal of a solution for environmental recovery. Such complex problems cannot be approached from the merely traditional scientific dimension; the social factor, the administrative viability, and the economic impact on the existing productive fabric must all be considered. In this sense, the socio-ecological system (SES) represents an innovative approach for developing comprehensive solutions to this kind of environmental problem, which has recently begun to be consolidated in the scientific context [66,67]. In the case studied, this methodology represents a significant advance toward multidisciplinarity in approaching the problem with respect to the traditional scientific analyses previously carried out in the Mar Menor. All these earlier studies oriented solutions in a more segmented manner, toward watertight proposals in the fields of ecology, biology, geology, or chemistry [68–71]. In addition, the development of a complete model with majority functions Φ and priority functions χ to establish a fair and balanced framework in decision-making represents an important evolution with respect to traditional open participatory processes. In those processes, issues such as the level of responsibility for the problems, the degree of knowledge of them, or the legitimacy of each one of the stakeholders are not usually addressed and evaluated scientifically.
In line with this new approach, the development of methodologies to optimize the management of the proposed solutions and ensure the commitment of the stakeholders involved with the environmental recovery strategies is essential. In this sense, the incorporation of evaluation methods for the implementation of a public-private partnership (PPP) in the development and management of environmental solutions is a significant advance in this field. The development of PPPs in these situations is both mandatory to preserve the criterion of justice (the "polluter pays" principle) and necessary to develop an optimized management model that assigns to each agent of the process the responsibility it is best placed to perform in each case. In addition, one of the main problems detected in natural areas affected by similar environmental crises is usually the difficulty in implementing solutions that are economically realistic in the medium and long term. In the Mar Menor case study, the implementation of the process optimization matrix Ω and the evaluation, through parametric and non-parametric systems, of the stakeholders' WTP is a very innovative approach in the environmental field.

The results obtained have enabled us to establish the degree of involvement of the different stakeholders in each of the solutions and their thresholds of participation in the economic section. This question is especially relevant if we seek to develop a realistic PPP process that allows us to recover the lagoon environmentally and to maintain a sustainable cohabitation with the pre-existing economic activities. The methodology used at the parametric WTP level has allowed us to propose four scenarios offering differing degrees of public-private collaboration. The results obtained confirm that both those models in which the great totality of the weight of the process falls on the public administration and those in which it falls on the private sector are the least realistic and viable. The range of maximum WTP thresholds obtained with the non-parametric Turnbull method allows us to assume that the two mixed PPP models proposed in the parametric method are viable. These results are consistent with those obtained for similar problems in other cases of major environmental crises in developed countries [10,37], but they do present differences that may be understood as advances in the field with respect to smaller cases [72,73] or to countries with lower environmental demands [74].

A different question is the variant of the CTS implemented in the WTP as a rebalancing mechanism between private stakeholders. The results obtained here are limited, owing to the scarce predisposition of the stakeholders to implement this type of economic rebalancing mechanism. Only the agricultural stakeholders show a certain predisposition to recognize some transfer of costs between sectors, since their activity may harm the tourism sector. We, therefore, find ourselves in a context that is still too immature to deal with this issue decisively. However, this question of the indirect transfer of costs between sectors will sooner or later become a variable to be addressed in the design of econometric models capable of valuing complex environmental contexts by taking into account the "beneficiary pays" principle in a realistic way. In this sense, the more in-depth development of the CTS variable can be an interesting future line of research.
Lastly, we must reflect on what has happened in the lagoon during recent months. The phase of starting the implementation of measures through public-private co-management coincided with the unexpected early recovery of the lagoon in the summer of 2018, without truly structural measures actually having been implemented. There are currently clear symptoms of recovery in the lagoon, which reaches levels of visibility prior to the 2015 eutrophication crisis. Nevertheless, the fragility of this ecosystem and the uncertainty surrounding the fact that this recovery has occurred without the material implementation of structural measures open new questions. This new situation has forced us to rethink the analysis of the project and to reconsider the importance of concepts such as the intrinsic resilience of natural areas and the fragility of a recovery process over time.

The waters of the Mar Menor regained during 2018 levels of visibility and transparency of more than four meters in many areas. As can be observed in Figure 8, the reduction of the level of turbidity during the second half of 2018 has allowed the recovery of a part of the natural marine seabed killed by the environmental crisis. It is, thus, evident that the process had not reached the "point of no return". The current context is very likely a consequence of the cessation of the discharges (legal and illegal) that intensive agriculture had been carrying out into the lagoon for years. This situation was due to the insufficient (or inefficient) regulatory framework in the area, which had focused its measures on activities (such as coastal urbanization) located within the area protected by the Natura 2000 Network, underestimating the effect of other activities found beyond its boundaries (such as the effects of agriculture). Therefore, the recovery derives to a large extent from the multidisciplinary and integrated diagnosis made and from the social pressure placed on agriculture during the last three years within the framework of the SESMM. The strong proliferation of the population of the jellyfish Cotylorhiza tuberculata during the years prior to the eutrophic crisis of 2015, due to the contribution of nutrients to the lagoon from agriculture, was the visible symptom of a problem that had been poorly addressed due to the absence of a correct diagnosis.
Despite the current good news, as can be seen in Figure 8, the speed of seabed regeneration is much slower than the speed of its destruction, which gives us an idea of the evolution of the global process. Even so, this new situation has raised some doubts in the process of the integral environmental recovery of the lagoon. It is, therefore, important in long-term recovery processes to factor in the environmental resilience of the damaged ecosystem itself. There have been some approaches to this question in the case of coastal lagoons and wetlands, such as in Reference [51]. Nevertheless, this question is currently a field in which future lines of research should be developed to refine the models and make them even more realistic. In any case, given the lack of better and more advanced elements of judgment at present, it is evident that the possible conjunctural variation of the global context must imply neither a reduction in the determination of the stakeholders to participate in the process of environmental recovery nor a reduction in the implementation of the scientifically established measures to solve the problems diagnosed. In the spring of 2017, some improvement in water transparency parameters was also seen. However, that situation proved conjunctural because, with the sharp rise in temperatures in the region in the summer of that year, the waters again lost all visibility and reverted to a green tonality. The present recovery seems to be far more stable. Even so, it is important to implement the set of proposed measures to permanently reverse the situation, because agriculture is neither the only cause of the phenomenon of diffuse anthropization, nor can the final stability of the process of environmental recovery of the lagoon be reduced to a simple reduction of water turbidity.
As a conclusion to all the above, we must highlight the main ideas achieved that allow the implementation of "new strategies" for the recovery of this type of natural space in other areas of the world in the aftermath of an environmental crisis. As has been observed, it is essential to diagnose the problem correctly if we aim to propose the correct solution. For this, the diagnosis must be multidisciplinary and address the problem both from the point of view of a focalized origin and from the perspective of diffuse anthropization phenomena without a concrete origin. In this field, the use of GIS tools combined with a socio-ecological approach can be essential to establish a framework that addresses the problem from a comprehensive perspective. On the other hand, the implementation of the solutions can be neither approached only from the scientific point of view nor implemented only by the public administration. A balanced socio-ecological framework capable of integrating all stakeholders to obtain consensus solutions must be established for the diagnosis of problems, the proposal of solutions, and the management of those solutions. For this, it is necessary to approach the process from a non-punitive perspective while respecting the "polluter pays" and "the beneficiary pays" principles of sustainability. At this point, the combination of GIS tools, a socio-ecological framework, and socio-economic analysis methods such as the WTP and the CTS has proved to be very useful in obtaining proposals that are rigorous, viable, and optimized in their management.

Table A1. Organization of the analysis panel performed in the SESMM for the diagnosis.

Figure 2. GIS participatory mapping process and co-management strategy for Mar Menor recovery.

Figure 3. DPSIR model developed for the diagnosis process.

Figure 4. Summarized organization of the process for proposal selection, optimization management, and public-private financing evaluation for the recovery of the lagoon.

Figure 5. GIS participatory mapping approach: (a) Administrative delimitation in the area, (b) hydrological analysis, (c) geological analysis, (d) hydrogeological analysis, (e) land use analysis, (f) flooding risk analysis, to define (g) the final scope of action as the selected overlapping administrative context, and (h) the four preferential areas of work (see Supplementary GIS material online for more detail).
Figure 6. Actions selected by the SESMM (in red), including the average value granted by the "social" and "scientific" stakeholders (upper and bottom line of each box), and the maximum and minimum values obtained for each of the proposals. Note: black box when the average value of social/business stakeholders is higher than that of scientific/administrative stakeholders; white box otherwise.

Figure 7. Regulation framework established for the agricultural area annexed to the lagoon with three levels of regulation: area 1 with a high level of restrictions on agricultural use (green zone), area 2 with medium restrictions (blue zone), and area 3 with low restrictions (yellow zone).

Figure 8. Evolution of the plant cover of the seabed of the Mar Menor (2014-2016-2018). Source: Spanish Institute of Oceanography and environmental group ANSE.

Table 1. Conformation of the SESMM resulting from the DPSIR analysis.

Table 2. GIS participatory mapping criteria and Φ reached in the strategic and operative diagnosis.

Table 3. Turnbull non-parametric and logit-probit parametric estimation of WTP of stakeholders.
A prospective cohort study of the use of domiciliary intravenous antibiotics in bronchiectasis

Background: We introduced domiciliary intravenous (IV) antibiotic therapy in patients with bronchiectasis to promote patient-centred domiciliary treatment instead of hospital inpatient treatment. Aim: To assess the efficacy and safety of domiciliary IV antibiotic therapy in patients with non-cystic fibrosis bronchiectasis. Methods: In this prospective study conducted over 5 years, we assessed patients' eligibility for receiving domiciliary treatment. All patients received 14 days of IV antibiotic therapy and were monitored at baseline/day 7/day 14. We assessed the treatment outcome, morbidity, mortality and 30-day readmission rates. Results: A total of 116 patients received 196 courses of IV antibiotics. Eighty courses were delivered as inpatient treatment, 32 as early supported discharge (ESD) and 84 as domiciliary therapy. There was significant clinical and quality of life improvement in all groups, with resolution of infection in 76% in the inpatient group, 80% in the ESD group and 80% in the domiciliary group. Morbidity was recorded in 13.8% in the inpatient group, 9.4% in the ESD group and 14.2% in the domiciliary IV group. No mortality was recorded in any group. Thirty-day readmission rates were 13.8% in the inpatient group, 12.5% in the ESD group and 14.2% in the domiciliary group. The total number of bed days saved was 1,443. Conclusion: Domiciliary IV antibiotic therapy in bronchiectasis is clinically effective and was safe in our cohort of patients.

INTRODUCTION
Bronchiectasis is a chronic debilitating respiratory condition. Patients suffer from daily cough, excess sputum production and recurrent chest infections because of inflamed and permanently damaged airways. It is a common condition, with an incidence of 1 in 1,000 in Scotland. Management of bronchiectasis consists of airway clearance and prompt treatment of infections with antibiotics, administered intravenously in more severe cases. There is evidence that patients with bronchiectasis who have more frequent exacerbations have a worse quality of life. 1 The current British Thoracic Society (BTS) guidelines for non-cystic fibrosis bronchiectasis recommend antibiotics for exacerbations that present with an acute deterioration (usually over several days) with worsening local symptoms (cough, increased sputum volume or change of viscosity, increased sputum purulence with or without increasing wheeze, breathlessness, haemoptysis) and/or systemic upset. 2 Over 720 bronchiectasis patients in Edinburgh, UK, are monitored in secondary care. They frequently utilize primary and secondary care resources through consultations, A&E attendances and inpatient admissions. The economic burden is significant: inpatient admissions alone for bronchiectasis in NHS Lothian cost just over £1 million per year. There is a worldwide drive for the domiciliary management of chronic respiratory diseases like COPD. Outpatient intravenous (IV) therapy has gained widespread acceptance because of its advantages over inpatient hospitalization, including fewer absences from school or work, less disruption of family life, decreased costs and high patient satisfaction. [3][4][5][6][7] Outpatient and domiciliary parenteral antibiotic therapy programs are well-recognized and accepted modes of providing healthcare in the community worldwide, but the UK has been relatively slow to adopt this practice. [8][9][10]
Although domiciliary IV antibiotic therapy has already been implemented in cystic fibrosis, this has not been done in non-cystic fibrosis bronchiectasis, where the cohort of patients is middle-aged and elderly with comorbid conditions, compared to a relatively younger cohort in cystic fibrosis. The aim of our study was to evaluate the efficacy and safety of domiciliary IV antibiotic therapy for treating exacerbations of non-cystic fibrosis bronchiectasis.

Domiciliary IV antibiotic team
All cases were reviewed by the domiciliary team, comprising one respiratory physician leading the bronchiectasis service in NHS Lothian, one specialist registrar, one clinical nurse specialist, one physiotherapist and one respiratory pharmacist. Patients are referred to the bronchiectasis team by completing an outpatient IV antibiotic referral form specifying the antibiotic to be prescribed for 14 days. If the patient was unwell and required hospital admission, he or she was taught how to self-administer IV antibiotics while an inpatient and, if competent, was given early supported discharge (ESD). Patients were taught to self-administer IV antibiotics via a cannula, midline catheter or a totally implanted port; the midline catheter was the mode used in the majority of patients. A Pall filter is attached to the cannula to aid self-administration. Clear instructions were given to patients on how to flush IV access, make up antibiotics and secure access once antibiotics were administered. All procedures were done aseptically. Patients were taught by the clinical nurse specialist, had to demonstrate the technique of administering antibiotics to the nurse specialist, and were deemed eligible for domiciliary therapy only once the domiciliary team were satisfied with the technique and safety measures. Patients were provided with antibiotics, an EpiPen in case of anaphylaxis, flushes, syringes, needles, a sharps bin, bandages and a patient information booklet. The patients returned at 1 week to be reviewed by the clinical nurse specialist and were provided with a fresh supply of equipment and antibiotics to complete the course of antibiotic therapy. All patients returned for a final visit on day 14 to return left-over equipment and to finish treatment assessment. Patients were given the contact details of the domiciliary team to contact if they had any problems with IV access, adverse reactions or worsening of symptoms. If a patient presented with problems out of clinic hours, the patient would phone the respiratory ward at the Royal Infirmary of Edinburgh and would then be reviewed by the on-call team. The clinical nurse undertook the routine management of outpatients on IV antibiotics and monitored their blood, lines/access devices, sputum, spirometry, incremental shuttle walking test and progress/condition. The medication aspects were supported by the ward pharmacist. For safety reasons, there had to be a unified consensus from the domiciliary team that the patient was suitable for domiciliary IV antibiotic therapy. Patients were refused by the domiciliary team if they had any of the following features: unable to cope at home; development of cyanosis or confusion; breathlessness, with a respiratory rate ⩾ 25/min; circulatory failure; respiratory failure; temperature ⩾ 38 °C; unable to take oral therapy. If requiring initial hospital admission, patients were considered for ESD once they had shown none of the above adverse features for 24 h or longer.
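The exclusion criteria above amount to a simple screening rule. As a hedged illustration only (the data class and function below are hypothetical, not part of the study's protocol), the rule could be encoded as follows:

```python
from dataclasses import dataclass

@dataclass
class Assessment:
    copes_at_home: bool
    cyanosis_or_confusion: bool
    respiratory_rate: int       # breaths/min
    circulatory_failure: bool
    respiratory_failure: bool
    temperature_c: float
    tolerates_oral_therapy: bool

def eligible_for_domiciliary_iv(a: Assessment) -> bool:
    """Screen against the adverse features listed above; any single
    feature rules out domiciliary therapy (or ESD within 24 h)."""
    adverse = (
        not a.copes_at_home
        or a.cyanosis_or_confusion
        or a.respiratory_rate >= 25
        or a.circulatory_failure
        or a.respiratory_failure
        or a.temperature_c >= 38.0
        or not a.tolerates_oral_therapy
    )
    return not adverse

# Example: a stable patient who copes at home and tolerates oral intake.
print(eligible_for_domiciliary_iv(Assessment(True, False, 18, False, False, 37.1, True)))
```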
Choice of antimicrobial and drug delivery
All patients received 14 days of IV antibiotic therapy, using antibiotics chosen according to sensitivity testing as decided by the respiratory physician. Antibiotics were administered by inserting an antecubital peripheral long line catheter.

Study design
Patients were recruited prospectively over 5 years, from December 2006 to December 2011, from the Royal Infirmary of Edinburgh, UK. All patients requiring IV therapy for an acute exacerbation were assessed by the domiciliary IV team for consideration of 14-day domiciliary IV therapy or ESD with domiciliary IV therapy.

Outcome measures (at the start and end of exacerbation)
Outcome measures recorded were treatment outcome (measured by forced expiratory volume in 1 s (FEV1), forced vital capacity (FVC), incremental shuttle walk test, 24-h sputum volume, sputum microbiology, markers of inflammation (white cell count, C-reactive protein and erythrocyte sedimentation rate), and health status questionnaires (Leicester Cough Questionnaire 11 and St George's Respiratory Questionnaire 12)), morbidity, mortality and 30-day readmission rates.

Patients
Inclusion criteria. Patients were included if they satisfied the following criteria: (1) an established radiological diagnosis of bronchiectasis (high resolution CT scan of the chest); (2) an exacerbation defined by acute deterioration (usually over several days) with worsening local symptoms (cough, increased sputum volume or change of viscosity, increased sputum purulence with or without increasing wheeze, breathlessness, haemoptysis) and/or systemic upset; (3) a need for IV antibiotics because of failure to respond to oral antibiotics, a pathogen requiring IV antibiotic therapy, or a severe exacerbation necessitating inpatient admission. Patients considered suitable for domiciliary IV treatment or ESD had to meet the following requirements: (1) committed and able to attend the hospital for assessments; (2) able to demonstrate that they could safely administer IV antibiotics; (3) home circumstances appropriate for treatment; and (4) no evidence of potential IV drug abuse.

Lung function
FEV1, FVC and the FEV1/FVC ratio were recorded according to national guidelines. 13

Incremental shuttle walk test
Patients walked a 10-m course mapped out by two cones. The speed gradually increased each minute. The test was stopped if the patient was too breathless or failed to attain the desired speed. The distance walked was recorded in metres. 14

Health status
Patients were asked to complete both the Leicester Cough Questionnaire and the St George's Respiratory Questionnaire at all review time points. The Leicester Cough Questionnaire has 19 items divided into three domains: physical (8 items), psychological (7 items) and social (4 items). The total severity score ranges from 3 to 21, where a lower score indicates a greater impairment of health status due to cough. The minimum clinically important difference for the Leicester Cough Questionnaire is 1.3 units. 11 We have validated the Leicester Cough Questionnaire for use in non-cystic fibrosis bronchiectasis. 15 The St George's Respiratory Questionnaire has 50 items divided into three main domains: symptoms, activities and impacts. The total score ranges from 0 to 100, where a higher score indicates a poorer health-related quality of life (HRQoL). The minimum clinically important difference for the St George's Respiratory Questionnaire is 4 units. 12
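Since the two questionnaires run in opposite directions (a higher LCQ score is better, a higher SGRQ score is worse), a small hedged sketch of how the quoted minimum clinically important differences translate into a per-patient check may be useful; the function name and scores below are hypothetical.

```python
# MCID thresholds quoted above.
LCQ_MCID = 1.3    # Leicester Cough Questionnaire (higher = better)
SGRQ_MCID = 4.0   # St George's Respiratory Questionnaire (lower = better)

def clinically_important_change(lcq_day1, lcq_day14, sgrq_day1, sgrq_day14):
    """Flag whether day-1 to day-14 changes exceed each questionnaire's MCID."""
    return {
        "LCQ_improved": (lcq_day14 - lcq_day1) >= LCQ_MCID,
        "SGRQ_improved": (sgrq_day1 - sgrq_day14) >= SGRQ_MCID,
    }

print(clinically_important_change(12.0, 15.1, 55.0, 48.0))
```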
Blood samples
Fifteen millilitres of venous blood were collected, and the white cell count, erythrocyte sedimentation rate and C-reactive protein were measured.

Complications
All patients receiving ESD or 14-day domiciliary IV therapy received an information booklet and an emergency contact number should they develop any complications.

Successful therapy
Therapy was considered successful if patients felt back to their usual clinical state and there was objective improvement in sputum purulence and/or a reduction in 24-h sputum volume and/or sputum bacterial clearance. 2

Statistical analysis
All data were analysed using GraphPad Prism (GraphPad Software, San Diego, CA, USA). For demographic and clinical variables, data are presented as median (interquartile range) for continuous variables and n (%) for categorical variables unless otherwise stated. Comparison of changes within the groups was done using the Wilcoxon signed-rank test. An analysis of variance was used to compare the groups. Data were complete for all events. A P-value of <0.05 was considered statistically significant for each analysis.

RESULTS
Patients were divided into three groups based on where the antibiotic courses were delivered: those who received IV inpatient antibiotic therapy for the 14 days, those who were allowed ESD, and those who received domiciliary IV antibiotic therapy for the 14 days. There were 80 courses delivered as inpatient treatment for 14 days, 32 as ESD and 84 as the full 14 days of domiciliary therapy (Figure 1). The total patient number represents the total number of antibiotic courses, as one patient may have received more than one course of antibiotics. The median (interquartile range) duration of inpatient treatment in the ESD group was 8 days (7-11). In all, 74.3% needed IV antibiotics because of failure to respond to oral antibiotics, 10.2% had a pathogen requiring IV antibiotic therapy and 15.5% had severe exacerbations necessitating inpatient admission.

Patient selection
A total of 196 patients were referred and thereby assessed for domiciliary IV antibiotic therapy.

Baseline characteristics
Of the total 80 episodes admitted to hospital for 14 days, a total of 36 patients received IV therapy on one or more occasions. For the ESD group, of the 32 episodes, 23 patients received IV therapy on one or more occasions. For the domiciliary group, of the 84 episodes, 52 patients received IV therapy on one or more occasions. The characteristics of the individual patients in the cohort are shown in Table 1. The three groups differed at the start of IV therapy by age, gender, smoking status, comorbidities, pre-therapy FVC and exercise capacity. The group receiving inpatient IV therapy was older, had more patients with coexistent COPD, and had fewer patients with coexistent asthma and previous malignancy. In addition, this group had lower baseline spirometry and exercise capacity compared to the ESD or domiciliary group.

Sputum microbiology
Sputum was sent for qualitative microbiology in all patients prior to starting IV antibiotic therapy (see Table 2). In all groups, the most common microorganism identified was Pseudomonas aeruginosa. The groups add up to more than 100% as some patients had more than one pathogen isolated.

Treatment used
Ten different types of IV antibiotics (Table 3) were used alone or in combination.
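As a hedged sketch of the statistical analysis described above, the paired within-group day-1 versus day-14 comparison and the between-group analysis of variance can be reproduced with SciPy; the CRP values below are invented, not the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical paired CRP measurements (mg/L) for one group.
crp_day1 = np.array([86, 54, 120, 33, 77, 95, 61, 48])
crp_day14 = np.array([12, 9, 30, 6, 15, 22, 11, 8])

# Paired, non-parametric comparison within one group.
w = stats.wilcoxon(crp_day1, crp_day14)
print(f"Wilcoxon signed-rank: statistic={w.statistic}, p={w.pvalue:.4f}")

# One-way ANOVA comparing hypothetical day-14 values across the
# inpatient, ESD and domiciliary groups.
f, p = stats.f_oneway([12, 9, 30, 6], [15, 22, 11], [8, 14, 10, 9])
print(f"ANOVA: F={f:.2f}, p={p:.4f}")
```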
Domiciliary IV therapy produces similar clinical outcomes compared to inpatient therapy
There was significant improvement in FEV1, FVC, the incremental shuttle walking test, 24-h sputum volume, sputum bacterial clearance, parameters of inflammation (white cell count, C-reactive protein and erythrocyte sedimentation rate), the Leicester Cough Questionnaire score and the St George's Respiratory Questionnaire score from day 1 to day 14 in all groups (Table 4).

Domiciliary IV therapy safety (in our study cohort)
Morbidity was recorded in 13.8% in the inpatient group, as compared to 9.4% in the ESD group and 14.2% in the domiciliary IV group. The main morbidities developed in all the groups were haemoptysis, heart failure and stroke. No mortality was recorded in the groups. Thirty-day readmission rates were similar in all groups, and the reason for readmission was further exacerbation of bronchiectasis in all episodes recorded. Side effects with antibiotics, including allergies, developed in 5% in the inpatient group, as compared to 6.3% and 4.7% in the ESD group and the domiciliary IV group, respectively. There were no IV access-related complications in the inpatient group, in comparison to 6.3% in the ESD group (50% had line blockage and in 50% the line fell out) and 3.6% in the domiciliary IV group (60% had line blockage, 20% line sepsis and in 20% the line fell out). These results are summarized in Table 5.

Bed days saved
Together, the domiciliary IV therapy and ESD groups saved a total of 1,443 bed days. This freed up inpatient beds, which could be reallocated.

Subgroup analysis of individual patients
Of the total 196 episodes of IV antibiotics, a total of 111 patients received treatment. There were 36 individual patients who were admitted as inpatients, of whom 16 required more than one course of IV antibiotics. In the ESD group, 23 individual patients received IV antibiotics, of whom 6 had more than one course of antibiotics. In the domiciliary group, 52 individual patients received IV antibiotics, of whom 19 had more than one course of antibiotics. The first event of each individual patient was used. There were similar results in individual patient outcomes (Table 6) as compared to the outcomes of all episodes (Table 4), in all groups.

DISCUSSION
Main findings
We have introduced domiciliary IV antibiotic therapy in patients with bronchiectasis in a tertiary centre in the UK, using a team to promote patient-centred domiciliary therapy instead of inpatient treatment. Although domiciliary IV antibiotic therapy is common in cystic fibrosis and other infectious diseases, this is the first large study reporting IV antibiotic therapy in bronchiectasis. This prospective study found that, in the patients assessed as suitable by the home IV team, domiciliary IV antibiotic therapy in bronchiectasis is clinically effective and safe. This study has shown that domiciliary therapy with IV antibiotics results in similar clinical outcomes compared to inpatient therapy. There was significant improvement in exercise capacity, spirometry, sputum volume reduction, markers of systemic inflammation, microbial clearance and health-related quality of life at the end of therapy in both groups. A subgroup analysis of individual patients, in all three groups, showed similar outcomes to the analysis of all episodes.
Morbidity was recorded in 13.8% in the inpatient group, as compared to 9.4% in the ESD group and 14.2% in the domiciliary IV group. No mortality was recorded in any of the three groups. Readmission rates at 30 days were <15% in all groups. Side effects with antibiotics, including allergies, were similar (<7%) in all groups. There were no IV access-related complications in the inpatient group, in comparison to 6.3% in the ESD group and 3.6% in the domiciliary IV group. No cases of Clostridium difficile were recorded in the groups. This study shows that, in our centre, domiciliary IV (both ESD and de novo domiciliary) antibiotic therapy is a safe and efficient model of health-care delivery in the treatment of exacerbations of non-cystic fibrosis bronchiectasis. Over the past 5 years, 116 episodes of inpatient admission were avoided by this service, releasing 1,443 bed days, which could be reallocated. It is known that the acquisition costs of antibiotics for domiciliary IV therapy can sometimes exceed inpatient alternatives. 16 However, in our centre, the antibiotic regimen was the same for both groups. Hence, direct costs, including antibiotics, saline flushes and equipment, did not come at any higher cost than that needed for inpatient therapy.

Interpretation of findings in relation to previously published work
Owing to the lack of research in non-cystic fibrosis bronchiectasis, data are often extrapolated from studies done in cystic fibrosis to guide therapy. To date, there has been only one blinded, randomized controlled trial investigating the role of domiciliary IV antibiotics versus hospital treatment in cystic fibrosis-related bronchiectasis. 17 This was done in 19 patients, and all patients had at least 2-3 days of treatment in hospital before being started on domiciliary IV antibiotic treatment. We accept that our study was not a randomized trial, but this is a large study done in a tertiary centre in the UK, where we have been able to demonstrate that domiciliary IV antibiotics for acute exacerbations can be given safely and effectively. Ideally, domiciliary treatment should be as effective as inpatient treatment, and clinical improvement should not be sacrificed on the basis of economic considerations and convenience. 18 Domiciliary treatment allows patients to be treated at home, which should translate into better quality of life and decreased risks of inpatient errors and nosocomial complications. 19 In addition, domiciliary IV antibiotics provide the opportunity to deliver more patient-centred care than in the traditional inpatient setting. 20 All these benefits support the aim of the UK healthcare quality strategy, with its emphasis on patient-centred and ambulatory care. 20 We have been able to establish this service with careful risk assessment and management, and have been able to demonstrate that this service is safe and clinically effective. This is of significant importance in bronchiectasis, where prompt treatment of exacerbations with appropriate antibiotics is one of the key aims in managing this chronic condition.

Strengths and limitations of this study
To the best of the authors' knowledge, this is the first large prospective cohort study assessing the safety and efficacy of domiciliary IV antibiotic therapy in non-cystic fibrosis bronchiectasis in the UK, where patients are middle-aged and elderly and have pre-existing comorbid conditions.
This study provides data that will help both primary and secondary care teams consider domiciliary therapy for bronchiectasis if such a service is available in their centre. We accept that this study is not a randomized controlled trial. Inpatient or domiciliary treatment was at the discretion of the patient and the domiciliary team. Also, we did not assess the cost-effectiveness that domiciliary treatment would have for the NHS. However, the main aim of this study was to establish a domiciliary IV antibiotic service for exacerbations in bronchiectasis and to demonstrate the safety and efficacy of this service in our centre.

Implications for future research, policy and practice
A prospective randomized trial would consolidate our research findings. In patients deemed suitable for domiciliary treatment, domiciliary treatment, either fully or as ESD, is safe and efficacious.

Conclusion
In patients assessed as suitable by the home IV team, domiciliary IV antibiotics in bronchiectasis is clinically effective and was safe in our cohort of patients.
A SYNTACTIC ANALYSIS ON THE ENGLISH TRANSLATION OF SURAH AL QIYAMAH USING TREE DIAGRAMS

In this research, the researcher analyzed the syntactic patterns of all of the verses (ayah) in the English translation of surah Al Qiyamah, which has 40 ayah, using tree diagram theory so as to be able to draw and see the hierarchical syntactic structure of the verses in the surah. After analyzing the data, the researcher found twenty-four syntactic patterns in the surah: sixteen patterns of sentences and eight patterns of phrases. The phrase patterns are: a) the pattern of the noun phrase appears in one position, b) the patterns of the verb phrase appear in three positions, c) the patterns of the adjective phrase appear in two positions, d) the pattern of the prepositional phrase appears in one position, and e) the pattern of the complement phrase appears in one position.

INTRODUCTION
In linguistics, the study of the sentences of a language is called syntax. Syntax focuses on the ways in which words are placed and combined together as one sentence. Once we have structural knowledge of English sentences, it is easy to get the meaning and the purpose of a certain sentence or utterance correctly. It helps us to avoid or decrease misunderstanding in speaking and reading comprehension.

In this research, the researcher chose tree diagrams as a means to analyze verses in the English translation of surah Al Qiyamah by T.B Irving. The tree diagram is a popular theory of syntactic analysis, and it is very interesting to be able to analyze sentences using tree diagrams. Tree diagramming is sentence analysis using the internal hierarchical structure of sentences as generated by a set of rules. There are some advantages to using tree diagrams. Bornstein (1977, p. 48) states that the sentence is the basic unit of syntactic analysis, and that the parts of phrases and their subparts (parts of speech) are easier to see in a tree diagram. Finch (1998, p. 107) states that the advantage of tree diagrams is that they enable us to see at a glance the hierarchical structure of sentences.

All of the translations of the holy Qur'an have grammatical rules. They contain phrases, clauses, and sentences. Each of them must follow the structure of the language in order to avoid misunderstanding between translators and readers. It is impossible for a translation to exist without any structural form. In structural form, the message can be accepted easily and the intention can be understood effectively. For example, in the English translation of surah Al Qiyamah, the first ayah reads "I do swear by Resurrection Day", with the formula of the diagram: S → NP + VP. The pattern of the sentence consists of a noun phrase (the pronoun "I") followed by a verb phrase which consists of the verb "do" followed by a verb phrase which consists of the verb "swear" followed by a prepositional phrase that consists of the preposition "by" followed by a noun phrase which consists of N1 "Resurrection" and N2 "Day".

Syntax
In the Oxford Advanced Learner's Dictionary (1995, p. 1212), syntax is defined as the rules of grammar for the arrangement of words into phrases and of phrases into sentences, while in Webster (1988, p. 1359), syntax is defined as a branch of linguistics which studies the arrangement of and relationship among words, phrases and clauses forming sentences. Bornstein (1977, p. 246) explained that syntax comprises the processes by which words and grammatical categories are combined to form phrases, clauses and sentences in a language. Then, Chomsky (1966, p. 1) said that syntax is the study of the principles and processes by which sentences are constructed in particular languages. A linguistic level such as phonemics, morphology, or phrase structure is essentially a set of descriptive devices which are made available for the construction of grammars; it constitutes a certain method for representing utterances.

Laurel (2000, p. 167) states that the study of syntax is the analysis of the constituent parts of a sentence: their form, positioning, and function. Constituents are the proper subparts of a sentence. In addition, Herman and Haegeman (1989, p. 3) said that syntax or syntactic analysis may be defined as: (a) determining the relevant component parts of the sentence, and (b) describing these parts grammatically. The component parts of a sentence are called constituents. In other words, Matthews (1974, p. 154) explained that syntax is concerned with words' external functions and their relationships to other words within the sentence.

Based on the definitions stated by the experts above, the researcher concludes that syntax is a branch of linguistics which is very important when analyzing sentences. By using syntactic analysis, we are able to determine the sentence patterns of a sentence, such as N, VP, V, DET, and AUX. Furthermore, it can be concluded that syntax is the science which studies the arrangement of and relationship among words, phrases, and clauses forming sentences or larger constructions based on grammatical rules.

Transformational Grammar
There are many definitions of transformational grammar from various sources and experts. According to Webster's New World College Dictionary (1996, p. 1420), transformational grammar is a system of grammatical analysis that posits the existence of deep structure and surface structure and uses a set of transformational rules to derive surface structure forms from deep structures. In addition, Bornstein (1977, p. 97) states that the term transformation is given a specialized technical meaning: it is a grammatical process that operates on a string of words and symbols with a particular constituent structure and converts it into a new string with a new derived constituent structure. Also, Matthews (1974, p. 177) says that the rules of correspondence (rules relating deep and surface structure) are transformations, and it is from these that transformational syntax takes its name.

Chomsky (1972, p. 17) states that the grammar of a language must contain a system of rules that characterizes deep and surface structures and the transformational relation between them, and that we should use grammatical transformations of the sort described to convert deep structures to surface forms. Moreover, Chomsky (1972, p. 155) also states that the grammar of a language must allow for infinite use of finite means, and this recursive property is assigned to the syntactic component, which generates an infinite set of paired deep and surface structures.

Deep and Surface Structure
In his book, Yule (2008, p. 87-88) explains deep and surface structure by showing two superficially different sentences, as follows:
Charlie broke the window.
The window was broken by Charlie.
Although these two sentences differ in their surface structure, they share the same underlying deep structure.

Phrase Structure Rules
According to Bornstein (1977, p. 39-46), in Transformational Grammar (TG) the phrase structure rules are illustrated by means of tree diagrams, called phrase-markers, which show the hierarchical structure of a sentence. Transformational grammarians define the verb as the head word in the verb phrase (Bornstein, 1977, p. 77).

Adjective Phrase
The most common environment where an adjective phrase (AP) occurs is in "linking verb" constructions, as in:
Masruroh feels _______.
Expressions such as the following can occur in the blank space above: happy, uncomfortable, terrified, sad, proud of him, proud to be his student, proud that she passed the exam, etc. Since these all include an adjective (A), we can safely conclude that they all form an AP. Looking into the constituents of these, we can formulate the following simple phrase structure rule for the AP:
AP → A (PP/VP/CP)
The verb sounded requires an AP to follow it, but in example (c) we have no AP. In addition, observe the contrasts in the following examples:
a. *The employees seem (want to leave the meeting).
b. The employees seem (eager to leave the meeting).
c. *Sari seems (know about the theater).
d. Sari seems (certain about the theater).
These examples tell us that the verb seem combines with an AP, but not with a VP.

Adverb Phrase
Another phrasal syntactic category is the adverb phrase (AdvP), as exemplified in the following: soundly, well, clearly, extremely, carefully, very soundly, almost certainly, very slowly, etc. These phrases are often used to modify verbs, adjectives, and adverbs themselves, and they can all in principle occur in the following environments:
a. Cici behaved very ________.
b. They worded the sentence very _________.
c. He treated her very _______.
Phrases other than an AdvP cannot appear here. For example, an NP such as the student or an AP such as happy cannot occur in these syntactic positions. Based on what we have seen so far, the AdvP rule can be given as follows:
AdvP → (AdvP) Adv

Preposition Phrase
Another major phrasal category is the preposition phrase (PP). PPs like those in the following generally consist of a preposition plus an NP: from Seoul, in the box, in the hotel, into the soup, with Sarah and her cat, under the table, etc. These PPs can appear in a wide range of environments:
a. Rina came from Seoul.
b. They put the book in the box.
c. Wahdah, Atma, and Kiya stayed in the hotel.
d. The fly fell into the soup.
One clear case in which only a PP can appear is the following: the squirrel ran straight/right. The intensifiers straight and right can occur neither with an AP nor with an AdvP:
a. The squirrel ran straight/right up the tree.
b. *The squirrel ran straight/right angry.
c. *The squirrel ran straight/right quickly.
From the examples above, we can deduce the following general rule for forming a PP:
PP → P NP
The rule states that a PP consists of a P followed by an NP. We cannot construct unacceptable PPs like the following: *in angry, *into sing a song, *with happily.

Noun Phrase
In transformational grammar, Bornstein (1977, p. 242) states that a noun is defined as the name of a person, place, thing, or quality. A noun phrase is a group of words in which the head word (main word) is a noun or pronoun. A noun phrase can consist of a single noun or pronoun, or of a noun or pronoun with modifiers (Bornstein, 1977, p. 55). A noun phrase can take forms such as the following:
a. NP → N (broom, blanket)

Modal
According to Laurel (2000, p. 199), the second item in the verb group is the modal (M). The modal auxiliary is the first independent element in the verb group, but it need not be present because the modal is optional. If a modal is present, it carries tense (however, past tense forms of the modals do not usually express past time). The form of the auxiliary (be or have) or main verb which follows the modal is the basic stem form. Since the modal is optional, it is placed within parentheses, which are used to indicate that an item may or may not be chosen:
Aux → Tense (M)
The negative form indicated by the word "not" appears under the auxiliary, and the helping verbs (be, have) which precede the word "not" also come from the auxiliary. Generally, modal auxiliaries express a speaker's attitudes or "moods". For example, modals can express that a speaker feels something is possible or probable, necessary, permissible, or advisable; they can convey the strength of these attitudes.

Bornstein (1977, p. 40) states that the Aux (auxiliary) can be rewritten as a modal auxiliary (can, must, will) or one of the "helping verbs" (do, be, have) of traditional grammar, but it also includes tense (present or past) as its first element. Tense must appear under the auxiliary, and it must be rewritten as either present or past. This is indicated by placing these two items within brackets; when brackets are used, one and only one item from within the brackets must be selected:
Tense → {Present, Past}
The next item to appear under the auxiliary is the modal. Because it is optional, it is placed within parentheses. If the optional modal is chosen, tense is joined to the modal, and the sequence "pres + M" leaves the form of the modal unchanged:
Lailatun will leave (Aux → pres + M)
If a modal or another auxiliary ("have", "be", or "do") is not present, the tense ending is attached to the main verb:
Lailatun leaves (Aux → pres)
When the present tense is selected, a form change on the verb appears only for the third person singular (he, she, it), and not at all for modals; when the past tense is selected, a form change is produced for modals and for main verbs for all persons. The next item to appear under the Aux is the perfect aspect, which introduces "have" plus the past participle ending into the sentence. Since the perfect aspect is optional, it is placed within parentheses. If the perfect aspect is chosen and there is no modal, tense attaches to "have", and the past participle ending is placed on the main verb:
Lailatun has left (Aux → tense (have + -en))
The last item to appear under the Aux is the progressive aspect, which introduces "be" plus the present participle ending into the sentence. Like the perfect aspect, it is optional and is placed within parentheses:
Aux → tense (M) (have + -en) (be + -ing)
If the progressive aspect is chosen while the modal and the perfect aspect are not, tense is attached to "be", and the present participle ending is placed on the main verb:
Lailatun is leaving (Aux → pres + (be + -ing))

Sentence Types
According to its purpose, a sentence can be classified into four kinds: declarative, imperative, interrogative, and exclamatory.
a. Declarative Sentence. This kind of sentence makes a statement and ends with a period (.). Example: Every person, man or woman, faithful to Islam, must make the "hajj" pilgrimage at least once during his lifetime, unless hindered by poverty, ill-health or another reasonable cause.
b. Imperative Sentence. It gives a command or makes a request. Most imperative sentences end with a period; a strong command ends with an exclamation point.
There are also three types of sub-clause, named differently according to their function in the sentence: 1) the noun clause, 2) the adjective clause, and 3) the adverbial clause.

Tree Diagrams
Based on Bornstein's theory (1977, p. 39), a tree diagram shows the hierarchical structure of the sentence. The sentence is considered the basis of the syntactic system. Instead of beginning with actual sentences, we begin with directions for generating or producing structural descriptions of sentences, which are set forth in phrase structure rules. Each rule should be interpreted as an instruction to rewrite or expand the symbol on the left of the arrow as the sequence on the right. In S → NP + VP, S stands for sentence, NP for noun phrase and VP for verb phrase. The item on the left dominates the elements on the right. Bornstein starts with S as the highest level and works down to lower levels until she reaches the maximally specific (terminal) level, where no additional symbols can be written. This process is called the derivation of the sentence. Bornstein further lists some of the common symbols used in phrase structure rules. If one node is immediately dominated by another, it is called a daughter node. If two nodes are immediately dominated by the same node, they are called sister nodes. In the diagram of S → NP + VP, the nodes NP and VP are daughter nodes of S and sister nodes to each other: NP is the left sister, whereas VP is the right sister.
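To make the derivation described above concrete, the following is a minimal sketch (not part of the original study) of how the phrase structure rules used for ayah 1 can be encoded and drawn with Python's NLTK toolkit. The grammar below is an illustrative assumption limited to this one sentence, not the paper's full rule set.

```python
import nltk

# Phrase structure rules for ayah 1, "I do swear by Resurrection Day",
# following the pattern described in the paper: S -> NP + VP, with an
# embedded VP and a PP headed by "by".
grammar = nltk.CFG.fromstring("""
    S   -> NP VP
    NP  -> PRO | N N
    VP  -> V VP | V PP
    PP  -> P NP
    PRO -> 'I'
    V   -> 'do' | 'swear'
    P   -> 'by'
    N   -> 'Resurrection' | 'Day'
""")

parser = nltk.ChartParser(grammar)
for tree in parser.parse("I do swear by Resurrection Day".split()):
    tree.pretty_print()  # prints the tree diagram as ASCII art
```

Running this prints the same hierarchy the paper derives by hand: S dominating an NP (the pronoun "I") and a VP, with "swear" heading an inner VP whose PP contains the noun phrase "Resurrection Day".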
METHOD
This research is descriptive qualitative research: it presents its data and results in the form of words rather than series of numbers. It does not present the data and results in the form of digits or statistics, but yields them in the form of a description of phenomena.

Subject
The subject of this research is the English translation of surah Al Qiyamah by T.B Irving.

Object
The object of this research is a syntactic analysis of all of the ayah in the English translation of surah Al Qiyamah by T.B Irving, which are analyzed using the theory of tree diagrams proposed by Diane Bornstein.

Data and Source of Data
The data used in this research are the English translation of surah Al Qiyamah, and the data source in this study is the English translation of surah Al Qiyamah translated into English by Thomas Ballantine Irving. The researcher took an English translation of a surah of the Holy Qur'an because it has grammatical rules and all of its verses are analyzable. Moreover, the structures of the sentences in the surah have different patterns, so the researcher wants to analyze the syntactic patterns of the sentences in depth.

Data Collection Procedure
The steps taken to collect the data were: finding T.B Irving's English translation of surah Al Qiyamah, reading the English translation of the surah, and presenting it as the data.

Data Analysis Procedure
After collecting the data, several steps were carried out as follows. First, drawing tree diagrams of all of the ayah in surah Al Qiyamah. Second, analyzing the data using Bornstein's theory of tree diagrams. Third, describing the diagrams descriptively. Fourth, finding and listing the syntactic patterns used in the data. Fifth, consulting the results of the analysis with an expert, Dr. Saifuddin Ahmad Husin, MA. Finally, drawing the final conclusion.

FINDINGS AND DISCUSSION
Based on the results, there are twenty-four syntactic patterns found in the English translation of surah Al Qiyamah, which are described in several parts according to their types, as follows.

Datum 1 (ayah 1): I do swear by Resurrection Day
The formula of the diagram: S → NP + VP
The pattern of the sentence consists of a noun phrase (the pronoun "I") followed by a verb phrase which consists of the verb "do" followed by a verb phrase which consists of the verb "swear" followed by a prepositional phrase that consists of the preposition "by" followed by a noun phrase which consists of N1 "Resurrection" and N2 "Day".

Datum 2 (ayah 2): as I swear by the rebuking soul
The formula of the diagram: PP → P + NP + VP
The pattern of the prepositional phrase consists of the conjunction "as" followed by a sentence which consists of the pronoun (noun phrase) "I" followed by a verb phrase which consists of the verb "swear" followed by a prepositional phrase which consists of the preposition "by" and a noun phrase consisting of the article "the" as a determiner, followed by a noun phrase which consists of the adjective "rebuking" and the noun "soul".

Another ayah, "while you neglect the Hereafter", is a complement phrase which consists of the complement "while" followed by a sentence. The sentence in this ayah consists of the pronoun "you", the verb "neglect", the determiner "the", and the noun "Hereafter".

The ayah "Looking toward their Lord" has the formula NP → N + PP. This ayah is a noun phrase consisting of the noun "Looking", which is a verb-ing form, followed by a prepositional phrase consisting of the preposition "toward", the determiner "their", and the noun "Lord".

Datum 25 (ayah 25): thinking that some impoverishing blow will be dealt them
The formula of the diagram: VP → V + CP
This ayah is a verb phrase consisting of the verb "thinking" followed by a complement phrase consisting of the complement "that" and a sentence. The sentence consists of the determiner "that" and the noun "impoverishing blow" as a noun phrase, followed by a verb phrase which consists of the auxiliary "will be", the verb "dealt", and the noun "them".

The ayah "though closer to you and even closer" is an adjective phrase consisting of the conjunction "though" followed by an adjective phrase. The adjective phrase consists of the adjective "closer", the complement "to", the pronoun "you", the conjunction "and", and the adjective "even closer".

CONCLUSIONS AND SUGGESTION
Optimization of an Efficient Semi-Solid Culture Protocol for Sterilization and Plant Regeneration of Centella asiatica (L.) as a Medicinal Herb

The present study investigates the effects of different concentrations and types of plant growth regulators (PGRs) and media (MS, Duchefa) on the growth and development of Centella asiatica in semi-solid culture. In addition, a protocol was determined for the successful sterilization of C. asiatica explants prepared from field-grown plants highly exposed to fungal and bacterial contamination. Results for the sterilization treatments revealed that applying HgCl2 and Plant Preservative Mixture (PPM) together with cetrimide, bavistin and trimethoprim after washing with tap water, followed by the addition of PPM to the medium, produced a very satisfactory result (clean culture 90 ± 1.33%), and TS5 (Decon + cetrimide 1% + bavistin 150 mg/L + trimethoprim 50 mg/L + HgCl2 0.1% + PPM 2% soak and 2 mL/L in medium) was hence chosen as the best method of sterilization for C. asiatica. The synergistic combination of 6-benzylaminopurine (BAP) and 1-naphthaleneacetic acid (NAA) at concentrations of 2 mg/L and 0.1 mg/L, respectively, in Duchefa medium compared with MS induced the optimal percentage of sprouted shoots (93 ± 0.667), number of shoots (5.2 ± 0.079) and nodes (4 ± 0.067) per explant, leaves per explant (14 ± 0.107) and shoot length (4.1 ± 0.67 cm). Furthermore, the optimum rooting frequency (95.2 ± 0.81%), number of roots/shoot (7.5 ± 0.107) and mean root length (4.5 ± 0.133 cm) occurred for shoots cultured on full-strength MS medium containing 0.5 mg/L indole-3-butyric acid (IBA). In this study, the acclimatized plantlets were successfully established with almost 85% survival. The findings of this study establish an efficient medium and PGR concentrations for the mass propagation of C. asiatica. These findings would be useful in the micropropagation and ex situ conservation of this plant.

Introduction
Centella asiatica is a valuable medicinal and aromatic herb which is spread throughout the tropical and sub-tropical regions. It has its origin in the wetlands of Asia, such as in China, India, and Malaysia. It contains a number of triterpene saponins (e.g., asiaticoside), sapogenins, glycosides, alkaloids (hydrocotylin) and flavonoids with therapeutic properties [1]. The demand for C. asiatica is now met from the natural population, which has led to its gradual reduction. Tissue culture techniques can play a significant role in the rapid multiplication of elite genotypes and the germplasm conservation of C. asiatica. Meanwhile, in vitro plant regeneration has been conducted in C. asiatica via callus culture from leaf explants [2] and somatic embryos [3]. A previous study by Karthikeyan et al. [4] on nodal segments of C. asiatica found that BAP and IBA are appropriate for shooting and rooting, respectively. In this study, a procedure for high-frequency plant regeneration from nodal segments is described. The experimental aims of these investigations were to examine the effects of different concentrations and types of plant growth regulators (PGRs) and media on the growth and development of C. asiatica. Moreover, previous research involving C. asiatica revealed a high percentage of contamination during the culture establishment stage [5], so this study also determined a protocol for the successful sterilization of C. asiatica explants prepared from field-grown plants that are typically highly contaminated with fungi and bacteria.
Nowadays, the most common method employed for the micropropagation of C. asiatica involves the propagation of shoots via a solid or semi-solid system. Such a semi-solid system has successfully contributed to improved multiplication yields, and it has become gradually more important in improving productivity and reducing the time taken to multiply commercially important material.

Sterilization Protocol
Prior to the initiation of culture, the nodal explants were collected from well-established field-grown plants. The sterilization procedures that were initially followed (TS1 and TS2) did not include the use of HgCl2 and PPM treatments; therefore, a high percentage of the explants (60 ± 2.15%) was found to be contaminated (Figure 1). However, when HgCl2 and PPM together with cetrimide, bavistin and trimethoprim were included after washing with tap water, followed by the addition of PPM to the medium, a very satisfactory result was obtained (clean culture 90 ± 1.33%), and TS5 was hence chosen as the best method of sterilization for C. asiatica (Figure 2). Tiwari et al. [5] reported that the initiation of C. asiatica nodal culture proved to be difficult due to heavy fungal and bacterial contamination. Thus, the nodal explants were first soaked in a solution of a systemic fungicide (bavistin) and an antibiotic (trimethoprim) to avoid heavy contamination, and subsequently, nearly 80% of the cultures were contamination-free. However, it was also found that the duration of the mercuric chloride treatment is very critical due to the soft, herbaceous nature of the explants. Meanwhile, Patra et al. [6] stated that a 20-minute treatment with mercuric chloride would not only cause blackening of the tissues but also the subsequent death of the explants. Hence, a limited treatment of 3-4 minutes of mercuric chloride was employed in this study. Joshee et al. [7] reported that a 60-min soak in a 2% PPM solution before culture establishment, with all culture media supplemented with 2 mL/L PPM, helped to control fungal and bacterial contamination.

Figure 2. Effects of different surface sterilization methods on the percentage of clean cultures of C. asiatica, n = 4 (TS1 = Decon + benomyl (dip explants 20 min) + Clorox 15% and 10% (15 and 10 min, respectively); TS2 = TS1 + benomyl 100 mg/L in medium; TS3 = Decon + benomyl + Clorox 15%, 12 min + PPM in medium 2 mL/L; TS4 = TS3 + soak in PPM 2% for 1 hour; TS5 = Decon + cetrimide 1% + bavistin 150 mg/L + trimethoprim 50 mg/L + HgCl2 0.1% + PPM 2% soak and 2 mL/L in medium).

Semi-Solid Culture Method
Various media formulations and plant growth regulators, such as auxins and cytokinins, were assessed to regenerate shoots from the nodal segments of C. asiatica (Figures 3 and 4). Bud break did not occur during the initial 6-7 days after inoculation. However, bud break started in most of the cultures from days 11-12, and the small sprouted buds then proliferated into fully expanded shoots with leaves within 3-4 weeks. Meanwhile, various responses and significant differences were observed for shoot proliferation from the nodes in full-strength MS and Duchefa semi-solid media supplemented with different concentrations of BAP and NAA after 3 weeks of culture.
Although bud break was observed in all the media assessed in the present study, based on visual observation, treatment B2N0.1 in Duchefa medium was found to be the optimal combination in terms of the percentage of sprouted shoots (93 ± 0.667), number of shoots (5.2 ± 0.079) and nodes (4 ± 0.067) per explant, leaves per explant (14 ± 0.107), and shoot length (4.1 ± 0.67 cm). Meanwhile, MS supplemented with B2N0.1 recorded 3.5 ± 0.067 nodes/explant, 12 ± 0.067 leaves/explant, and a shoot length of 4 ± 0.067 cm. Nonetheless, there was no significant difference in shoot length between the full-strength MS and Duchefa semi-solid media containing B2N0.1 (Figures 3 and 4). The findings of the current study are consistent with those reported by Sharma [8] and Karthikeyan et al. [4] (using BAP at a maximum of 2-3 mg/L), but they contradict the reports published by Tiwari et al. [5] (using 5 mg/L BAP). In fact, the nodal segments that were cultured on the MS and Duchefa media showed bud break, an increase in nodes and leaves per explant, and an increase in the length of the shoots. At the same time, the frequency of responding explants was found to increase rapidly with the increase in BAP concentration up to 2 mg/L BAP + 0.1 mg/L NAA. Meanwhile, a further increase in the level of both phytohormones resulted in the formation of callus. Based on the results, shoot regeneration (%), shoot number, leaves/explant, nodes/explant, and shoot length are shown as the effects of the different concentrations of BAP and NAA, as well as the medium formulation (Figure 5). Meanwhile, there were no significant differences found between the means of the in vitro response of shoots in the different MS and Duchefa semi-solid media without PGR (B0N0) (MS = 45 ± 1.07; Duchefa = 46 ± 0.66). The two types of plant growth regulator applied in the study are cytokinins (BAP) and auxins (NAA, IBA). Cytokinins are derived from adenine and create two immediate effects on undifferentiated cells: the stimulation of DNA synthesis and enhanced cell division. Auxins are indole or indole-like compounds that promote cell expansion, in particular cell elongation. Auxins stimulate adventitious root development as well. In addition, light affects the physiological activity of IAA, whereas synthetic auxins (such as NAA) are not as light sensitive. Meanwhile, a high cytokinin to low auxin ratio promotes adventitious bud formation and overcomes apical dominance [9]. Rooting The analysis of variance revealed significant differences among the rooting treatments in terms of the frequency of cultures showing root regeneration, the number of roots/explant, and root length in the semi-solid method. There was no rooting on full- or half-strength MS basal medium without auxin, but rooting was found to occur in 70-95% of the shoots when both media were supplemented with IBA. Apparently, the full-strength MS medium was better than the half-strength MS medium for root initiation. A comparison using the Duncan test revealed that the optimum rooting frequency (95.2 ± 0.81%), number of roots/shoot (7.5 ± 0.107), and mean root length (4.5 ± 0.133 cm) occurred on shoots that were cultured on the full-strength MS medium containing 0.5 mg/L IBA (Figures 6 and 7). Banerjee et al. [2] also reported a positive effect of IBA on rooting in C. asiatica, whereas IAA and a low level of sucrose were found to be optimal by Patra et al. [6].
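To make these treatment comparisons concrete, the following is a minimal sketch of the kind of one-way ANOVA that underlies them, using invented replicate counts of roots per shoot for three IBA levels (the paper's raw replicate data are not reported, so all values and variable names here are hypothetical); SciPy has no built-in Duncan multiple range test, so only the omnibus test is shown.

# A minimal sketch of a one-way ANOVA across auxin treatments.
# All data values are invented for illustration only.
from scipy import stats

roots_iba_025 = [5, 6, 5, 4, 6]   # hypothetical roots/shoot at 0.25 mg/L IBA
roots_iba_050 = [8, 7, 8, 7, 7]   # hypothetical roots/shoot at 0.5 mg/L IBA
roots_iba_075 = [6, 6, 5, 6, 7]   # hypothetical roots/shoot at 0.75 mg/L IBA

f_stat, p_value = stats.f_oneway(roots_iba_025, roots_iba_050, roots_iba_075)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# If p < 0.05, a post-hoc multiple-range test (the paper uses Duncan's DMRT,
# which SciPy does not provide) would then separate the treatment means.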
Establishment of Plants in Soil In this study, the acclimatized plantlets were successfully established with almost 85% survival. According to Tiwari et al. [5], the rooted plants of C. asiatica which were transferred from the culture tubes/flasks into plastic cups containing Soilrite had 90% survival (180 out of 200 plants), while the acclimatized plantlets were successfully established (with only 5% mortality) in the field. Sivakumar et al. [10] conducted a similar series of experiments and demonstrated that, for acclimatization, well-developed rooted plants were transplanted into trays filled with a soil mixture containing Canadian sphagnum peat moss, perlite, and vermiculite, and maintained in a growth chamber for two weeks. The plantlets were then transferred into the glasshouse and potted in natural red soil for 2 more weeks. Afterwards, these potted plants were placed outside, i.e., under full sun, and this resulted in 95% survival. Evaluation of Different Sterilization Protocols Preliminary experiments with C. asiatica showed very high contamination during the culture establishment stage. This study elaborates on the protocol for successful sterilization of C. asiatica explants, which were prepared from field-grown plants with an abundance of fungal and bacterial contamination. For this purpose, five treatments were applied to find the best method to overcome contamination, as presented in Table 1. Table 1. Different treatments for sterilization of C. asiatica. To date, various methods have been developed and introduced for the sterilization of C. asiatica explants. In the present investigation, young shoots with nodes were collected. For sterilization treatment TS1, the plant materials were washed with Decon and then stirred in 1 g benomyl for 20 min. The next step was constant stirring of the plant materials in 15% and 10% Clorox (sodium hypochlorite) for 15 and 10 min, respectively. The treated plant materials were rinsed 3 times with autoclaved deionized water in a laminar flow cabinet. A different sterilization treatment was also employed (TS2), which included the TS1 procedure as well as benomyl (100 mg/L) in the medium. Nodal explants in TS3 and TS4 were treated using PPM in the medium (2 mL/L) and PPM as a soaking treatment, respectively. Sterilization treatment TS5 is described as follows: after removing the leaves and roots, the nodal segments (as part of the stolon) were washed with detergent using an anti-bacterial sponge and then rinsed thoroughly under running tap water. After that, nodal pieces approximately 2.5 cm in length were excised from the stolons. The nodal segments were soaked in a 1% cetrimide solution containing 150 mg/L bavistin and 50 mg/L trimethoprim for 20 min. The explants were then surface sterilized with 0.1% (w/v) mercuric chloride for 3-4 min, followed by five rinses with autoclaved sterilized distilled water. This was followed by 60 min soaking of the explants in 2% PPM and, finally, the addition of 2 mL/L of PPM into the medium. Later, the nodal explants were trimmed from both ends to about 1.5 cm prior to inoculation on the culture medium. Culture Medium The culture media used in the study included the Murashige and Skoog (MS) basal medium, as well as the shoot medium from Duchefa, using semi-solid culture. The composition of the Duchefa shoot medium differs from the MS medium in the amount of anhydrous NaH2PO4 (128 mg/L) as one of the macroelements and thiamine HCl (0.4 mg/L) as one of the vitamins.
Moreover, it contains only myo-inositol and thiamine HCl as vitamins [11]. The single nodes which were separated from the nodal segments were cultured on semi-solid full-strength MS [12] and the shoot medium (Duchefa), supplemented with 11 different combinations of BAP and NAA concentrations (Table 2). The shoot number per explant, nodes/explant, leaves/explant, and shoot length were measured at the end of the third week. The experiments were set up in a randomized complete block design (RCBD) with five replicates. There were 10 explants in each replicate. An analysis of variance (ANOVA) appropriate for the design was carried out to detect significant differences among the treatment means. The means for all the treatments were compared using the Duncan Multiple Range Test (DMRT) at a 5% probability level according to Gomez and Gomez [13]. Proliferating shoots were transferred to full- or half-strength MS medium containing different levels of IBA for rooting (Table 3) so as to identify the optimal auxin concentration and MS strength for root development after three weeks. Meanwhile, the percentage of root regeneration, number of roots/shoot, and root length were recorded. Culture Conditions The cultures were kept in an incubation room at 25 ± 2 °C under a 16-h photoperiod of 50 µmol/m²/s irradiance, provided by cool white fluorescent light, with 55-60% relative humidity. Conclusions An efficient protocol for in vitro propagation of the valuable medicinal plant C. asiatica (L.) through axillary shoot proliferation from nodal explants has been described. Multiple shoots were induced from nodal segments cultured on semi-solid Murashige and Skoog (MS) and Duchefa shoot media containing various concentrations of BAP and NAA. A comparison of the MS and Duchefa semi-solid media was made, and a difference in shoot multiplication was noted; apparently, the best-developed shoots were obtained using the Duchefa semi-solid medium containing B2N0.1 after three weeks. Therefore, it can be reiterated that the selection of the type of medium and PGR is important for optimizing the proliferation of shoots in culture. Elongated shoots were separated and rooted on half- and full-strength MS semi-solid media fortified with different concentrations of IBA ranging from 0 to 0.75 mg/L. The length of roots and the number of roots/shoot were found to be greatest in full-strength MS medium with 0.5 mg/L IBA. Moreover, about 95% of the shoots were rooted. The micropropagation protocol standardized in the present study is established as an efficient tool for mass production of Centella asiatica and can be employed for its conservation. In fact, reconsidering the hypothesis postulated at the beginning of the study, it is now possible to state that appropriate conditions for nodal segment culture of C. asiatica have been successfully established to cater for the continuous need for planting materials for the pharmaceutical industry. Regarding HgCl2, which is widely used to overcome contamination, it should be noted that it has detrimental effects on health and causes environmental issues. Hence, HgCl2 could be omitted from the sterilization protocol, although more experiments are needed to find the precise concentration of PPM (as a substitute for HgCl2) to improve the protocol. Moreover, this is the first time that the Duchefa shoot medium has been examined for C. asiatica and introduced as an efficient medium instead of MS.
Myocardial contrast echocardiography assessment of perfusion abnormalities in hypertrophic cardiomyopathy Background Perfusion defects during stress can occur in hypertrophic cardiomyopathy (HCM) from either structural or functional abnormalities of the coronary microcirculation. In this study, vasodilator stress myocardial contrast echocardiography (MCE) was used to quantify and spatially characterize hyperemic myocardial blood flow (MBF) deficits in HCM. Methods Regadenoson stress MCE was performed in patients with septal-variant HCM (n = 17) and healthy control subjects (n = 15). The presence and spatial distribution (transmural diffuse, patchy, subendocardial) of perfusion defects was determined by semiquantitative analysis. Kinetic analysis of time-intensity data was used to quantify MBF, microvascular flux rate (β), and microvascular blood volume. In patients undergoing septal myectomy (n = 3), MCE was repeated more than 1 year after surgery. Results In HCM subjects, perfusion defects during stress occurred in the septum in 80%, and in non-hypertrophied regions in 40%. The majority of septal defects (83%) were patchy or subendocardial, while 67% of non-hypertrophied defects were transmural and diffuse. On quantitative analysis, hyperemic MBF was approximately 50% lower (p < 0.001) in the hypertrophied and non-hypertrophied regions of those with HCM compared to controls, largely because of an inability to augment β, although hypertrophied regions also had blood volume deficits. There was no correlation between hyperemic MBF and either percent fibrosis on magnetic resonance imaging or outflow gradient, yet those with higher degrees of fibrosis (≥ 5%) or severe gradients all had low septal MBF during regadenoson. Substantial improvement in hyperemic MBF was observed in two of the three subjects undergoing myectomy, both of whom had severe pre-surgical outflow gradients at rest. Conclusion Perfusion defects on vasodilator MCE are common in HCM, particularly in those with extensive fibrosis, but have a different spatial pattern for the hypertrophied and non-hypertrophied segments, likely reflecting different contributions of functional and structural abnormalities. Improvement in hyperemic perfusion is possible in those undergoing septal myectomy to relieve obstruction. Trial registration ClinicalTrials.gov NCT02560467. Supplementary Information The online version contains supplementary material available at 10.1186/s12947-022-00293-2. Abnormalities in myocardial perfusion in the absence of coronary artery disease (CAD) occur in patients with hypertrophic cardiomyopathy (HCM) [1][2][3][4]. While fixed perfusion defects can occur from extensive HCM-related myocardial fibrosis [5], there is also a high prevalence of abnormal myocardial blood flow (MBF) reserve during vasodilator testing [1,3]. In some studies, reduced flow reserve has been attributed to low hyperemic perfusion rather than to the high resting flow that can occur from hypertrophy-related increases in wall stress and oxygen demand [1,6]. The existence of a relationship between inducible ischemia and the degree of either myocardial fibrosis or left ventricular outflow tract gradients in HCM is uncertain, based on mixed results [1,3,4,7]. Less controversial is the association between inducible ischemia and adverse clinical outcomes [1,3]. Ischemia in the absence of CAD in HCM has been attributed to structural abnormalities such as microvascular rarefaction and arteriolar medial hyperplasia [8][9][10].
Functional abnormalities are also possible from exaggerated systolic flow reversal and delayed diastolic forward flow in the distal coronary arteries, or from epicardial-endocardial perfusion pressure loss [11][12][13]. These mechanisms could contribute to ischemia in non-hypertrophied segments, which has been reported in HCM and even in high-risk gene carriers [14,15]. In the current study, vasodilator stress myocardial contrast echocardiography (MCE) perfusion imaging was performed in patients with septal-variant HCM to assess the prevalence of reversible ischemia in both the hypertrophied and non-hypertrophied regions, and to spatially characterize ischemia as transmural diffuse, patchy, or subendocardial in distribution. Using parametric analysis, abnormalities in MBF at rest or during vasodilator stress were classified as being attributable to an impairment in microvascular flux or to loss of functional microvascular units. In a small subset of subjects undergoing septal myectomy for symptomatic obstruction, repeat MCE was performed to assess for changes in myocardial blood flow reserve, based on the potential impact of reducing late intraventricular systolic pressures on vascular function. Subjects The study was approved by the Institutional Review Board at Oregon Health & Science University and registered with ClinicalTrials.gov (NCT02560467). The study design was a prospective, non-blinded study of seventeen subjects between the ages of 19 and 70 with a diagnosis of septal-variant HCM and fifteen age-matched subjects free of cardiac symptoms with no more than one CAD risk factor (lipid disorder, hypertension, diabetes, smoking) who were recruited to serve as normal controls. Subjects with HCM were recruited if they had a diagnosis made by echocardiography or cardiac magnetic resonance imaging (CMR) with a maximal septal thickness of 15 mm or greater, and had also undergone CMR with late gadolinium enhancement (LGE) imaging for quantification of fibrosis within the preceding 6 months. Subjects were excluded for known coronary or peripheral artery disease, significant valvular heart disease other than that caused by systolic anterior motion (which could be no more than moderate in severity), history of resuscitated sudden cardiac death (SCD), left ventricular (LV) systolic dysfunction (ejection fraction [LVEF] < 50%), pregnancy, contraindications to regadenoson, allergy to ultrasound enhancing agents, or elite athlete status. Additional exclusion criteria for HCM subjects included prior septal reduction therapy (either surgical or alcohol ablation), pacemaker-dependent rhythm, presence of LV aneurysm, or treatment with a cardiac myosin inhibitor. Symptom status and SCD risk Angina symptoms in subjects with HCM were determined by history. Risk for SCD was determined by an established risk prediction model based on subject age, echocardiographic indices, history of arrhythmia, symptoms, and family history [16]. Vasodilator stress myocardial contrast echocardiography Vasodilator stress MCE was performed in all subjects and was repeated at least 12 months after myectomy in those who were referred for surgical septal reduction. Subjects abstained from caffeine for 48 h. MCE perfusion imaging (iE33, Philips Ultrasound, Andover, MA) was performed at a centerline frequency of 2.0 MHz with multi-pulse amplitude-modulation imaging at a mechanical index of 0.12-0.16.
Overall gain was adjusted to levels just under those that produced background myocardial speckle. Images were acquired in the apical 4-chamber, 2-chamber, and long-axis imaging planes. Lipid-shelled microbubbles with a gas core containing either sulfur hexafluoride (Lumason, Bracco Diagnostics, Monroe Township, NJ) or octafluoropropane (Definity, Lantheus Medical Imaging, North Billerica, MA) were used. Lumason was reconstituted in 5 mL of normal saline, while activated Definity was diluted to a 30 mL total volume in normal saline. Infusion rates were kept constant for each individual at a rate of 1.0 to 1.5 mL/min. A 5-frame high-power (mechanical index > 0.9) sequence was applied to destroy microbubbles in the imaging sector through inertial cavitation, after which electrocardiographically-triggered end-systolic frames were acquired until visual replenishment had occurred. MCE was performed at rest and during vasodilator stress produced by intravenous administration of regadenoson (0.4 mg). Heart rate and blood pressure were recorded at baseline and three minutes after injection of regadenoson. MCE analysis Analysis was performed by a reader blinded to MRI and clinical data, other than the diagnosis of HCM, which is readily apparent on the echocardiogram. Perfusion was qualitatively defined as abnormal if there was lack of complete microvascular refill within 5 s at rest, or within 2 s during vasodilator stress [17] (see Additional file 1). Perfusion abnormalities were categorized as being evenly transmural, subendocardial, or patchy in appearance. Quantitative perfusion analysis was performed using software developed for MCE perfusion imaging (iMCE, Narnar LLC, Portland, OR). For control subjects, data were averaged from transmural regions-of-interest placed over each perfusion territory of the three major coronary arteries. For HCM subjects, transmural regions-of-interest were separately drawn over the hypertrophic and non-hypertrophic regions in either the 4-chamber or 3-chamber view. Regions with obvious rib artifact or cavity attenuation were excluded. The first frame after inertial cavitation was digitally subtracted from all subsequent frames, and background-subtracted time-intensity data were fit to the function y = A(1 − e^(−βt)), where y is signal intensity at time t, A is the plateau intensity reflecting relative microvascular blood volume (MBV), and β is the rate constant reflecting microvascular blood flux rate. MBF was quantified by the product of MBV and β [18]. Echocardiography Echocardiography (iE33, Philips Ultrasound, Andover, MA) was performed to assess chamber dimensions, wall thickness, left ventricular function, and peak LVOT gradient according to guidelines published by the American Society of Echocardiography [19]. LV volumes, LVEF, and stroke volume in HCM subjects were calculated using the modified Simpson's method. Stroke volume in controls was calculated as the product of LVOT area and the time-velocity integral measured by pulsed-wave spectral Doppler. LV stroke work index was calculated as: 0.0136 × mean arterial pressure × stroke volume index. For HCM subjects, the LVOT gradient was added to mean arterial pressure, although this approach overestimates actual work based on the end-systolic nature of the gradient. Myocardial work index was calculated as the product of stroke work index and heart rate. The LVOT gradient in subjects with HCM was measured both at rest and during vasodilator stress upon completion of MCE perfusion imaging.
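As an illustration of the kinetic analysis just described, the following is a minimal sketch (not the authors' actual iMCE implementation) of fitting the destruction-replenishment model to background-subtracted time-intensity data; the time points and intensities are invented for demonstration, and SciPy's curve_fit is used for the nonlinear fit.

# Minimal sketch of fitting the replenishment model y = A*(1 - exp(-beta*t))
# to background-subtracted time-intensity data (all values hypothetical).
import numpy as np
from scipy.optimize import curve_fit

def replenishment(t, A, beta):
    # A: plateau intensity (relative microvascular blood volume, MBV)
    # beta: microvascular flux rate constant (1/s)
    return A * (1.0 - np.exp(-beta * t))

t = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 5.0])   # s after bubble destruction
y = np.array([3.1, 5.2, 6.4, 7.1, 7.8, 8.0, 8.1])   # hypothetical intensities

(A, beta), _ = curve_fit(replenishment, t, y, p0=[y.max(), 1.0])
mbf = A * beta   # relative myocardial blood flow = MBV x flux rate
print(f"A = {A:.2f}, beta = {beta:.2f} 1/s, MBF (relative) = {mbf:.2f}")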
Assessment of fibrosis by CMR A standardized CMR protocol was performed on a 1.5 Tesla (T) scanner (Philips 1.5 Achieva or Integra) with multi-channel phased-array chest coils and electrocardiographic gating. Cine steady-state free precession imaging was performed covering the whole heart in 8 mm thick slices, though these data were not used for analysis. For LGE, a phase-sensitive inversion-recovery sequence was acquired 12-15 min after intravenous gadolinium contrast administration (0.2 mmol/kg). The distribution and extent of LGE were assessed both visually and quantitatively using the six standard-deviation threshold according to the Society for Cardiovascular Magnetic Resonance standards [20]. Statistical analysis Data were analyzed using Prism (version 9.0, GraphPad, San Diego, CA). For data determined to be normally distributed by the D'Agostino and Pearson omnibus test, differences were assessed by one-way ANOVA with post-hoc comparisons made by paired or unpaired Student's t-test with Tukey's test to adjust for multiple comparisons. Unless otherwise described, normally-distributed data are expressed as mean ± standard deviation. Differences for non-normally distributed data were assessed with Friedman's test, with post-hoc individual comparisons by Mann-Whitney U test for non-paired data or Wilcoxon signed-rank test for paired data. Non-normally distributed data are expressed as median with interquartile range (IQR). Differences in proportions were compared using χ² analysis. Relationships between MCE perfusion data and other clinical data were determined using either Spearman's rank correlation coefficient (ρ) or the Pearson correlation coefficient. Differences were considered significant at p < 0.05. Clinical characteristics Vasodilator stress MCE could be performed in all control subjects, although quantitative MCE analysis was deemed unreliable in two subjects because of poor image quality. Two HCM patients were excluded because of either severe symptoms with regadenoson requiring immediate reversal with aminophylline, or discovery of multivessel CAD on angiography after stress MCE and LGE revealed abnormalities in a pattern typical for CAD. Clinical characteristics of the final study group are shown in Table 1. Of the subjects with HCM, 40% had a history of angina. The number of risk factors for CAD and the use of cardiovascular medical therapy tended to be greater in the HCM group, although the only individual features that were significantly different between groups were the use of disopyramide and beta blockers. Key echocardiography variables are shown in Table 2. As expected, septal end-diastolic wall thickness, LVEF, and LVOT pressure gradient at rest were greater in HCM compared with normal control subjects. There were non-significant trends for smaller cavity dimensions in HCM. LV stroke work index and myocardial work index, which were calculated inclusive of end-systolic LVOT pressure gradients, were significantly higher in HCM than in control subjects. Perfusion imaging Vital signs and hemodynamic measurements at rest and during vasodilator stress are shown in Table 3. An increase in heart rate and rate-pressure product during regadenoson stress was seen in control subjects but not HCM subjects, probably as a result of more frequent use of beta blockers and calcium channel antagonists in the HCM cohort. There was no major change in peak LVOT gradient from rest to the vasodilator stress stage in the HCM cohort.
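Before turning to the perfusion results, note that the stroke work and myocardial work indices defined in the Methods reduce to simple arithmetic; the sketch below works through them with invented hemodynamic values (all numbers hypothetical, chosen only to show the calculation, including the HCM-specific addition of the LVOT gradient to mean arterial pressure; the g·m/m² unit follows from the 0.0136 conversion factor and is an assumption here).

# Worked example of the work indices defined in the Methods (hypothetical values).
map_mmhg = 95.0        # mean arterial pressure (mm Hg)
lvot_gradient = 60.0   # peak LVOT gradient (mm Hg), added for HCM subjects
svi_ml_m2 = 40.0       # stroke volume index (mL/m^2)
heart_rate = 70.0      # beats/min

# LV stroke work index = 0.0136 x (MAP [+ LVOT gradient for HCM]) x SVI
swi = 0.0136 * (map_mmhg + lvot_gradient) * svi_ml_m2   # g*m/m^2 (assumed units)
mwi = swi * heart_rate                                  # myocardial work index
print(f"SWI = {swi:.1f} g*m/m^2, MWI = {mwi:.0f} g*m/m^2/min")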
On qualitative analysis of MCE, myocardial perfusion was normal at rest and during vasodilator stress in all but one of the control subjects, who had a mild global diffuse reduction in contrast replenishment. In two HCM subjects (13%), patchy or diffuse hypoperfusion of the hypertrophied septum was observed at rest. During vasodilator stress, perfusion defects were observed in the majority of HCM subjects (Fig. 1). These defects were more common in the hypertrophied than non-hypertrophied segments. The spatial distribution of these perfusion defects was most commonly subendocardial or patchy rather than transmural and diffuse (Fig. 1). Defects in the non-hypertrophied territories were seen in over one-third of subjects, in which case they tended to be transmural and diffuse. On quantitative analysis (Fig. 2), MBF in both the hypertrophied and non-hypertrophied segments in HCM patients was modestly lower than in normal control subjects, despite echocardiographically-determined myocardial work being higher in those with HCM. Consistent with previous studies, regadenoson increased MBF primarily through an increase in microvascular flux rate (β) [21]. During vasodilator stress, differences in MBF between HCM and normal control subjects were greater than at rest, which was attributable primarily to slower microvascular flux rate, although MBV in hypertrophied segments in HCM was persistently lower than in non-hypertrophied segments or in control subjects. Perfusion abnormalities and HCM clinical variables On CMR, the median LV fibrosis, expressed as a percent of LV area with LGE, was 2.0% (95% CI: 0.0-6.0). Myocardial fibrosis, defined as > 2% of myocardial area, was present in six subjects and was found only in hypertrophied segments. All six subjects with fibrosis were graded as having abnormal vasodilator stress MCE perfusion imaging. Quantitative MCE at rest and during stress in the hypertrophic segment was not significantly different according to fibrosis status, and there was no relationship between fibrosis area and microvascular perfusion (Fig. 3A to H). Yet, subjects with significant fibrosis, defined as > 5%, all tended to have low MBF at rest and during stress. For subjects with a history of anginal chest pain (n = 6), obstructive CAD had been excluded by angiography in half. Myocardial perfusion and parameters of microvascular flux rate and MBV during stress in the hypertrophic segment were not significantly different according to anginal status (Fig. 3I to K). There was no significant relationship between LVOT gradient at rest and myocardial perfusion at rest or during stress (Fig. 3L). There was also no significant relationship between septal thickness and myocardial perfusion at rest or during stress (p = 0.98). Changes in myocardial perfusion after myectomy Three subjects with HCM underwent surgical myectomy with or without papillary muscle realignment; two of them had severe resting LVOT obstruction (> 100 mm Hg gradient), and one had severe obstruction (80 mm Hg) only during low-intensity exercise. The post-myectomy LVOT gradient at rest was < 15 mm Hg in all three subjects, and the peak gradient during exercise was 23 to 52 mm Hg. Repeat vasodilator stress MCE performed more than one year after myectomy demonstrated a significant increase in hyperemic myocardial perfusion in both the hypertrophied and non-hypertrophied segments in two subjects (Fig. 4), both of whom had severe LVOT obstruction at rest before myectomy.
In these individuals, improvement in MBF post-myectomy was attributable primarily to an increase in microvascular flux rate. A decrease in stress perfusion post-myectomy was seen in the subject who had a high LVOT gradient only during exercise and who also had very high exercise perfusion pre-myectomy. Discussion Hypertrophic cardiomyopathy is a disease with a wide degree of phenotypic variability. Clinical studies demonstrating inducible ischemia in patients with obstructive HCM were published four decades ago using non-quantitative radionuclide imaging [2,22,23]. Since then, quantitative perfusion imaging with positron emission tomography, CMR, and MCE in patients with HCM has confirmed that myocardial blood flow in the hypertrophied and non-hypertrophied regions is frequently reduced during exercise or vasodilator stress, and occasionally at rest [1,3,4,7]. In the current study, we demonstrated that vasodilator stress MCE with regadenoson can be used to identify abnormalities in perfusion at rest and during stress in patients with HCM, and that the spatial manifestations of perfusion defects are varied, with subendocardial or patchy abnormalities being more common than transmural diffuse defects in hypertrophied segments. A substantial reduction in hyperemic perfusion during vasodilator stress is commonly found in those with large amounts of fibrosis detected by LGE. We also demonstrated that improvement in hyperemic perfusion in both the hypertrophied and non-hypertrophied regions can occur late after septal myectomy. In HCM, reduced perfusion reserve in the absence of atherosclerotic CAD has been attributed, in part, to structural abnormalities of the vasculature. On histopathology, medial hyperplasia and lumen narrowing of small coronary arteries and arterioles, and a reduction in myocardial capillary density in hypertrophic regions, have been described in HCM [8][9][10]. The latter feature indicates a failure in compensatory remodeling of the distal circulation to address the increased LV mass, cellular hypertrophy, and increased LV work and wall stress in HCM. Fig. 2 Quantitative MCE perfusion data (mean ± SEM) at rest and during vasodilator stress from normal control subjects and in the non-hypertrophied and hypertrophied regions from HCM subjects. Data include: (A) microvascular blood flow, (B) microvascular blood volume, and (C) microvascular flux rate (β) derived from time-intensity data. Because of the compensatory reserve in the capacity of arterioles to dilate and capillary units to recruit, resting perfusion can be preserved in most patients with HCM. Yet, partial exhaustion of reserve and increased resistance from arteriolar narrowing and reduced capillary density are expected to produce myocardial ischemia during hyperemic stress or increased metabolic demand. Functional abnormalities of the microcirculation in HCM have also been described. Extravascular compressive forces from high LV systolic and diastolic pressures, in combination with the normal transmural pressure drop, would be expected to reduce maximal flow in HCM, particularly in the endocardium. This mechanism has been proposed to explain reduced endocardial flow reserve in HCM, particularly in patients with high LV end-diastolic pressures or extreme septal hypertrophy [4,7]. Abnormalities in the phasic flow of coronary arteries can occur from altered hemodynamic forces in HCM.
Studies using invasive coronary flow wires or noninvasive coronary wave intensity measurements have revealed a marked predominance of diastolic flow and more prominent retrograde systolic flow in the distal coronary arteries in subjects with HCM, which can be further accentuated by inotropic stress [12,24]. Exaggerated retrograde flow combined with delayed or shortened diastolic relaxation can result in reduced antegrade discharge from small arteries or large arterioles that normally act as a type of "hydraulic capacitor" [12,25]. From a clinical perspective, this functional abnormality is likely to worsen as LV end-systolic pressure, myocardial diastolic pressure, and heart rate increase. In the current study, the spatial distribution of perfusion abnormalities during vasodilator stress, whether from structural or functional causes, was assessed by MCE. This technique provides parametric information on whether abnormalities in perfusion are secondary to reduced MBV, which can occur from either capillary rarefaction or functional non-patency of microvascular units [26]. It also measures microvascular flux rate, which can be reduced by high resistance anywhere along the vascular network [27]. At rest, MCE revealed very modest reductions in MBV in both the hypertrophic and non-hypertrophic regions of patients with HCM, despite these subjects having higher systolic wall stress and work. There is reason to believe that this abnormality arose from abnormalities in phasic flow, based on results from previous studies showing a high degree of cyclic video intensity at the LV apex, primarily from low systolic intensity, during resting MCE in patients with apical HCM [28]. Ordinarily, our finding of reduced perfusion at rest and increased work would be expected to result in ischemia. Yet these subjects were not symptomatic, and LV systolic function was normal. This paradox could be related to compensatory mechanisms that increase oxygen delivery, even out of proportion to calculated work, based on studies using 11C-acetate PET indicating that myocardial oxygen consumption is not reduced in subjects with HCM who have normal to high LVEF [29]. Perfusion abnormalities in those with HCM became much more prominent during vasodilator stress, primarily because of a deficit in the ability to appropriately augment microvascular flux rate. This finding is somewhat different from previous quantitative MCE studies, which found that reduced MBF in HCM, both at rest and during stress, is attributable to abnormal MBV [7]. We believe the differences between the two studies can be explained by the much less severe LVOT obstruction in the current study. We found that the dominant spatial pattern for hyperemic flow deficits was subendocardial or patchy in distribution. These patterns do not indicate any one mechanism, since they could occur from a pre-capillary pressure drop caused by arteriolar narrowing, from microvascular rarefaction, or from phasic functional abnormalities of coronary flow. We observed a marked improvement in hyperemic flow, including in non-hypertrophied territories, after septal myectomy in two subjects who had very high resting LVOT gradients and low hyperemic flow (approximately one-third of the control-subject average) prior to surgical intervention. Fig. 4 (A) Myocardial blood flow (MBF) in the hypertrophic and non-hypertrophic segments in patients with HCM during vasodilator stress, showing individual data from the initial study (baseline) and after surgical myectomy (n = 3 subjects, data points in red). (B, C) End-systolic images in the apical 4-chamber view during vasodilator MCE from a single subject prior to myectomy (B) and more than one year after myectomy (C); the end-systolic images shown were acquired immediately after microbubble destruction (T0) and at approximately two seconds (T2) after replenishment and illustrate improvement in contrast enhancement in the hypertrophied and remote segments.
This finding suggests that abnormal flow from high systolic compressive forces, in combination with delayed relaxation, can affect global myocardial perfusion and is reversible late after correction of the high systolic gradient. There are several important limitations of the study. The total number of subjects studied and the number of subjects undergoing myectomy were low because of strict entry criteria, including the need for recent CMR and exclusion for treatment with a myosin inhibitor, which was being investigated concurrently with recruitment for this study. Yet, the data indicating a potential beneficial effect of myectomy on perfusion can be used to justify a larger prospective study in that narrow population of patients. Although MCE can be used to calculate absolute MBF in mL/min/g, this analysis was not performed because the requisite calculation of absolute MBV is valid only if the blood pool microbubble signal is below the upper limit of the dynamic range, which generally requires lower contrast infusion rates and appropriate scaling. Perfusion data were also not expressed as MBF normalized to work because of limitations in using end-systolic pressures to reflect total systolic load. Instead, we simply concluded that perfusion deficits in HCM occurred despite a greater workload based on high systolic LV pressures. It should also be noted that vasodilator stress rather than exercise stress was used. The latter would provide a better test for stress-induced deficits in MBV, although the level of stress induced would be difficult to quantify given the difficulties in determining true afterload in those with dynamic gradients. Similarly, we have not tested other vasodilator agents, such as NO donors, that could produce different results based on their prominent effects on pre-load, which would affect wall stress and myocardial work, and based on differences in the part of the circulatory network where NO acts. Finally, ischemia from CAD was excluded by angiography in only about half of the HCM subjects, all of whom had anginal symptoms. One patient was excluded from analysis based on the presence of severe CAD on angiography performed because of a typical coronary distribution of perfusion deficits on stress MCE. Conclusions In summary, using vasodilator stress MCE we have demonstrated that hyperemic perfusion defects in HCM are common and manifest as a variety of different spatial patterns that vary according to whether they are present in the hypertrophied versus non-hypertrophied segments. Resting defects were primarily from abnormalities in functional MBV, whereas stress-induced defects were primarily from abnormalities in microvascular flux rate. Consistent with some other studies, there was not a strong relation between LVOT gradients and flow deficits. Yet those with the highest LVOT gradients all tended to have low hyperemic flow, which is potentially treatable by myectomy.
Seroprevalence of antibodies to measles, mumps, and rubella among Thai population: evaluation of measles/MMR immunization programme. Stored serum specimens, from four regions of Thailand, of healthy children attending well baby clinics and of healthy people with acute illnesses visiting outpatient clinics were randomly sampled and tested for IgG antibody to measles, mumps, and rubella (MMR). The immunity patterns of rubella and mumps fitted well with the history of rubella and MMR vaccination, seroprotective rates being over 85% among those aged over seven years. A high proportion of younger children acquired the infections before the age of vaccination; MMR vaccination should preferably be given to children at an earlier age. For measles, the 73% seroprotective rate among children aged 8-14 years, who should have received two doses of measles/MMR vaccine, was lower than expected. This finding was consistent with the age-group reported in outbreaks of measles in Thailand. The apparent ineffectiveness (in relation to measles) of MMR immunization of 1st grade students warrants further studies. INTRODUCTION The immunization programme in Thailand, commenced in 1980, currently provides vaccines to protect against 10 childhood diseases, namely tuberculosis, hepatitis B, diphtheria, pertussis, tetanus, poliomyelitis, measles, mumps, rubella, and Japanese encephalitis, through scheduled EPI sessions in hospitals and health centres around the country. The first dose of measles vaccination was incorporated into the national immunization programme for children aged nine months in 1984. The second dose of measles vaccine was added in 1996 for 1st grade students aged seven years. In 1997, the second dose of measles vaccine was replaced by measles-mumps-rubella (MMR) vaccine. Rubella immunization was first provided to 6th grade female students aged 12 years during 1986-1998 and, later, to 1st grade students of both sexes during 1993-1996, before being replaced by MMR vaccine in 1997 as mentioned above (1). Ages in 2004 of the population covered by the measles and MMR immunization programme are provided in Table 1. Surveys indicated that the coverage of the 1st dose of measles vaccine was 48% in 1987, 82% in 1991, and above 90% since 1996. In the last survey, in 2003, the coverage of the 1st dose of measles vaccine was 96% (2). The coverage of MMR vaccine among 1st grade students, surveyed in 2004, was 94% (3). As in other countries, the incidence of measles in Thailand has reduced dramatically since the introduction of live measles vaccine into the routine immunization programme (4)(5)(6). The number of measles cases reported in the National Disease Surveillance System has declined since 1984, with an outbreak peak every 3-4 years, and mortality due to measles has become extremely rare (Fig. 1). The last peak years were 2001 and 2002 (11.8-16.5 per 100,000 people) (6). The highest incidence was observed in children who were too young for vaccination (7). Outbreaks of measles in children aged less than five years occurred exclusively in hard-to-reach areas where the coverage of vaccine was low. Nevertheless, outbreaks among urban and rural children aged 7-15 years still occur occasionally. In the case of rubella, MMR vaccine is administered to school-age children aiming at preventing congenital rubella syndrome (CRS) and reducing morbidity.
The incidence of rubella in Thailand is declining (Fig. 1), the reported rubella morbidity rate in 2003-2006 being only 0.61-0.78 per 100,000 people (6). No outbreak of CRS has been noted in the last 10 years, but the significance of this may be questionable as Thailand does not list CRS as a notifiable disease. The purpose of mumps vaccination in Thailand is to reduce its associated complications and morbidity. The disease-surveillance data show high outbreak peaks in 1995-1996; after that, the incidence declined. During 2003-2006, the incidence of mumps was 12.2-17.6 per 100,000 people (6). Although the epidemiological changes seen in the incidence of MMR in Thailand correspond well with the immunization history and levels of coverage, the national immunization programme still needs to verify actual levels of immunity. Such information would guide vaccination strategies both in preventing future outbreaks and in pursuing the more ambitious targets of elimination or eradication. Accordingly, the main objective of this study was to review the seroprevalence of IgG antibodies to MMR among the Thai population after many years of vaccination against these diseases. MATERIALS AND METHODS The Ethical Committee for Research in Human Subjects, Department of Disease Control, Nonthaburi, approved the study. No field serum specimens were collected, and no data were collected specifically for this study. The serum specimens were those remaining from the 2004 hepatitis immunity study (described below). Vaccination history was available only in relation to measles vaccination of children aged less than five years who had vaccination cards. Serum specimens The sera used in the study were taken from 6,213 specimens collected for the 2004 hepatitis immunity study (8,9). The samples were collected from people in four provinces: Chiang Rai, Udon Thani, Chon Buri, and Nakhon Si Thammarat, having populations broadly representative of those in the northern, northeastern, central, and southern regions, respectively (Fig. 2). Age-groups in year(s) were re-categorized to better demonstrate the seroprevalence before and after the measles and MMR vaccination ages. Sera were collected and kept at -70 °C until tested. We used commercially-available ELISA IgG assays for measles (RE56611; IBL Immuno-biological Laboratories, Hamburg, Germany), mumps (RE56641; IBL), and rubella (RE57081; IBL). The laboratory results were interpreted according to the instructions of each manufacturer, except for measles IgG. The positive cut-off point for mumps was at >12 U/mL, and for rubella, at >15 IU/mL. For measles IgG, we calibrated the test using the National Substandard of Anti-Measles-Serum, Human, 1/92 (calibrated against the 1st International Standard Anti-Measles-Serum, Human 66/202) provided by the Robert Koch Institute, Berlin, Germany. We set the positive cut-off point at 255 mIU/mL (PRN titre=120) (10,11). The calibration was intended to translate the test results into international units and to avoid the specificity of the test being too high. It indicated that the original cut-off point provided by the manufacturer was equivalent to 601 mIU/mL. Analysis of data Data were analyzed by determining the geometric mean titre for each viral IgG by age-group and the seroprotective rate (the proportion of specimens with an antibody level above the cut-off point for each viral IgG) by age-group and by province. The chi-square test (p=0.05) was used for assessing the statistical significance of the differences between proportions.
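The two summary statistics used throughout the Results, the seroprotective rate and the geometric mean titre (GMT), are straightforward to compute; below is a minimal sketch with invented measles IgG titres (the individual titres are not published, so all values here are hypothetical).

# Minimal sketch of the two summary statistics described above
# (all titre values are invented for illustration only).
import math

titres = [300, 150, 890, 40, 520, 1200, 260, 95]   # hypothetical measles IgG, mIU/mL
cutoff = 255                                        # positive cut-off point, mIU/mL

# Seroprotective rate: proportion of specimens above the cut-off point
seroprotective_rate = 100 * sum(t > cutoff for t in titres) / len(titres)
# GMT: exponential of the mean of log titres
gmt = math.exp(sum(math.log(t) for t in titres) / len(titres))
print(f"seroprotective rate = {seroprotective_rate:.0f}%, GMT = {gmt:.0f} mIU/mL")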
Measles IgG In total, 1,092 serum specimens were tested for measles IgG. At a positive cut-off point of >255 mIU/mL, 81% of the specimens (95% confidence interval [CI] 78.8-83.5) had protective antibody levels. The seroprotective rates for those aged below 1, 1-7, 8-14, and 15-19 year(s) were 27%, 76%, 73%, and 82%, respectively, which were lower than expected. For the age-group of 20 years and above, the seroprotective rates were 91-96% (Table 2). The geometric mean titre (GMT) rose sharply in those aged 1-7 year(s) compared to those aged less than one year, but there was no increase in GMT in those aged 8-14 years compared to those aged 1-7 year(s) (Table 2). Overall, the seroprotective rates of measles were similar (p=0.19) among provinces: Chiang Rai-77.4%, Udon Thani-81.3%, Chon Buri-82.3%, and Nakhon Si Thammarat-84.4%. The seroprotective rates among females were higher than among males (85% vs 76%, p<0.001). Of 139 children aged less than five years with a history of measles vaccination documented in a vaccination card, 77% had protective immunity (Table 3). Analyses by age, sex, and province in this group showed no significant differences. Rubella IgG In total, 899 serum specimens were tested for rubella IgG. At a positive cut-off point of >15 IU/mL, 89% (95% CI 86.8-91.0) had protective antibody levels. The seroprotective rates were over 85% among children aged over seven years and among adults (Table 2). The GMT increased sharply at 8-14 years of age compared to the younger age-group (Table 2). Among those aged 20-24 years and 25-29 years, the seroprotective rates were higher among females than among males, but the differences (possibly reflecting rubella vaccination of 6th grade female students during 1986-1998) were not statistically significant (96% vs 84%, p=0.14 and 89% vs 79%, p=0.34). The data indicate a higher seroprotective rate in Chiang Rai (95.1%) than in the other provinces (Udon Thani-85.5%, Chon Buri-88.3%, and Nakhon Si Thammarat-87.8%, p=0.007). Mumps IgG In total, 911 serum specimens were tested for mumps IgG. At the positive cut-off point of >12 U/mL in the testing manual, 82% (95% CI 78.9-84.0) had protective antibody levels. As with rubella IgG, the seroprotective rate was over 85% both among children aged seven years and above and among adults (Table 2). The GMT increased sharply at 8-14 years of age compared to the younger age-group and seemed to be higher in the older age-groups (Table 2). The differences in seroprotective rates between males and females were not statistically significant (80% vs 82%, p=0.66). The seroprotective rates were higher in Chiang Rai (89%) than in the other provinces (Udon Thani-75.9%, Chon Buri-79.8%, and Nakhon Si Thammarat-81.2%, p=0.001). DISCUSSION Results of this study could reasonably be considered broadly representative of the Thai population but are probably not indicative of the situation of minority populations living in particular locations (e.g., the hill-tribes). The samples were collected from both urban and rural areas and from provinces in different parts of Thailand. This might be expected to introduce a bias towards urban populations having better access to healthcare but, with regard to immunization coverage, major differences between urban and rural populations within the same provinces would not be expected. This study found that the immunity patterns of rubella and mumps were as would be expected from the rubella and MMR vaccination in Thailand.
The effects of MMR and the previous rubella vaccination programme (from 1986 to 1998, targeting 6th grade girls) were apparent in the age-group of 8-14 years and in women aged 20-30 years. The seroprotective rates of 46% for mumps and 75% for rubella among children below the age at which they would receive MMR vaccination suggest immunity acquired through continued transmission of wild virus in the community. To interrupt the transmission of virus in the future, MMR vaccine needs to be given to these younger children. Lowering the rubella IgG antibody cut-off point to 10 IU/mL has been proposed, following epidemiologic evidence that the 10 IU/mL antibody level is protective in the vast majority of persons (12). Using this new cut-off point raised the overall seroprotective rate of rubella from 89% to 93%. It also raised the seroprotective rate among children aged 0-7 year(s) (children below the age of vaccination) from 75% to 83%, suggestive of substantial transmission of wild virus among children before the vaccination age. The higher seroprotective rates for mumps and rubella in the north compared to other regions were probably due to higher levels of virus transmission rather than a better vaccination programme. This conclusion reflects the finding that the higher seroprotective rates persisted across all age-groups, i.e., they were not specific to the age-groups covered by the immunization programme. For mumps, this finding is consistent with the surveillance report showing outbreaks of mumps in the north in 1990-1991 and 1995-1996, peaking at 109.62 per 100,000 people (13,14). However, the surveillance report suggested that the incidence of rubella had been low in the north (especially in Chiang Rai, where the incidence rate is 0.16-1.51 per 100,000 people) and in other regions throughout the last decade. These contradictory findings suggest possible under-reporting of rubella and, perhaps, low awareness of the importance of detection of CRS. The level of measles antibody found in this study raised concerns. One factor which may explain the apparently low positive rate was an 'equivocal group' having antibody levels of 150-254 mIU/mL. A measles seroepidemiology study conducted in seven countries of Western Europe found that the proportion of this equivocal group was high in the vaccinated age-groups, which is probably explained by a lower antibody titre after vaccination (or more rapid antibody loss) than after natural infection (15). We found that 5.3% of our serum specimens fell in this equivocal group. The proportions of equivocal cases among those aged less than 1, 1-7, 8-14, and 15-19 year(s) were 2.9%, 6.5%, 5.7%, and 4.8%, respectively. The 77% positive and 6% equivocal rates among children aged less than five years, who had a documented single dose of measles vaccine at 9-12 months, were in the acceptable range. Several hospital-based studies in Thailand found that almost 100% of infants had no antibodies to measles at nine months of age (16,17). After a single dose of measles/MMR vaccine at nine months of age, 81-91% seroconversion rates (antibody level cut-off point of 320 mIU/mL in two studies) were obtained (16)(17)(18), which are comparable with the results of studies in other developing countries (19)(20)(21). However, the effectiveness of vaccine might vary as a result of field practice, which can be in the hands of relatively junior health officials.
The effectiveness of measles vaccine (when administered at nine months of age) was evaluated during several outbreaks in Thailand, with varying results. In 1994, for example, the effectiveness of the vaccine was estimated to be 35-40% in a particular population living in a remote mountainous area (22). In 1995, it was estimated at 59-70% during one outbreak (23) but 85% in another (24). In 2002, one study reported 91% (25) while another reported 87% (26). Unexpectedly, a greater seroprotective rate or higher GMT for measles was not seen among children aged 8-14 years (who were supposed to have received the MMR vaccine at the age of 7 years) compared to those aged 1-7 year(s). We are unsure of the reasons for this, but two hypothetical explanations seem plausible. The first, although not documented elsewhere, is a rapid waning of immunity after vaccination or secondary vaccine failure among school-age children. That is, immunity could have waned in children who were vaccinated but not subsequently exposed to circulating virus (27,28). The second possibility would be a potency problem with the measles component of the MMR vaccine used in the programme. While some issues remain unresolved, the results of the present study do suggest explanations for Thailand continuing to have outbreaks of measles among school and college students. During 2006-2007, there were nine outbreaks of measles reported to the National Epidemiology Office (29). Of these, five occurred among minority populations with low vaccine coverage, i.e., hill-tribes, refugees, and displaced persons. Another four outbreaks occurred among both urban and rural populations, with outbreak sizes ranging from three to 86 patients. The populations affected were secondary school students (86 and 7 persons) in two outbreaks, vocational school students (46 persons) in one outbreak, and young adults (3 persons) in one outbreak. The high proportion of measles-susceptible school and college students means that Thailand may yet, at some point, have a nationwide outbreak. 'Mop-up' and 'catch-up' campaigns have reportedly been successful in raising the immune proportion of the population in many countries without major adverse effects (30,31). It may be prudent to consider similar campaigns in Thailand before an explosive outbreak occurs. Our conclusion is that, at the time of this study, the majority of the Thai population aged over seven years was immune to mumps and rubella. MMR immunization of 1st grade students sharply reduced the number of children susceptible to mumps and rubella; however, a large proportion of younger children acquired the infections before the age of vaccination. To reduce the natural spread of the diseases, MMR vaccination should be given to children at an earlier age. For measles, high susceptibility was found among children aged 8-14 years. This finding was consistent with the age-group reported as affected during outbreaks of measles in Thailand. The apparent ineffectiveness, in relation to measles, of MMR immunization administered to 1st grade students warrants further study.
Evidences of +896 A/G TLR4 Polymorphism as an Indicative of Prevalence of Complications in T2DM Patients T2DM is today considered a worldwide health problem, with complications responsible for increased mortality and morbidity. Thus, new strategies for its prevention and therapy are necessary. For this reason, research interest has focused on TLR4 and its polymorphisms, particularly rs4986790. However, no conclusive findings have been reported so far about the role of this polymorphism in the development of T2DM and its complications, even if a recent meta-analysis showed its association with T2DM in Caucasians. In this study, we sought to evaluate the weight of the rs4986790 polymorphism in the risk of the major T2DM complications, including 367 T2DM patients, 55.6% of whom had complications. Patients with A/A and A/G TLR4 genotypes showed significant differences in complication prevalence. In particular, AG carriers had a higher prevalence of neuropathy (P = 0.026), lower limb arteriopathy (P = 0.013), and the major cardiovascular pathologies (P = 0.017). Their cumulative risk was significant (P = 0.01), with a threefold risk of developing neuropathy, lower limb arteriopathy, and major cardiovascular events in AG cases compared to AA cases. The OR adjusted for the confounding variables was 3.788 (95% CI: 1.642-8.741). Thus, the rs4986790 polymorphism may be indicative of the prevalence of complications in T2DM patients. Introduction Type 2 diabetes mellitus (T2DM) is becoming a common worldwide disease with epidemic proportions in many populations [1]. Environmental changes promoting unhealthy behaviours and the development of obesity and overweight around the world have been suggested as the principal causes [2]. In addition, related diabetes complications (i.e., chronic arterial disease of the lower limbs, carotid arterial diseases, ischemic heart diseases, neuropathies, nephropathy, and chronic kidney failure) are responsible for increased morbidity and mortality [1]. Thus, knowledge of the pathophysiological mechanisms involved in the occurrence of T2DM and related complications is crucial for successful prevention and new therapeutic treatments. It is well recognised that defective insulin secretion by pancreatic β-cells and diminished insulin sensitivity in peripheral tissues characterise T2DM. In addition, recent evidence considers the occurrence of T2DM and its complications as the result of a state of chronic, systemic, low-grade inflammation, in accordance with the metainflammation hypothesis [3,4]. Elevated levels of several circulating inflammatory molecules constitute a common feature in the natural course of diabetes [5,6]. Accordingly, pancreatic β-cells, under certain pathological conditions, produce and release the proinflammatory cytokine interleukin-1β (IL-1β). IL-1β can in turn impair β-cell function and induce apoptosis [7,8]. In recent years, it has also been proposed that T2DM may be the consequence of the stimulation of Toll-like receptors (TLRs), a family of pattern-recognition receptors able to detect conserved microbial components and trigger protective host responses, and implicated in mediating chronic inflammatory diseases, including obesity and diabetes [2][3][4][9]. Indeed, they also recognize endogenous ligands (i.e., endogenous damage-associated molecular patterns, DAMPs), such as saturated fatty acids and necrotic cell products [10,11].
Interestingly, the activation of TLR4, one of the best known TLR members, expressed in several tissue cells, such as cells of the pancreatic islets (i.e., β-cells and resident macrophages), can induce insulin resistance, pancreatic β-cell dysfunction, and alteration of glucose homeostasis [2,[12][13][14]. TLR4 activation also seems to be exacerbated by the low grade of circulating endotoxemia (circulating lipopolysaccharide, LPS) correlated with the altered gut microbiota that characterizes subjects with metabolic diseases such as T2DM [2]. In particular, it has recently been demonstrated that LPS inhibits β-cell gene expression of insulin in a TLR4-dependent manner and via Nuclear Factor (NF)-κB signaling in pancreatic islets [15]. This crucial role of TLR4 has been confirmed by data demonstrating that deletions or mutations in the TLR4 gene (MIM: 603030) protect against fatty acid-induced insulin resistance and diet-induced obesity [16][17][18].

Many single nucleotide polymorphisms (SNPs) have been described in the TLR4 coding region. The +896 A>G SNP (rs4986790) induces the Asp299Gly amino acid substitution, modifying the normal structure of the extracellular region of TLR4. Thus, different +896 TLR4 genotypes may be associated with decreased ligand recognition or protein interaction and decreased responsiveness to LPS [9,19]. Interestingly, a recent meta-analysis showed a significant association between the +896 TLR4 SNP and T2DM and metabolic syndrome in Caucasians [20]. A significant association has also been reported between the Asp299Gly polymorphism of the TLR4 gene and early onset of diabetic retinopathy: T2DM patients carrying AG/GG genotypes showed an increased risk of developing retinopathy compared with patients carrying the AA genotype [21]. Overall, although the Asp299Gly polymorphism of the TLR4 gene is a well-recognised genetic risk factor in some age-related diseases [9], only few data have been reported for T2DM complications, such as neuropathy, retinopathy, ischemic heart disease, and coronary artery disease [22][23][24][25][26][27], and no data have been reported on chronic kidney disease and other T2DM-related cardiovascular diseases, such as carotid arterial and cerebrovascular diseases and lower limb arteriopathy, in Caucasian populations. In order to clarify the weight of the +896 TLR4 A/G polymorphism as a potential predisposing or protective genetic factor in the major T2DM complications (neuropathy, nephropathy, chronic kidney failure, chronic arterial disease of the lower limbs, carotid arterial diseases, and ischemic heart diseases) in the Caucasian population, we analysed 367 patients affected by T2DM, 55.6% of whom had complications.

Subject Populations

Three hundred and sixty-seven diabetic patients were enrolled. Informed consent was obtained from each subject. The study protocol was approved by the Ethics Committee of the INRCA Hospital. T2DM was diagnosed according to the American Diabetes Association criteria [28]. Inclusion criteria were body mass index (BMI) <40 kg/m², age from 35 to 85 years, and the ability and willingness to give written informed consent and to comply with the requirements of the study. Information collected included data on vital signs, anthropometric factors, medical history, and behaviours as well as physical activity. DNA was collected from participants providing consent to use genetic material (100 percent of the sample).
The presence/absence of diabetic complications was assessed as follows: diabetic retinopathy by fundoscopy through dilated pupils and/or fluorescence angiography; renal impairment, defined as an estimated glomerular filtration rate (eGFR) <60 mL/min per 1.73 m² evaluated using the Cockcroft-Gault equation [29]; neuropathy established by electromyography; ischemic heart disease defined by clinical history and/or ischemic electrocardiographic alterations; peripheral vascular disease, including atherosclerosis obliterans and cerebrovascular disease, on the basis of history, physical examination, and the Doppler velocimetry technique. Hypertension was defined as a systolic blood pressure >140 mmHg and/or a diastolic blood pressure >90 mmHg, measured while the subjects were sitting and confirmed on at least three different occasions. BMI was calculated as weight (kg)/height (m²). All the selected subjects were Italian and consumed a Mediterranean diet. Overnight fasting venous blood samples of all subjects were collected from 8:00 to 9:00 a.m. in plain, EDTA, heparin, and citrate tubes. The samples were either analyzed immediately or stored at −80 °C for no more than 30 days.

Laboratory Assays

Blood concentrations of fasting glucose, low- and high-density lipoprotein (LDL and HDL) cholesterol, and triglycerides were measured using commercially available kits on a Roche/Hitachi 912 (Roche Diagnostics, Switzerland). Insulin, C-reactive protein (CRP), and apolipoprotein A1 and B100 (Apo-A1 and Apo-B) levels were assessed using immunochemical methods and an Access Analyzer (Beckman Coulter, CA, USA). Creatinine was measured by the Jaffé method, fibrinogen by the Clauss method, and urea by a colorimetric method. Glycosylated hemoglobin (HbA1c) levels were measured in all subjects using an HPLC auto-analyzer Adams HA 8160 (Menarini, Italy). All these determinations were performed according to the manufacturers' specifications, and quality control was within the recommended precision for each test.

Assessment of Insulin Resistance

Insulin resistance was estimated using the homeostasis model assessment (HOMA-IR) as described by Matthews et al. [30] and validated by several authors for epidemiological studies [18]. HOMA-IR was calculated as the product of fasting glucose (mmol/L) and fasting insulin (mU/L) divided by 22.5.

Genotyping

DNA samples of the 367 diabetic subjects were extracted from peripheral blood samples collected in tripotassium EDTA and purified using a QIAamp Blood DNA Maxi kit (Qiagen, Dusseldorf, Germany). Samples were genotyped for TLR4 Asp299Gly (+896 A/G TLR4; rs4986790). The procedure for detecting the +896 A/G TLR4 SNP was based on Restriction Fragment Length Polymorphism PCR (RFLP-PCR), restriction cleavage with NcoI (New England Biolabs, USA), and separation of the DNA fragments by electrophoresis, as previously described [31].

Statistical Analysis

Data are reported as mean (Standard Deviation) for continuous variables and as percentages (%) for categorical variables. Skewed distributions (triglycerides, fasting insulin, high-sensitivity C-reactive protein, fibrinogen, and creatinine) were log-transformed before statistical analyses to achieve a normal distribution. Differences between patients without complications and those having at least one complication were compared using Student's t-test with Bonferroni correction for continuous variables and the χ² test for categorical variables.
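Both clinical indices above reduce to one-line formulas. The following is a minimal Python sketch of the HOMA-IR calculation as defined in the text and of the standard Cockcroft-Gault creatinine-clearance equation; the study itself used SPSS, the function names are ours, and the example values are illustrative rather than study data.

```python
def homa_ir(fasting_glucose_mmol_l, fasting_insulin_mu_l):
    """HOMA-IR as defined in the text: (glucose [mmol/L] * insulin [mU/L]) / 22.5."""
    return (fasting_glucose_mmol_l * fasting_insulin_mu_l) / 22.5

def cockcroft_gault_crcl(age_years, weight_kg, creatinine_mg_dl, female):
    """Standard Cockcroft-Gault creatinine clearance (mL/min); the exact form
    used for the eGFR in this study is assumed, not quoted from the paper."""
    crcl = ((140 - age_years) * weight_kg) / (72.0 * creatinine_mg_dl)
    return crcl * 0.85 if female else crcl

# Illustrative values only (not study data):
print(round(homa_ir(7.0, 12.0), 2))                        # -> 3.73
print(round(cockcroft_gault_crcl(66, 80, 1.1, False), 1))  # -> 74.7
```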
In order to create a dependent binary variable for the subsequent logistic regression model, we considered neuropathy, lower limb arteriopathy, and the major cardiovascular events (MACE) (i.e., carotid arterial diseases, cerebrovascular, and ischemic heart diseases) complications together ("0" = no complications; "1" = at least one of the three complications). Logistic regression models were performed to estimate the adjusted risk of having at least one of the three complications for AG carriers. Results are expressed as odds ratios (OR) with 95% CI. Two covariates, urea and LDL cholesterol, were strongly correlated with creatinine and Apo-B, respectively (Pearson's r > 0.5). They were removed as they had less explanatory power than the other two. Data were analyzed with the SPSS/Win program (version 19.0; SPSS Inc., Chicago, IL). Probability values lower than 0.05 were considered statistically significant. All reported P values are two-tailed.

Patient Characteristics

The 367 T2DM patients had a mean (Standard Deviation) age of 66 (7.9) years, were 56.7% male (n = 208), and 55.6% of them were affected by T2DM complications, including neuropathy, nephropathy, chronic kidney failure, chronic arterial disease of the lower limbs, and MACE (carotid arterial diseases, cerebrovascular and ischemic heart diseases). Table 1 reports the comparison of anthropometric and biochemical characteristics between patients having at least one complication and those without complications (204 versus 163; 55.6% versus 44.4%, respectively). We observed that complicated patients were more often male than those without complications. In addition, they were older and showed higher values of biochemical variables, including fasting glucose, creatinine, urea, HbA1c, total and LDL cholesterol, and Apo-A1.

Anthropometric and Biochemical Characteristics of T2DM Patients Stratified by A/A and A/G TLR4 Genotypes

Genotyping the T2DM patients for the +896 A/G TLR4 SNP, we observed that they predominantly had the A/A wild-type genotype (91.5%; 336). The A/G genotype was observed in only 31 patients (8.5%), while nobody had the G/G genotype. Stratifying the T2DM patients according to these genotypes, no statistically significant differences were detected in their anthropometric and biochemical characteristics, with the exception of total and LDL cholesterol values. Higher values of total and LDL cholesterol were assessed in patients with the A/A genotype, with P = 0.052 and P = 0.044, respectively (Table 2), evidencing a borderline association.

The Role of TLR4 Genotypes in T2DM Complications

With the aim of evaluating the role of the +896 A/G TLR4 SNP in the predisposition to the T2DM complications observed in the population studied, we compared their prevalence in individuals with the A/A TLR4 genotype versus those with the A/G TLR4 genotype. A significantly higher prevalence was detected for neuropathy, lower limb arteriopathy, and MACE in A/G TLR4 patients compared with A/A TLR4 patients (see Table 3). In order to analyse the association between complications and the TLR4 polymorphism, we considered neuropathy, MACE, and lower limb arteriopathy together ("0" = no complications; "1" = at least one of these three complications). The association between complications in the previous three significant variables and the TLR4 polymorphism was significant (χ² = 10.697; P = 0.01).
Thus, we calculated the crude risk of being complicated for patients with the AG genotype, obtaining the following results: OR = 3.403; 95% CI 1.577-7.344. In addition, we compared the mean values of the main studied parameters between the complicated (at least one of the three complications) and noncomplicated groups, using Student's t-test with Bonferroni correction for continuous variables and the χ² test for categorical variables. We found the following significant parameters: HbA1c, urea, creatinine, LDL cholesterol, Apo-A1, and Apo-B (P < 0.001). Furthermore, we applied a binary logistic regression model to estimate the adjusted risk of at least one of the three complications for AG carriers (Table 4). The urea (azotemia) and LDL cholesterol variables were removed from the model as redundant (Table 4).

Discussion

T2DM is today considered a worldwide health problem, as demonstrated by the continuous increase of its incidence, essentially linked to obesity and overweight, which is growing constantly in various populations, such as the Caucasian populations [1]. In addition, T2DM individuals have enhanced mortality and morbidity due to T2DM-related complications, particularly including chronic arterial diseases of the lower limbs, carotid arterial diseases, cerebrovascular, coronary and ischemic heart diseases, neuropathies, nephropathy, and chronic kidney failure [1]. This implies the necessity of developing new strategies for the prevention and therapy of both T2DM and its complications, even if diet and physical activity remain the main basis of their prevention and management. This situation is leading researchers to identify appropriate genetic and molecular factors as potential biomarkers and therapeutic targets, which might permit the early identification of individuals at high risk for both T2DM and its complications. Attention has particularly focused on inflammatory/immune pathways, including the TLR4 pathway, since the occurrence of T2DM and its complications is now considered the result of a state of chronic, systemic, low-grade inflammation, in accordance with the metainflammation hypothesis [2][3][4].

The focus on the TLR4 pathway derives from various literature data. It has been demonstrated that dietary macronutrients (i.e., fats and sugars) are able to activate this pathway [2]. In addition, long-term intake of diets rich in fats and carbohydrates has been shown to provoke exacerbated expression and activity of TLR4 in human monocytes, along with increases in superoxide generation, NF-κB activity, and proinflammatory factors, and with a significant correlation with HbA1c levels [32][33][34][35][36][37][38]. Other studies performed in animal models showed that overnutrition or pathogen infections induce increased TLR4 expression in tissues and cell types modulating energy homeostasis and insulin action, including adipose tissue, pancreatic islets, muscle, gut, endothelial and smooth muscle cells of arteries, brain, kidney, and liver [2,[34][35][36]. As a result, insulin resistance, pancreatic β-cell dysfunction and alteration of glucose homeostasis, increased production of reactive oxygen species by polymorphonuclear leukocytes, and modulation of natural killer cell functions have been found [2,[12][13][14][37][38][39][40][41]. The immune dysfunctions observed seem to explain the high susceptibility to infections of the lower respiratory and urinary tracts, skin, and mucous membranes observed in T2DM cases [42].
Taken together, these conditions determine and feed, as a vicious cycle, a chronic systemic low-grade inflammation, which seems to be responsible for the onset of metabolic diseases such as T2DM and its related complications [43]. In contrast, it has been demonstrated that insulin reduces LPS-induced TLR4 expression and activation and oxidative stress [44,45]. In addition, recent investigations support the idea of the involvement of intestinal bacteria in the onset of T2DM and its complications. Specific intestinal bacteria seem to operate as LPS sources, mediating LPS release and/or bacterial translocation into the circulation due to a vulnerable microbial barrier and increased intestinal permeability, and to play a role in systemic inflammation and the onset and progression of T2DM. Pancreatic cells express significant levels of TLR4, which recognizes LPS or intestinal bacteria [45,46].

Based on this recent evidence, TLR4 seems to act as a hub in the chronic inflammation observed in T2DM complications, as recently affirmed by Dasu's group [47]. In addition, its activity is modulated by genetic variations, principally SNPs, such as +896 A>G. This SNP determines a blunted immune response against viral and bacterial infections or other exogenous (fats and sugars) or endogenous molecules, characterized by a reduced production of proinflammatory cytokines [9,19]. A recent meta-analysis evidenced a significant association of AG/GG genotypes with decreased metabolic disorder risk [20]. In contrast, few and inconsistent literature data have been reported on its capacity to be a predisposing or protective genetic factor for T2DM-related complications, that is, neuropathy, retinopathy, ischemic heart disease, and coronary artery disease [22][23][24][25][26][27]. No literature data exist on the role of TLR4 in T2DM-associated chronic kidney disease and other T2DM-related cardiovascular diseases, such as carotid arterial and cerebrovascular diseases and lower limb arteriopathy, in Caucasian populations, although it is well recognised in other age-related diseases [9].

Thus, the key aim of the present study was to analyze the weight of the +896 TLR4 A>G SNP as a potential predisposing or protective genetic risk factor in the major T2DM complications, evaluating a population of 367 patients affected by T2DM, 55.6% of whom had complications, including neuropathy, nephropathy, chronic kidney failure, chronic arterial disease of the lower limbs, and MACE. Complicated T2DM patients were predominantly male (62.3%), older (67.58 versus 64.03 years in noncomplicated cases), and showed higher values of biochemical variables, such as fasting glucose, creatinine, urea, HbA1c, total and LDL cholesterol, and Apo-A1 (Table 1). In addition, 91.5% of cases had the A/A TLR4 genotype and 8.5% had the A/G genotype, while nobody had the G/G genotype. No associations were observed between the A/A and A/G TLR4 genotypes and the patients' anthropometric and biochemical characteristics, with the exception of total and LDL cholesterol values, for which a borderline association was evidenced (Table 2). Evaluating the role of the +896 A/G TLR4 SNP in the predisposition to T2DM complications, however, interesting data were detected. In particular, diabetic carriers of the AG genotype had a greater susceptibility to neuropathy, lower limb arteriopathy, and MACE, as reported in Table 3.
Their cumulative risk was significant (P = 0.01), with a threefold risk of developing neuropathy, lower limb arteriopathy, and major cardiovascular events in AG cases compared to AA cases (crude OR = 3.403; 95% CI: 1.577-7.344). In addition, we applied a binary logistic regression model to estimate the risk, adjusted for confounding variables (HbA1c, urea, creatinine, LDL cholesterol, Apo-A1, and Apo-B), of having at least one of the three complications for AG carriers. The adjusted OR was 3.788 (95% CI: 1.642-8.741), as shown in Table 4. This underlines the remarkable role of this SNP in inducing T2DM complications independently of other biological risk factors known to favour the onset of these complications.

Limitations

The major limitations of the present study are the relatively small sample size and the need to confirm and validate our data in larger populations of different genetic backgrounds. Despite these limitations, our study is the first to have analyzed the weight of the +896 TLR4 A/G polymorphism as a potential predisposing or protective genetic risk factor in the major T2DM complications (neuropathy, nephropathy, chronic kidney failure, chronic arterial disease of the lower limbs, carotid arterial diseases, and ischemic heart diseases) in Caucasian populations. However, further studies are required to obtain more conclusive results and to consider the rs4986790 TLR4 SNP a biomarker and the TLR4 pathway a target for new therapeutic treatments aimed at preventing or delaying T2DM complications.

Conclusions

In the light of the results obtained, a possible explanation for the significant predisposition to the development of diabetic complications in AG versus AA genotype carriers is a compromised immune control against infectious diseases. Supporting this hypothesis, we recently demonstrated that the genetic control of infectious diseases has a significant role in determining the different trajectories to reach longevity in centenarians [48]. Thus, genetic background, and consequently genetic factors, might have a key role in both the onset and progression of T2DM-related complications. As a consequence, our results might open new perspectives for the analysis of susceptibility factors and prevention of T2DM-related complications. These findings might also prompt studies on pharmacological strategies to prevent or delay the development of T2DM complications in predisposed subjects. In addition, they lead to considering the rs4986790 TLR4 SNP an optimal biomarker to identify at-risk individuals for T2DM and T2DM-related complications. Thus, it may be indicative of the prevalence of complications in T2DM patients.
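To make the reported effect sizes concrete, the sketch below shows the standard 2x2 odds-ratio and Wald confidence-interval construction in Python. The cell counts are hypothetical (the paper reports 31 AG and 336 AA carriers but not the full 2x2 breakdown); they are chosen only so that the output lands near the reported crude OR of 3.403.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI for a 2x2 table:
    a = exposed with outcome, b = exposed without,
    c = unexposed with outcome, d = unexposed without."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)   # SE of ln(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, (lo, hi)

# Hypothetical counts: 22/31 AG carriers complicated, 140/336 AA carriers complicated.
or_, (lo, hi) = odds_ratio_ci(22, 9, 140, 196)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")  # ~3.42 (1.53-7.66)
```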
The positive sentinel lymph node biopsy: can we predict which patients will benefit from further surgery?

Introduction: Although sentinel lymph node biopsy (SLNB) is the gold standard for clinically and radiologically negative axillae in breast cancer, the subsequent management of positive nodes is currently under scrutiny, with the Z0011 trial publication causing much debate. Our aim was to determine whether there were any predictive factors in our cohort of patients that could assist in the decision of whether the patient required a further axillary lymph node clearance.

Methods: A retrospective analysis of a prospectively maintained database of patients who underwent SLNB and their histology was performed. Univariate and multivariate analyses were performed to identify any predictive factors.

Results: Our Breast Unit performed 457 SLNB over the three years. The mean age of patients was 61.9 years (range 31-89 years). Of these 457 SLNB, 122 (26.7%) were positive for metastatic involvement, and of these patients only 34% were found to have further lymph node involvement after axillary lymph node clearance (ALNC). Only 8% of our total patient cohort had non-sentinel node involvement at completion ALNC. Using univariate analysis, lymphovascular invasion (p = 0.009), grade (p = 0.007), and size (p = 0.006) were all significant predictors of a positive ALNC.

Conclusion: Only 8% of the total number of patients had an additional positive lymph node found in their completion axillary clearance. Our results add to the existing knowledge on the subject, and show that there are certain factors which should be carefully considered pre-operatively in order to help inform patient and surgeon choice regarding management of the axilla.

A further criticism of the Z0011 trial was whether the lower axilla was included in the radiotherapy field for the breast irradiation after wide local excision. In addition, it may be difficult to accurately extrapolate the results of this study to many of the patients treated in a UK Breast Unit, as many of these patients would not fall within the inclusion criteria for Z0011, especially those who have a mastectomy. Patients in our unit who have clinically negative (no palpable axillary lymph nodes) and radiologically negative (ultrasound-determined normal morphology or histologically normal following ultrasound-guided core biopsy) axillae undergo SLNB as per current United Kingdom guidelines: Association of Breast Surgeons [3] and National Institute for Clinical Excellence [4]. We wished to identify any factors that could be used to predict which patients with a positive SLNB would benefit from further ALNC, and which patients could avoid undergoing the procedure unnecessarily.

Methods

Using a prospectively maintained database of all the patients who had undergone SLNB between 1st January 2009 and 31st December 2011, we retrospectively analysed the histology (Haematoxylin and Eosin staining) results from Anglia Ice, our Trust's computerised results software. Patients who underwent neoadjuvant chemotherapy were excluded from this cohort, as during this time period our unit was reviewing the management pathway of these patients. We compared SLNB results with the subsequent results from the patients who underwent an ALNC. We used SPSS Version 20 for univariate analysis, with chi-squared analysis or Fisher's exact test depending on sample size, and logistic regression to identify whether any multiple factors were significant.
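The univariate testing rule described above (chi-squared analysis, falling back to Fisher's exact test for small samples) can be illustrated compactly. The study used SPSS; the Python/SciPy sketch below is an assumed equivalent with placeholder counts, switching to Fisher's exact test when any expected cell count falls below five.

```python
from scipy.stats import chi2_contingency, fisher_exact

# 2x2 table: rows = factor present/absent, cols = ALNC positive/negative.
# Placeholder counts, not the study's data.
table = [[20, 30],
         [17, 43]]

chi2, p_chi2, dof, expected = chi2_contingency(table)
if expected.min() < 5:            # small expected counts -> Fisher's exact test
    _, p = fisher_exact(table)
else:
    p = p_chi2
print(f"p = {p:.3f}")
```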
Results

Our Breast Unit performed 457 SLNB over the three years. The mean age of patients was 61.9 years (range 31-89 years). Of these 457 SLNB, 122 (26.7%) were positive for metastatic involvement. As expected, the original tumour histology reflected that of the general population, with 76% having invasive ductal carcinoma; the remaining histological breakdown is shown in Table 1. Table 2 shows further patient and tumour characteristics.

Ninety percent (110/122) of patients with a positive SLNB subsequently underwent an ALNC (Figure 1), and 34% of these (n = 37) had a positive lymph node found at histological examination of the axillary clearance. Of the 12 patients who did not undergo further axillary surgery, for 5 it was an MDT decision, 4 patients decided that they did not want further axillary surgery, and 1 was treated with radiotherapy to the axilla. For one patient the positive SLN was extra-axillary, located in the tail of the breast, with the axillary nodes appearing normal on detailed ultrasound scanning of the axilla. In the last case the patient had previously undergone an ALNC. Figure 2 shows the original size of SLNB metastases in the patients who underwent a further ALNC; the majority of patients with a subsequent positive ALNC (92%; n = 34) had macrometastases on their original SLNB. Our yield from ALNC was a mean of 17 lymph nodes retrieved in positive ALNC (range 1-36) and 13 in negative ALNC (range 2-26).

We also reviewed the patients whose SLNB histology showed extra-nodal spread, one of the exclusion criteria for Z0011. This was identified in 28 of the 83 patients with macrometastases (33%). Eighteen (64%) of these had a positive ALNC. Two patients did not undergo an ALNC. We further analysed the data to try and identify which patients with a positive SLNB were most likely to have further axillary lymph node disease. Using univariate analysis (Table 3), lymphovascular invasion (p = 0.009), Grade 2 (p = 0.007), and size between 2 and 5 cm (p = 0.006) were all significant predictors of a positive ALNC; however, age (p = 0.86), oestrogen receptor status (p = 0.38), and human epidermal growth factor receptor 2 (Her2) status (p = 0.84) were not significant.

Discussion

There is no doubt that SLNB should be the primary axillary surgery in breast cancer patients with clinically and radiologically node-negative axillae; the NSABP B32 trial [6] confirmed that there is no difference in overall survival and disease-free survival between sentinel node-negative patients undergoing SLNB alone and those who underwent completion ALNC at 8 years of follow-up. The other question that needs answering is: what should be done with the patient who has micrometastases in their SLNB? There are trials which have tried to address this. The IBCSG 23-01 trial showed no significant difference in recurrence or 5-year disease-free survival when patients with one or more SLN with a micrometastasis were randomized to ALNC or not [7]. Similarly, the AATRM study did not show any differences in disease-free survival or cancer-related deaths when patients with sentinel node micrometastases were randomized to ALNC or clinical follow-up [8]. The St Gallen International Expert Consensus on the Primary Therapy of Early Breast Cancer 2013 also acknowledged that the Z0011 trial has shown that patients undergoing breast conservation therapy and whole breast radiation do not require ALNC for up to two SLN with macrometastases [9].
As such, the American Society of Clinical Oncology updated its guidelines in March 2014 [10] to recommend that, if women are undergoing breast conservation therapy with whole breast irradiation, ALNC is not required for women with up to two positive sentinel nodes. Systematic reviews of current randomised controlled trials and observational studies have shown no inferiority in overall survival and disease-free survival [11] in patients with a positive SLNB undergoing SLNB alone, and no inferiority in axillary recurrence rate [12]. The reduction in morbidity for those patients undergoing SLNB alone is significant.

The limitations of performing prospective studies and trials to address the node-positive SLNB and its further management are well documented, with low accrual rates and reduced disease events and deaths affecting sample sizes. Until there are more prospective randomised trials, retrospective cohort studies such as this one will help fill the gap, providing further evidence and assisting multi-disciplinary team (MDT) decisions on how best to manage patients with a positive SLNB, especially in the case of the patient with a micrometastasis or isolated tumour cells (ITC). A large French multicentre retrospective cohort study [13] with over 8000 patients concluded that there was no difference in overall and disease-free survival in patients who had micrometastases and ITC compared to those who had a node-negative SLNB; however, there was an increased frequency of axillary recurrence. Similarly to our findings, although on a slightly smaller scale, a Turkish group [14] found that tumour size and lymphovascular invasion were significant primary tumour-related prognostic determinants.

Alternatively, there are six nomograms that exist to aid prediction of lymph node involvement: Cambridge, Memorial Sloan-Kettering Cancer Center (MSKCC), Mayo, MDA, Tenon, and Stanford [15,16]. They have been used in different populations, and it is still unclear which is the best and would be most suitable for a UK population. The Stanford calculator [17] uses only three variables (lymphovascular invasion, grade, and size of tumour), similar to our findings in this cohort of patients. Its accuracy rate was found to be 77%. Unfortunately, both the MSKCC and the Stanford calculators (the only online calculators) have limitations in that their usefulness is limited to patients with micrometastases and isolated tumour cells [18]. Currently these nomograms are not used routinely in many breast units.

Our results add to the existing knowledge on this subject, and show that there are certain factors which should be carefully considered pre-operatively in order to help inform patient and surgeon choice regarding management of the axilla. However, it is important to note that these results are dependent on accurate pre-operative axillary staging with ultrasound and ideally hollow needle core biopsy, although even fine needle aspiration (FNA) has been shown to be highly specific and sensitive [19].

Conclusion

In our series 27% of the SLNBs undertaken were positive for metastatic spread, and of these patients only 34% were found to have further lymph node involvement after ALNC. When considering the whole cohort of patients, only 8% had non-sentinel node involvement at completion ALNC. Our results show that the majority of our patients having a completion ALNC do not benefit from this additional surgery, because in the majority of cases the remaining lymph nodes were all free of malignancy.
Lymphovascular invasion seems to be an independent predictive factor for a positive ALNC, with size of tumour and grade also being significant. However, despite a large total population, the individual group sizes are small, limiting the statistical power of these results. This is not enough to base a pre-operative decision on, but it may allow for selectively choosing cases that may benefit from intra-operative analysis (frozen section or one-step nucleic acid amplification (OSNA)), namely those cases which have the predictive factors suspicious for requiring an ALNC, as well as allowing improved patient pre-operative information and increased theatre efficiency. This information can be used, together with emerging guidelines, to assist the MDT and patient in making the appropriate decision.
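Nomograms such as the Stanford calculator discussed above amount to a logistic model over a few variables. As a purely schematic illustration of that form, the sketch below uses entirely hypothetical coefficients; it is not the study's model nor any published nomogram.

```python
import math

def non_sln_risk(lvi: bool, grade: int, tumour_size_cm: float) -> float:
    """Schematic logistic predictor of non-sentinel-node involvement.
    The coefficients below are hypothetical placeholders, NOT fitted
    values from this study or from any published nomogram."""
    b0, b_lvi, b_grade, b_size = -2.0, 0.9, 0.4, 0.3
    logit = b0 + b_lvi * lvi + b_grade * grade + b_size * tumour_size_cm
    return 1.0 / (1.0 + math.exp(-logit))   # logistic link

print(f"{non_sln_risk(True, 2, 3.0):.2f}")  # ~0.65 with these placeholders
```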
Desquamative Gingivitis

Desquamative gingivitis (DG) is characterized by erythema, epithelial desquamation, erosion of the gingival epithelium, and blister formation on the gingiva. DG is a clinical feature of a variety of diseases or disorders. Most cases of DG are associated with mucocutaneous diseases, the most common ones being lichen planus, mucous membrane pemphigoid, and pemphigus vulgaris. Proper diagnosis of the underlying cause is important because the prognosis varies, depending on the disease. This chapter presents the underlying etiologies that are most commonly associated with DG. The current literature on the diagnostic and management modalities for patients with DG is reviewed.

Diagnosis

It is very important to accurately diagnose the diseases or disorders causing DG because the prognosis varies widely, depending on the cause. Although PV rarely occurs, it is a potentially life-threatening disease, so it is important to diagnose and treat it in its early stages. Airway obstruction due to laryngeal scarring and blindness due to conjunctival scarring would certainly deteriorate the quality of life of MMP patients. Early recognition and treatment of the lesions can prevent serious complications. Histopathological examination and direct immunofluorescence (DIF) testing of biopsied tissues are often required to determine the underlying etiology of DG [6][7][8][10]. For histopathological study, the biopsy site should be selected from an area of intact epithelium and include perilesional tissue. This may require two separate biopsies, one lesional and one non-lesional. The perilesional tissue or non-lesional biopsy site should show a nonspecific inflammatory response in suspected non-autoimmune disorders such as LP, erythema multiforme, foreign body gingivitis, factitious disorder, and contact stomatitis [1,7,10]. In contrast, the DIF test should be performed on normal-appearing tissue rather than perilesional sites in suspected autoimmune diseases such as MMP, PV, and chronic ulcerative stomatitis [1,7,10,46,47]. Since immune deposits in autoimmune bullous disease are present in all oral tissue, a positive result from DIF tests may be obtained from biopsies taken from distant normal mucosa [46]. The DIF test is considered to be the best diagnostic evidence for MMP, PV, chronic ulcerative stomatitis, and other autoimmune disorders; therefore, DIF testing is often essential for obtaining a final diagnosis, since clinical features may be very similar [6-8, 10, 47, 48]. On the other hand, DIF findings are supportive but not diagnostic for LP, psoriasis, lupus erythematosus, and mixed connective tissue disease, because the DIF features of these diseases can also be found in other conditions [6,10,48]. A negative result from DIF tests should be anticipated in biopsies of contact stomatitis [1]. Biopsy sites appearing to have an intact epithelial surface should be selected. If lesions are present at several mucosal sites, including the gingiva, it is usually best not to use the gingiva for the biopsy [1,49,50]. However, in approximately half of DG cases, the gingiva is the only site of involvement [50,51]. In these cases, the gingiva should be selected for the biopsy. Rees and Burkhart [1] described six steps to be considered when a gingival biopsy is required in DG patients. They highlight the importance of careful site selection for gingival biopsies in order to obtain diagnostic tissue samples.
An inadequate surgical site selection may easily lead to the loss of the gingival epithelium, since the biopsied gingival tissue is thin and tends to be fragile. The stab-and-roll biopsy technique is a procedure specially designed to prevent the epithelium from being removed from the biopsy specimen [1,46,52]. This biopsy technique prevents the occurrence of lateral shear forces. The operator applies gentle pressure on the gingiva with the tip of a #15 blade until the bone surface is reached, and the blade is then rolled from the tip along the entire cutting edge. If a larger specimen is needed, the tip of the blade can be repositioned and the rolling stroke extended. With the stab-and-roll technique, the gingival epithelium of DG patients was well maintained and its relationship with the underlying connective tissue remained diagnostic [1,46,52].

Lichen planus (LP)

LP is a relatively common, T-cell-mediated chronic inflammatory disease of unknown etiology. LP commonly occurs in middle-aged and older people, and women are affected more frequently than men [53,54]. The lesions are found in multiple regions, including the skin, genitalia, or oral mucosa, although they are confined to the gingiva alone in some cases [53][54][55][56] (Figures 2 and 3). In many instances, atrophic, ulcerative, and bullous forms are combined as erosive LP. The reticular, papular, and plaque-like forms of LP are often asymptomatic, whereas erosive forms may be quite painful when a patient is eating spicy foods or performing oral hygiene procedures [53][54][55][57] (Figures 4-6). For these reasons, erosive LP usually requires treatment. Histopathologically, specimens may demonstrate hyperortho- or hyperparakeratosis, degenerative changes of the basal cells, and a band-like subepithelial infiltrate composed of lymphocytes [11] (Figure 7). When available, DIF testing is also valuable in establishing the diagnosis, although DIF findings are only suggestive, rather than diagnostic, of LP [6,10,48,58]. Characteristic DIF findings in oral LP include a linear pattern of anti-fibrin or anti-fibrinogen in the basement membrane zone and, to a lesser degree, the presence of IgM or IgG deposits in cytoid bodies [6,10,48,58] (Figure 8).

Mucous membrane pemphigoid (MMP)

MMP is an autoimmune, subepithelial blistering disease that affects mucous membranes. Multiple target antigens of MMP have been identified among cell-to-basement membrane adhesion components by the presence of circulating autoantibodies in patients' serum. These antigens include the bullous pemphigoid antigens (BP180 and BP230), α6β4 integrin, type VII collagen, and laminin 332 [62,63,66,67]. The loss of cell-to-basement membrane adhesion caused by these antibodies may result in subepithelial blistering. Histopathologically, MMP is characterized by subepithelial bulla formation [11] (Figure 14). On DIF testing, the deposition of complement component C3, IgG, or other immunoglobulins is observed in a linear pattern along the basement membrane zone [48,62] (Figure 15).

Pemphigus vulgaris (PV)

PV is an autoimmune blistering disease characterized by acantholysis in the epithelium. Most patients with PV are middle-aged or elderly [68][69][70][71]. The disease is equally common in men and women [71], and it is a potentially life-threatening disease [72]. Characteristics of PV lesions are flaccid bulla formation, erosion, and ulceration in the skin or mucosa [1,68] (Figures 16-19).
PV frequently begins with oral lesions and later progresses to involve the skin [73,74] (Figure 20). Oral lesions are the most common finding and develop in almost all patients with PV [68,71]. Lesions may affect the gingiva, and occasionally the gingiva is the only site of involvement in early lesions [69,[73][74][75]. Circulating PV autoantibodies in the serum are pathogenic, and they can cause acantholysis in the epithelium [76]. More than 50 proteins have been reported to react specifically with pemphigus IgG autoantibodies [77], but it has been determined that the principal autoantigens in pemphigus patients are desmogleins, which are components of desmosomes in the epidermis and mucous membranes [78,79]. Almost all patients with PV lesions restricted to the oral mucosa have only anti-desmoglein 3 antibody in the serum, whereas patients with advanced cases involving the oral mucosa and skin may have both anti-desmoglein 3 and anti-desmoglein 1 antibodies [73,74]. Histopathologically, PV is characterized by acantholysis and a suprabasilar split in the epithelium [11] (Figure 21). Tzanck cells are often found in intraepithelial clefts [80]. On DIF examination of PV patients, the deposition of IgG and/or C3 is found in the intercellular spaces of the epithelium [48] (Figure 22).

Contact hypersensitivity reactions as a cause of DG

Localized or generalized DG is sometimes elicited by contact hypersensitivity reactions to various foodstuffs, preservatives, oral hygiene products, and dental restorative materials [11,25,[35][36][37][38][39][81]. Toothpaste hypersensitivity reactions may occur at various oral or perioral sites, but the gingiva is the most common site of onset [24,35,36,39,81] (Figure 23). The erythema has been described as a "velvet-like appearance of the gingiva" or "fiery red gingiva" [35]. Epithelial sloughing is the most common irritant effect associated with toothpastes and mouthwashes [1,2,35,82] (Figure 24). Allergy to dental restorative materials usually causes localized DG in gingival or other mucosal tissues directly contacting the allergen [1,11]. Gingival contact hypersensitivity lesions are usually not biopsied. However, if a biopsy is performed, these lesions present non-specific histopathologic findings with submucosal perivascular inflammatory cell infiltration [11,35,36]. The existence of focal granulomatous inflammation and/or multinucleated giant cells in the deep layer of the lamina propria has also been described in some studies of contact hypersensitivity stomatitis [25,81]. DIF is not indicated because it is routinely negative [11].

To treat contact hypersensitivity reactions, the allergen should be identified and removed. To do so, patients should be questioned regarding the type(s) of oral hygiene products they use, and a 1-2-week food diary may help identify causative agents [35]. Patch testing may be required to identify the allergen or to confirm a specific allergen in a dental hygiene product or in a dental restoration. Patients are considered to have allergic reactions to a relevant allergen if their patch test results are positive [35,81]. However, the diagnosis of contact hypersensitivity reactions may be confirmed simply by the discontinuation of the causative agent(s) resulting in the remission of clinical signs and symptoms [35,36,81].
Managing DG patients

The specific disease or disorder causing DG, the severity of the gingival lesions, the presence or absence of extraoral involvement, and the medical history of the patient are the key factors in the selection of a topical or systemic immunosuppressive therapy [1,2,69,83]. Patients diagnosed with an autoimmune disease should be closely followed because they may require immediate referral to other health care experts, especially if they develop extraoral lesions. After MMP is diagnosed from DG or concomitant lesions, patients should undergo examination by medical specialists, including an ophthalmologist and an otolaryngologist, and the presence or absence of extraoral lesions should be determined. PV patients with exclusively oral lesions should be followed closely and referred to other experts immediately if they develop lesions elsewhere on the body.

Management of the specific disease or disorder causing DG may best be provided by a specialist in oral medicine, oral pathology, periodontics, or oral surgery, but the dentist may still be responsible for maintaining the dental and periodontal health of the patient. This is important because periodontal and dental problems are often observed in DG patients, yet the literature contains minimal information regarding the periodontal and dental management of these individuals. Plaque-induced gingivitis is almost universal in patients with symptomatic DG, and an effective therapeutic protocol should include non-surgical periodontal therapy consisting of oral hygiene instruction, scaling, and root planing [2,[84][85][86][87][88][89] (Figure 25). We believe that excessively vigorous scaling and root planing can be unnecessarily damaging to DG-affected lesions, and we prefer a sequential gingival management approach that features gentle supragingival and slight subgingival debridement, which can be repeated at two-week intervals, resulting in gradual improvement in periodontal status until an acceptable level of periodontal health has been achieved. The relationship between the existence of DG lesions and the progression of periodontal disease is inconclusive, although some, but not all, studies have demonstrated a correlation between compromised periodontal status and autoimmune bullous diseases affecting the mouth [90][91][92][93][94][95][96].

There are several reports on periodontal surgery or dental implant therapy performed on patients with DG [15-17, 73, 97-100]. Tissue sloughing and a lack of tissue elasticity caused by active autoimmune bullous disease can disturb the manipulation of the mucosal flap. Strict mucosal disease control prior to surgery may reduce surgical complications [101]. Implant therapy is likely to enhance the quality of life of patients with systemic diseases and may help them maintain long-term masticatory function. Patients with DG are often unable to wear tissue-borne prostheses because of discomfort. This tissue irritation and oral pain can be increased if the appliances are ill-fitting or damaged. A dental implant-supported prosthesis improves the stabilization of the prosthesis, resulting in a higher degree of comfort. Published case reports indicate that DG patients can be successfully managed with dental implants. These reports suggest that the degree of disease control may be more important than the nature of the disease itself with regard to the effects on osseointegration. Penarrocha et al.
[98] reported that implants can be successfully placed and used to support dental prostheses in patients with recessive dystrophic epidermolysis bullosa. A total of 38 implants were placed in six totally edentulous patients. Only one implant failed to achieve osseointegration. The average follow-up from implant placement was 5.5 years. The implant-supported prostheses were associated with improvements in the patients' comfort and function, esthetics and appearance, taste, speech, and self-esteem. Altin et al. [99] presented a case of PV rehabilitation using a successful implant-supported prosthesis with a 32-month follow-up. They concluded that implant treatment may be considered a good alternative to a tissue-borne prosthesis in PV patients. Esposito et al. [100] reported implant-retained overdentures for two patients with severe oral LP. The patients were often unable to wear tissue-borne prostheses because of the discomfort. There was good integration of the implants, with no clinical or radiographic evidence of bone loss, and the soft-tissue/implant response was excellent. Lesions occasionally flared up but were successfully treated with topical steroids. There was no evidence of potential implant failure as a result of these flare-ups. Although these descriptions of successful management using dental implants for patients with DG are promising, further studies are needed since these were individual case reports.

Conclusion

DG is a clinical manifestation that is common to several diseases or disorders. It is important to diagnose the disease causing DG because the prognosis varies, depending on the disease. Histopathological examination and DIF testing are often required to establish the final diagnosis. Patients diagnosed with autoimmune diseases such as MMP or PV should be closely followed because they must be immediately referred to other experts when they develop lesions on parts of the body other than the oral cavity.
Information Needs of Next-Generation Forest Carbon Models: Opportunities for Remote Sensing Science

Forests are integral to the global carbon cycle, and as a result, the accurate estimation of forest structure, biomass, and carbon are key research priorities for remote sensing science. However, estimating and understanding forest carbon and its spatiotemporal variations requires diverse knowledge from multiple research domains, none of which currently offer a complete understanding of forest carbon dynamics. New large-area forest information products derived from remotely sensed data provide unprecedented spatial and temporal information about our forests, which is information that is currently underutilized in forest carbon models. Our goal in this communication is to articulate the information needs of next-generation forest carbon models in order to enable the remote sensing community to realize the best and most useful application of its science, and perhaps also inspire increased collaboration across these research fields. While remote sensing science currently provides important contributions to large-scale forest carbon models, more coordinated efforts to integrate remotely sensed data into carbon models can aid in alleviating some of the main limitations of these models; namely, low sample sizes and poor spatial representation of field data, incomplete population sampling (i.e., managed forests exclusively), and an inadequate understanding of the processes that influence forest carbon accumulation and fluxes across spatiotemporal scales. By articulating the information needs of next-generation forest carbon models, we hope to bridge the knowledge gap between remote sensing experts and forest carbon modelers, and enable advances in large-area forest carbon modeling that will ultimately improve estimates of carbon stocks and fluxes.

Introduction

Multiple research domains contribute to the scientific understanding of forest carbon accumulation and fluctuations, including: plant and tree physiology and genetics; plant, tree, and landscape dynamics; and atmospheric interactions between plants, landscape, and soil. Forests establish themselves through interacting with pedospheric and atmospheric conditions as well as other on-site vegetation. Trees and other vegetation in forests grow, die, and decompose, influencing their environment and their interrelationships. Forests are also disturbed. When combined, these processes and interactions define forest dynamics, and determine how much carbon accumulates at a site. None of the research domains related to forest dynamics has a complete understanding of forest carbon accumulation and variability [1][2][3][4]. Moreover, some processes and interactions are not yet understood or well described [5]. The very complexity of the forest carbon cycle is challenging for measuring, monitoring, and projecting changes to forest carbon. In addition, the geographic extent and spatiotemporal variability of the system prohibit a complete census of forest carbon [6]. As a result, no single forest carbon model, or modeling approach, currently provides a complete assessment of carbon stocks and fluxes, at any scale. Instead, multiple modeling approaches exist to quantify and understand the different processes that contribute to forest carbon accumulation and fluxes at various spatiotemporal scales. New imperatives to commodify and account for carbon highlight the need to improve these models [7].
Regional to large-area forest carbon models range from models based on tree measurements and forest inventory data, to representations of known processes at scales varying from the leaf level to the global level. All of these models have limitations, chief among them being the need for improved sampling of forest systems. Remote sensing can aid in alleviating some of these limitations by providing data that are spatially exhaustive, spatially explicit, and that capture change at a resolution commensurate with the human impact on the landscape [8]. Our objectives herein are to summarize the current state of scientific knowledge regarding forest carbon and dynamics, describe current forest carbon models and their limitations, and articulate where contributions from remote sensing science can inform next-generation forest carbon models. Our overarching goal is to expand understanding in the remote sensing community regarding the information needs associated with carbon modeling in general, and next-generation forest carbon models more specifically, thereby encouraging collaborative contributions to estimates of forest carbon dynamics and forest carbon modeling science.

Background: Forest Carbon Science

Forests are the largest plant community on the Earth's land surface [9], and therefore the most important terrestrial primary producers; they absorb the most carbon from the atmosphere [10]. Trees and plants convert light energy into chemical energy via photosynthesis. Stomata allow CO2 into the plant, as well as water exchanges with the atmosphere, and are a central link between plants and their environment [11]. Carbon, which is stored as carbohydrate molecules, is the currency for processes such as respiration, growth, defense, and reproduction. Much plant physiology research relies on tracking carbohydrates [12]. Carbon accumulation and flux depend on the fine balance between two large central processes: photosynthesis and respiration. For example, the estimated carbon sink for the managed forests of Canada between 1990-2008, despite the large fluxes from major disturbances, was due to small differences between respiration and net primary productivity [13]. Respiration releases CO2 and water back to the atmosphere at the same spatiotemporal scales as photosynthesis, making these two processes, central to ecosystem structure and function [14], difficult to estimate.

Atmospheric CO2 levels are rarely a limiting factor to carbon absorption; rather, temperature, radiation, and water interact to impose complex and varying limitations on vegetation productivity in different areas of the globe [15]. Figure 1 depicts the global distribution of the limiting abiotic factors to primary production in terms of water, sunlight, and temperature. Temperature (heat) controls the rate of plant metabolism, which in turn determines the amount of photosynthesis and respiration that take place. Where temperatures are between 0-50 °C [16] and water and solar radiation are available for photosynthesis, productivity is influenced by species and stand structure [17], age and site nutrients [18], the physiological adaptation of plants [19], disturbance [20], and sometimes forest management practices [21]. The level of productivity is further refined by forest phenology [22], biodiversity [23], forest stand and landscape dynamics [24], and by feedback between the changes occurring in forests and the global environment [25][26][27][28].
Figure 1. Potential limits to vegetation net primary production (NPP) based on the fundamental physiological limits of vapor pressure deficit, water balance, light, and temperature [15]. NPP is the rate of carbon accumulation in plants after losses from plant respiration and other metabolic processes (which are necessary to maintain the plant's living systems) are taken into account [29].

The overriding processes of carbon fixation (photosynthesis, respiration, and heat/water exchanges) are present at the leaf level. Knowledge of these driving processes of carbon cycle science is relatively well established:

1. Photosynthesis is a light-dependent process. It was described to the level of the physics of molecules in the 1980s [30], but despite this detailed description, our understanding of photosynthesis is still expanding with the advancement of quantum physics [31];
2. Respiration is not as well defined, with model choice dependent on the spatiotemporal scale under scrutiny [32][33][34]; and
3. Latent heat and water interact at the leaf level via evapotranspiration and hydrological processes in forests [35,36], with research still advancing our knowledge of this complex interaction [37,38].

Moving from leaf-level processes to tree-level and forest-level processes is not additive or multiplicative [39]. The cumulative effect of fast and locally controlled processes, such as photosynthesis and respiration, combines with new processes as the spatiotemporal scale changes; trees distribute resources for reproduction, defense, and the production of fine roots, and allocate nitrogen following sometimes-undefined patterns; trees and plants change their facilitation and competition processes as they grow; forests have microbial and fungal communities that contribute to decomposition and respiration, and interact with the climate system [40,41]. Even well-defined processes can exhibit different behaviours at different scales [42,43], while other processes are still under debate [1,5,44,[45][46][47].

Process interactions are being studied extensively [48][49][50][51], and entire fields of research that are determinant in forest carbon content and fluxes are evolving rapidly [1,52,53]. Research is also showing that under changing climatic conditions, species are adapting, changing our knowledge of the plasticity of tree species traits [54]. Collectively, these unknowns explain why there are currently no precise whole-tree or whole-forest carbon balances: we do not currently understand all the processes and their interactions [3]. Furthermore, as species are adapting to a changing environment [55], processes and interactions are also changing. In short, forest carbon science is still elucidating processes and their relationships across spatial scales. In contrast to other research fields such as physics, forest ecology does not have rules or laws that transcend scales and provide consistent parameters for modeling.
Forest Carbon Models

The Current Carbon Modeling Continuum

The continuum of large-area forest carbon models ranges from empirical models to models of known processes that drive forest carbon balances, with all the possible combinations of these two approaches in between. While empirical models focus on describing the statistical relationships between data, process models focus on accounting for the mechanisms or processes that determine carbon accumulation in forest ecosystems. Unlike empirical models, process models are sufficiently generalized that they remain relevant when faced with new conditions, making them suitable for future projections. Modeling approaches along this continuum vary in their level of detail (spatial, temporal, and attributional) depending on the purpose for which they were generated, the data available, and the composition and expertise of the model development team. Here, we give an overview of the different model types.
International obligations require signatory countries to report and monitor greenhouse gas balances from forests [56]. Many carbon models use the same data that are used by forest management agencies and the forest industry to estimate forest carbon for reporting purposes [57]. This includes forest inventory, land cover, land use, and ownership data, change and disturbance information, growth and yield estimates, biodiversity and wildlife management data, etc. These models have varying levels and types of ecological processes represented, with some also including simplified soil processes [58]. These carbon models range from simple statistical relationships, such as emission factors used in systems where few data are available [59,60], to complex combinations of empirical models (equations), process models (deterministic equations), and assumptions that enable these models to be executed in a timely manner [57]. It is common practice to use tree- or stand-level measurements to estimate biomass using allometric equations in sampled locations, and then expand these estimates via statistical or machine learning methods to the region of interest. To obtain carbon estimates, the total biomass is then divided by a factor of two, regardless of the species or growing conditions [60,61].

Most process-based models support scientific understanding, and are often used for exploring potential responses under changing environmental conditions [62-66]. Basic sampling theory tells us that models based on observational data, such as statistical or machine learning models, are not suitable for the projection of future conditions, since the relationships and conditions that these observations represent are from the past, and these relationships and conditions are changing [67]. Hence, process-based models are a more appropriate tool for projections of future conditions. Some efforts have attempted to bridge both empirical models and process-based models [68-71]; however, these approaches are often too onerous for carbon reporting and management applications. The current limits of process understanding, including scale-dependent behavior as well as complicated and cumulative interactions, are the main limits of our ability to model forest carbon via process-based models. Hence, current process-based models usually represent a limited suite of well-understood processes [72].

Forests interact with and influence global conditions, representing a potentially important feedback to changing climatic conditions [73,74]. However, forests are but one terrestrial biome contributing to global carbon balance estimates [75-78]. In large-area carbon models based on biogeochemical measurements [3,64,79-81], forest contributions are often calculated as the remaining balance from atmospheric and oceanic models instead of being modeled independently [82]. Atmospheric and oceanic models need to be constrained by estimates of forest carbon stocks and fluxes [3,83], because forests are an integral part of the hydrological, energy, and carbon budgets in regional models [37].

Limitations of Current Carbon Models

Three ubiquitous limitations challenge current large-area forest carbon models: limited sample sizes; a mismatch between the population sampled and the population contributing to carbon storage and fluxes; and incomplete system understanding, particularly across spatial scales.
Large-area forest carbon models suffer from data paucity [84,85]. Managed forests are often the best-sampled land base in carbon modeling, wherein carbon estimates are derived from the height and diameter measurements of commercial tree species. However, those sub-populations also suffer from low sample sizes [86]. Even if we assumed that sample sizes were adequate in managed forests, these measurements are applied to allometric equations (which are also a significant source of uncertainty [87,88]) to estimate aboveground biomass [89-93], and are then divided by a factor of two to estimate carbon. Since biomass measurements are very labor-intensive, the biomass equations themselves suffer from low sample sizes, with equations developed using limited samples (i.e., in the order of hundreds of trees) and applied to species distributed across a vast geographical extent [88,92,94]. Moreover, the selection of the allometric models themselves can affect estimation [87]. Further, the 0.5 ratio used to estimate carbon content from biomass estimates [60,95] introduces another source of unaccounted error, since the carbon content of biomass, even when considering dry weight, varies much more than the widely used 0.5 ratio suggests [96,97]. Process-based models, which rely on long-term measurements from sophisticated scientific equipment, typically have even sparser observational coverage, with data from a limited number of sites used to represent entire biomes [79].

Models that rely on forest inventory data for carbon estimates, such as the forest carbon balance models that are used for reporting, are generally built on a sample of a sub-population that is relevant to managed forests, such as specific commercial tree species in the productive forests that are managed for forest product extraction. Given the complexity of forest dynamics, it is unlikely that reliable carbon content and flux estimates can result from a poor sample in one stratum of the forested landscape that targets specific tree species above a given diameter, even if those trees are the dominant feature of the forest biomass. Boreal forests alone, an ecosystem that is largely classified as unmanaged [98] and hence lacking detailed forest inventory data, are estimated to account for ~50% or more of world forest ecosystem carbon stocks, much of which is in their soils [58]. Furthermore, most allometric equations do not include biomass from other vegetation such as small-diameter trees, non-commercial species, shrubs, or bryophytes, despite these components' demonstrated contributions to forest productivity [99-102]. Very few studies have quantified the carbon in non-commercial species, small-sized commercial species [103], shrubs and lichens [53,104,105], or mosses [106]. The functional and dynamic nature of all these components adds to the aforementioned incomplete understanding of forest dynamics and to the challenge of modeling or predicting the productivity and carbon levels of forested ecosystems [107-109].
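To make this error propagation concrete, the sketch below works through the biomass-to-carbon chain described above: a power-law allometric model converts DBH to aboveground biomass, and a fixed 0.5 ratio converts biomass to carbon. This is an illustrative sketch only; the allometric coefficients and the alternative carbon fractions are hypothetical stand-ins, not published values.

```python
import numpy as np

# Hypothetical power-law allometry: biomass_kg = a * DBH^b.
# The coefficients a and b are illustrative, not from any published equation.
A_COEF, B_EXP = 0.1, 2.4

def aboveground_carbon(dbh_cm, carbon_fraction=0.5):
    """Convert DBH (cm) to carbon (kg) via allometric biomass and a fixed ratio."""
    biomass_kg = A_COEF * dbh_cm ** B_EXP
    return biomass_kg * carbon_fraction

dbh = np.array([12.0, 25.0, 40.0])      # example tree diameters (cm)
c_mid = aboveground_carbon(dbh)         # conventional 0.5 ratio
c_low = aboveground_carbon(dbh, 0.46)   # hypothetical lower carbon fraction
c_high = aboveground_carbon(dbh, 0.55)  # hypothetical higher carbon fraction

# The spread between c_low and c_high is error that a fixed 0.5 ratio ignores,
# before any allometric model error is even considered.
print(np.round(c_mid, 1))
print(np.round(c_low, 1), np.round(c_high, 1))
```

In practice, this conversion error compounds with the allometric model error discussed above, so even a perfect field sample would carry uncertainty through both modeling layers.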
Potential Remote Sensing Contributions to Carbon Modeling

Remotely sensed data can aid in alleviating several of these aforementioned limitations and contribute to the next generation of carbon modeling. Current research efforts in the remote sensing community in support of carbon modeling often assume that resulting products can be readily integrated into carbon balance assessments. However, current forest carbon models are not able to easily adapt to new data types or inputs. For example, much of the carbon-supporting research in the remote sensing community has focused on biomass estimates [110]. However, given the complexity of forest carbon estimates, their spatiotemporal variation, and the current state of forest carbon science modeling, a static estimate of biomass at a single point in time alone is inadequate information for estimating the carbon balances of large forest areas. Given our current understanding of forest carbon cycles, existing models do not provide sufficiently accurate or consistent carbon estimates across spatiotemporal scales [111]. Remote sensing science has already contributed important improvements to large-area forest carbon modeling. However, developing the next generation of carbon models (models that are spatially explicit, that account for changes in situ, and that afford more consistent estimates of carbon over space and time) will require a closer integration of remote sensing and carbon science in models. Research advances in remote sensing science, combined with advances in both empirical and process modeling of large-area forest carbon, can lead to improved forest carbon estimates [111]. There is an urgent need for more detailed observations to support the empirical modeling of forest carbon. Remotely sensed data can currently provide many useful inputs to both empirical and process-based forest carbon models: land cover, disturbance, leaf area index (LAI), biomass, phenology, age, height, and estimates of gross and net primary production, among others [112]. These inputs come from a myriad of sensor types: optical, radar, lidar, and hyperspectral [112,113]; however, the operational readiness of these sensors to support large-area, spatially explicit modeling is not uniform [84]. The current proliferation of remote sensing missions and operational sensors at various spatiotemporal scales is poised to aid in addressing the sampling paucity and cross-scale issues that currently limit carbon modeling and system understanding. Remote sensing also seems a logical avenue to improve the estimation and understanding of the light-driven processes at the core of carbon fixation, such as photosynthesis [114]. Better sampling, supported by remote sensing expertise in exploring empirical relationships, will create opportunities to improve understanding of forest carbon cycles and their role in the global carbon cycle, and to build better models. Here, we identify those limitations where remote sensing science is poised to make the greatest contribution to improving large-area carbon modeling science and application.
Increased Sampling in Space and Time

Remote sensing, despite not directly measuring most of the variables that are traditionally used across the spectrum of present-day carbon models, can now provide a census of forest systems [115], something unprecedented in forest modeling. This is the primary reason why products derived from remotely sensed data will be at the core of the next generation of forest carbon models. The variability resulting from the fast, locally controlled processes of carbon exchange (photosynthesis and respiration), layered with the increasing complexity of added processes with scale, demands that we acquire more information for modeling purposes. Remotely sensed information permits spatially explicit modeling, capturing some of the spatial variability of these processes, which is becoming the minimum requirement for next-generation models.

Remote sensing provides different observations from those that carbon modelers are used to: for example, energy from either the Sun or the sensor itself interacts with forest targets, and can be further interpreted to infer forest attributes [116]. These observations and their derived inferences are neither better nor worse than field data; both are surrogate measures for forest attributes, and both require due diligence and preferably cross-validation to increase confidence in their inference [117]. In both cases, forest attributes are themselves surrogates for biomass, which is a surrogate for carbon, adding two layers of modeling and their associated errors and uncertainties to the estimation of carbon. Carbon estimates from both field-based and remotely sensed observations often ignore these additional sources of uncertainty. A notable advantage of remotely sensed information is that it can be extracted over different spatiotemporal scales and frequencies [118]. Repeated measures provide the opportunity for change measurement [116], which has been notoriously difficult in forest systems, but is essential for change monitoring, system understanding, and projections of carbon fluxes.

There are vast forested landscapes that have no forest inventory, such as the northern forests of Canada [13]. Carbon estimates in these forests must rely on remotely sensed information with field validation. The few existing estimates for these areas have also been combined with process modeling [119]. In these northern systems, additional challenges exist in the ambiguity of ecotones between wetlands, peatlands, and forests [120], another level of variability that remote sensing can help alleviate by better defining these ecotones and/or identifying their distinguishing components. To date, unmanaged forest carbon estimates are for scientific purposes, but since these northern forests are thought to contain large amounts of stored carbon [58,121], potentially released under changing environmental conditions, it follows that complete carbon stock and flux estimates for these areas will eventually be required for climate change policy development.
Upscaling of Estimates and Improved System Understanding across Spatial Scales

Increasing the frequency and extent of observations and developing inferences about forest attributes will in itself contribute to advancing our understanding of processes and their cross-scale interactions. Never before have so many observations about forests been available. Of particular usefulness is the potential that remotely sensed information offers for the scaling of various processes across spatiotemporal scales [116]. Information about forest composition, structure, productivity, and disturbance is useful for the more practical applications of carbon modeling, as well as for tracking changes in forests, and therefore for studying and understanding the forest system. With a proliferation of remote sensing satellites offering an increasing number and variety of observations [122], the potential for virtual constellations expands [123], further increasing the observation capacity for forests.

The rapid evolution of remotely sensed data products and methods provides at least two further opportunities: the use of machine learning algorithms, and the advancement of cross-scale tracking of processes. In the previous state of data paucity, machine learning techniques were of little use in forest modeling. However, remote sensing now provides an increasingly overwhelming observational capacity, permitting the use of these powerful tools [124], and with them, possibilities of increasing the available information and associated understanding of our forested landscapes. Although these tools do not replace statistical inference, they do increase the amount of information that is available, and can be the basis for further research that can lead to statistical inference, and eventually to more complete process modeling. The expertise for exploring the potential of these tools and their application to the plethora of available remotely sensed data lies in the remote sensing community; it sits outside the realm of forest carbon expertise, but is essential for improvements in modeling and carbon science.

One example of the process scaling brought by remotely sensed observations comes from newly available observations of solar-induced fluorescence, a detectable measurement of light released during photosynthesis [125]. Photosynthesis is driven by light and is the central carbon accumulation process; it follows that observations of energy via remote sensors may have a much closer link to this driving process than tree or localized gas exchange measurements, and may possibly be even more trackable across spatial scales. This may be instrumental in our scientific understanding of the global carbon cycle, and in parallel, of forest carbon [126], since this level of productivity tracking has never been possible before.
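As a minimal sketch of the machine-learning upscaling described above, the snippet below fits a regression model on co-located plot measurements and remotely sensed predictors, then predicts wall-to-wall over a predictor grid. All data here are synthetic stand-ins; the predictor columns, plot counts, and model settings are assumptions for illustration, not a prescribed workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic training set: field plots with co-located remotely sensed
# predictors (e.g., spectral indices, height metrics) and measured biomass.
n_plots = 500
X_plots = rng.normal(size=(n_plots, 4))                  # stand-in predictors
y_biomass = 100 + 30 * X_plots[:, 0] + rng.normal(0, 10, n_plots)

model = RandomForestRegressor(n_estimators=300, random_state=0)
r2 = cross_val_score(model, X_plots, y_biomass, cv=5, scoring="r2").mean()
print(f"cross-validated R^2: {r2:.2f}")

# Fit on all plots, then predict over every pixel of the predictor stack
# to obtain a spatially explicit (wall-to-wall) biomass surface.
model.fit(X_plots, y_biomass)
X_pixels = rng.normal(size=(10_000, 4))                  # stand-in pixel stack
biomass_map = model.predict(X_pixels)
```

Cross-validation at plot locations quantifies predictive skill, but as noted above, such empirical maps inherit the limits of the historical conditions under which the training plots were sampled.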
Remotely sensed data also offer the capacity to improve the linkages between measurements acquired at different spatial scales [127,128]. Terrestrial laser scanning provides data that can improve sample sizes for allometry [129] or provide direct estimates of plot-level biomass [130]. Airborne laser scanning can provide additional data to augment ground plots [104] and link to Landsat-scale time series observations [115]. Moreover, the recent launch of two spaceborne lidars, the Advanced Topographic Laser Altimeter System (ATLAS) instrument onboard ICESat-2 [131,132] and the Global Ecosystem Dynamics Investigation (GEDI) full-waveform light detection and ranging (lidar) instrument onboard the International Space Station [133,134], provides further data for the spatial scaling of forest attributes, which is essential for carbon models [135,136]. In turn, insights enabled by cross-scale comparisons may provide further insights to modelers [137]. Exploring the potential of measuring forest or tree traits with remote sensing and modeling forest productivity also shows promise [64,138-140]. In addition to the aforementioned sensors and platforms, numerous hyperspectral satellite missions are currently in various stages of development [141], and small satellites (microsats and cubesats) have now demonstrated viability and affordability as platforms for Earth observation [142].

Discussion

Forests are complex. Our understanding of the forest system is not yet complete; therefore, our models are likewise deficient, and our estimates are consequently variable, with unknown accuracy and large uncertainties. Models, by default, push forward assumptions that are, as of yet, not supported by carbon-science findings [143]. Models often ignore scale issues for practicality, and this applies equally to those models that use remotely sensed data and those that do not. Field data do not provide "true" carbon estimates any more than remotely sensed data do. Biases and errors need to be estimated with both data types and preferably cross-validated. Declaring any exactitude in carbon estimates ignores the efforts of scientists in remote sensing science, carbon science, and the field of forest biometrics who strive to improve data products and refine models, and does not enable movement toward better science. Moreover, sensors vary in their usefulness depending on the biome [144]; biomass equations have large errors [88], tree-level biomass estimation presents many challenges [145], and errors need to be explicitly explored for different sensors [113,117,146].

There is likely no universal approach to forest carbon modeling; rather, different modeling solutions may be required in different environments, depending on the information needs, the purpose of the modeling, and the data that are available. Next-generation large-area forest carbon models will require cross-disciplinary teams to provide the appropriate specialized knowledge from all of the involved disciplines. Such knowledge will enable the generation of useful remotely sensed outputs that can be incorporated into next-generation carbon models in a meaningful way [117,144]. The cross-pollination of carbon modeling and remote sensing can lead to improved model inputs, informed bias corrections [147], and more accurate estimates that account for scaling and uncertainty. Data sharing and open science are also a vital component of next-generation forest carbon models (e.g., http://forest-observation-system.net/), and indeed of all science [148].
To fully characterize the contribution of forest ecosystems to the global carbon balance, all forests need to be modeled, rather than just productive forest areas that are managed for forest products. Remote sensing science will be essential in providing data for these unproductive and unmanaged forestlands. Moreover, those components of productive forests that are not currently modeled also present particular challenges for carbon modeling, and require the support of remotely sensed observations. Further exploration of the link between the energy observations from remote sensing and the energy-based processes that drive forest productivity (light and heat) will move both fields toward better understanding and better science.

Conclusions

Carbon is the "end-product" of forest dynamics, and remote sensing is well poised for quantifying present and changing conditions in forests, and therefore in carbon. The next generation of forest carbon models will include remotely sensed information, and evolve with it as well. Field data needs will remain a strong component of forest carbon modeling, since neither field nor remotely sensed observations are direct measures of carbon or carbon fluxes. However, the methods by which that field information is acquired are also evolving as a function of changing technologies. Repeated measures, both in situ and remotely sensed, are essential in this changing environment if our goal is to develop reliable models. All data are necessary to further the representation and understanding of the forest system. Models are needed that can use all levels of information, are flexible, and can adapt to new data types. That requires open science, open data, and broader collaborations. Examples of such approaches are already emerging [149]. Although forest carbon modelers are typically not prescriptive about the data sources or methods by which their model inputs are generated, they can more clearly articulate their information needs to the remote sensing community. By better understanding these information needs, the remote sensing community in turn can respond with innovative solutions and information products that are rigorously and transparently generated. Herein, we have offered some context on current forest carbon models and their challenges and limitations, highlighting the information needs of next-generation forest carbon models. From a remote sensing perspective, opportunities to acquire observations of the globe's forests at increasingly refined spatial, spectral, and temporal resolutions have never been greater. The challenge for both remote sensing and forest carbon scientists and modelers is how to best integrate and manage these large and diverse data flows in the interest of defining and enhancing the next-generation forest carbon models, while also advancing scientific understanding of the underlying mechanisms and processes that regulate forest carbon dynamics.
2019-03-04T11:59:35.907Z
2019-02-23T00:00:00.000
{ "year": 2019, "sha1": "be88e8baf2bc3987e7cce8e1e33308add07ea4db", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/11/4/463/pdf?version=1550918056", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "be88e8baf2bc3987e7cce8e1e33308add07ea4db", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Computer Science", "Environmental Science" ] }
266279374
pes2o/s2orc
v3-fos-license
Sub-10 μm-Thick Ge Thin Film Fabrication from Bulk-Ge Substrates via a Wet Etching Method

Low-defect density Ge thin films are crucial for studying the impact of defect density on the performance limits of Ge-based optical devices (optical detectors, LEDs, and lasers). Ge thinning is also important for Ge-based multijunction solar cells. In this work, Ge wet etching using three acidic H2O2 solutions (HF, HCl, and H2SO4) was studied in terms of etching rate, surface morphology, and surface roughness. HCl–H2O2–H2O (1:1:5) was demonstrated to wet-etch 535 μm-thick bulk-Ge substrates to 4.1 μm with a corresponding RMS surface roughness of 10 nm, which was the thinnest Ge film obtained from bulk-Ge via a wet etching method to the best of our knowledge. The good quality of the pre-etched bulk-Ge was preserved, and the low threading dislocation density of 6000–7000 cm⁻² was maintained after the etching process. This approach provides an inexpensive and convenient way for accurate Ge substrate thinning in applications such as multijunction solar cells and sub-10 μm-thick Ge thin film preparation, which enables future studies of low-defect density Ge-based devices such as photodetectors, LEDs, and lasers.

INTRODUCTION

Si-compatible optical interconnects are in great need to address the on-chip interconnect bottleneck. To date, most of the Si-compatible optical components required for on-chip optical interconnects, such as modulators [4], photodetectors [5], and waveguides [6], are already available. The last missing piece is a light source, especially a laser. Although InAs/GaAs quantum dot lasers monolithically grown on Si have been realized, due to material contamination issues, it may take a long time and high cost for those III−V materials to enter mainstream Si-processing facilities.

Ge is an indirect-band gap semiconductor, which is inferior in light emission. However, it is the most Si-compatible semiconductor. Ge is already used in mainstream Si fabrication facilities and plays an important role in Si photonics, such as detectors and modulators. However, early Ge lasers' performance was far below suitable commercial standards. Theoretical studies on epitaxial Ge (epi-Ge)-on-Si lasers indicated that with design optimization, threshold current densities and wall-plug efficiencies of epi-Ge lasers could be greatly improved [10,11]. A 3 dB bandwidth of 33.94 GHz at a biasing current of 270.5 mA was predicted after Ge laser structure optimization with a defect-limited carrier lifetime of 1 ns [12]. So far, the main obstacle to achieving this potential lies in the poor epitaxial Ge quality on Si substrates due to the lattice mismatch between Ge and Si. The threading dislocation density (TDD) in Ge grown on Si is typically in the range of 10⁶−10⁹ cm⁻². High TDD results in a short minority carrier lifetime, high lasing threshold, poor reliability, and low efficiency, which greatly limit the performance of lasers fabricated on epi-Ge wafers. In comparison, bulk-Ge crystals, such as Ge wafers, have the highest material quality, and the TDD for bulk-Ge wafers is commonly below 10⁴ cm⁻².
What is the ultimate performance potential of Ge lasers? Are Ge lasers a feasible technology solution to the long-existing Si-compatible laser problem? Our long-term objective is to answer these questions experimentally using the highest-quality Ge, bulk-Ge, which has not been used for Ge laser preparation before. To make a Ge laser from bulk-Ge, the first step is to obtain Ge thin films of micron scale from bulk-Ge wafers of a few hundred μm thickness, which is the goal of this work. Bulk-Ge-based thin films are also helpful to study how the defect density in Ge can impact the performances of these devices as well.

While a smart-cut method was proposed to obtain a Ge thin film on a Si substrate [24,25], solution-based methods such as wet etching are much cheaper and more accessible ways to obtain Ge thin films, especially in the early R&D stage. With a high-quality bulk-Ge crystal, it is possible to get a high-quality Ge thin film for the potential optoelectronic applications discussed above.

As the pioneering transistor material, Ge's first wet etching study dates back to 1955, when Paul R. Camp studied the etching rates of Ge with solutions composed of H2O2, HF, and water as a function of etchant composition, crystal orientation, and impurity [26]. More etchants for Ge wet etching were studied in the following years, and the related etching rates of Ge for different solutions are well summarized in the literature [27,28]. However, there are only three reports on the preparation of Ge thin films from bulk-Ge via wet etching, which are summarized in Table 1.

The goal of this work is to develop wet etching recipes that can obtain low-defect Ge thin films from bulk-Ge wafers within 7 days of etching. The desired Ge thin films should have the following:

1.1. Thickness Less than 10 μm. It was reported that the direct-gap photoluminescence (PL) is difficult to observe in thick bulk-Ge samples due to the reabsorption of the emitted photons [30]. Decreasing the thickness to less than 10 μm is needed to decrease the reabsorption from Ge [31].

1.2. Surface Roughness Less than 10 nm. The common surface roughness of epitaxial Ge thin films is in the nm scale. The choice of 10 nm as the upper limit is to match the roughness of epitaxial Ge films. Rough surfaces increase surface recombination and lower minority carrier lifetimes, which are not desired.

EXPERIMENTS

The starting substrates were 4-inch n-type (0.173−0.25 Ω·cm at 295 K) double-side polished (100) Ge Czochralski wafers that were obtained commercially. The surface roughness of the pre-etched Ge wafer was 1.6 nm. The wafers were diced into 1 cm × 1 cm pieces before the etching process. All the Ge pieces were cleaned sequentially with acetone, isopropyl alcohol, and deionized (DI) water and dried with N2 gas. All the wet etching was done in a wet bench with good ventilation in a class 10,000 cleanroom with the temperature controlled at 21 °C.

Etchant and Etch Recipe Selection. As the initial thickness of the Ge wafer is 535 μm, the minimum etching rate should be 50 nm/min to obtain a sub-10 μm Ge thin film in 1 week under a uniform etching rate. Based on the etch rates reported in the literature [25], three types of acidic H2O2 solutions (HF-, HCl-, and H2SO4 (nanostrip)-based) were studied. To simplify the description of the etch solutions, we use the acid name plus a volume ratio (X:Y:Z) to denote an etch solution made of X parts of the acid product specified, Y parts of H2O2, and Z parts of DI water. For example, an HCl (X:Y:Z) solution means a solution consisting of X parts of HCl solution (37%, J.T.
Baker), Y parts of H2O2 (30%), and Z parts of water.

Etch Recipe Optimization for Better Surface Morphology. To produce a Ge thin film with a thickness of ≤10 μm, the surface morphology is a crucial factor. A thin film with high roughness before and during wet etching is more prone to breaking into pieces before reaching the desired thickness due to the preferential etching near defects such as surface scratches and dislocations. To check how the volume ratio X:Y:Z influences the postetching morphology, all three types of solutions were prepared with ratios of 1:0:0, 1:1:1, 1:1:5, 1:1:10, and 1:1:20. To conduct the wet etching, each Ge piece (1 cm × 1 cm) was placed on the bottom of a beaker and the etchant solution with a volume of ∼35 mL was added into the beaker for a certain etching time (25 min for HF-based solutions due to the fast etch rates, 24 h for HCl- and nanostrip-based solutions). This sample placement method resulted in single-sided etching. The thickness before and after the etching process was checked with a Beslands micrometer.

Because surface morphology plays an important role in optoelectronic devices, in this work, we inspected the postetching Ge morphologies with a Nikon ECLIPSE LV150 optical microscope. This was chosen instead of an electron microscope because a larger imaging area of 3.5 mm² was preferred to represent the overall morphology.

The 3D optical images were taken with an optical interferometer (Filmetrics Profilm3D optical surface profiler) to evaluate the surface roughness after etching. The optical interferometer is a good metrology tool for quantitatively measuring roughness over a large area (≥400 μm × 300 μm). If the measurement area is limited to the μm or nm scale, the results can be misleading: a sample can be smooth at the μm or nm scale but rough at the sub-mm scale, and roughness at the sub-mm scale can result in nonuniform etching or sample fracture. The volume ratios generating the lowest surface roughness for each solution, which were HCl (1:1:5) and nanostrip (1:1:10), were selected for Ge thin film preparation and characterization (details in Sections 3.2 and 3.3). HF solutions were eliminated due to high surface roughness, as discussed in Section 3.1.

Thin Film Preparation and Characterizations. In this step, Ge wafers were thinned to ≤10 μm using the two selected recipes, HCl (1:1:5) and nanostrip (1:1:10), and double-sided etching. Ge was placed vertically in the beaker on a small Teflon stand with both sides being etched at the same time to shorten the required etching time. The remaining thickness of the Ge thin film was checked with optical microscopy of the cross-section.

The surface roughness was evaluated with an optical interferometer, and etching pit density (EPD) measurements were performed to obtain the TDD before and after etching using an etching solution. The EPD etchant, a mixture of 100 mL of CH3COOH (≥99%, J.T. Baker), 40 mL of HNO3 (70%, J.T. Baker), 10 mL of HF (49%, J.T. Baker), and 30 mg of I2 (≥99.99%, Sigma-Aldrich), was selected according to the literature [33].
An optical microscope was used to observe and count the etch pits and to calculate the etch pit density (EPD), with more than three positions being checked. The crystal quality before and after the etching was checked with high-resolution XRD (Bruker D8). The reflectance before and after the etching was measured with a Filmetrics F20 film thickness measurement instrument.

Optimization of HF Solutions. The optical image before the wet etching process is shown in Figure 1a, and the related surface roughness measurement (Figure 1b) indicated that the unetched Ge had a roughness of approximately 1.6 nm with some minor polishing traces on the top. Owing to the high etching rates of HF-based solutions, the initial etching time was controlled to be 25 min for the recipe optimization. The postetching results for different ratios are shown in Table 2. The HF-only solution could not thin Ge down but was able to clean the Ge surface to obtain a low surface roughness of 1 nm [34]. The HF solution (1:1:10) generated the lowest surface roughness. However, when the etching time was extended from 25 min to 4 h for the HF-based solution (1:1:10), the surface roughness increased drastically, as could be seen in Figure S1 with obvious cracks on the surface. Therefore, HF-based solutions were eliminated from the further thin film preparation.

Optimization of HCl Solutions. The etching results for HCl solutions are shown in Table 3. After 24 h of etching with HCl (1:1:1), the thickness was reduced by 140 μm. Scratches and voids appeared, and the surface roughness increased to 7.6 nm. As the ratio changed from 1:1:1 to 1:1:5, the surface roughness decreased to 6.3 nm and the etching rate reduced slightly to 130 μm/day. However, when the ratio increased to 1:1:10, the number of etching pits and the surface roughness increased sharply to 27.8 nm. The surface etched by HCl (1:1:20) became quite rough with a matte appearance under the optical microscope and was not able to be evaluated with the optical interferometer. Based on these observations, HCl (1:1:5) was selected for thin film preparation.

Table 3. Optical Images, 3D Optical Images, Surface Roughness, and Thickness Removed after 24 h of Etching from the HCl-Based Solution with Different Ratios, Scale Bar = 500 μm

Optimization of Nanostrip Solutions. At the high-H2SO4 limit, the nanostrip (1:0:0) with no H2O2 or water, the etched Ge sample showed obvious scratches, with the surface roughness increased slightly to 2 nm. There was no obvious change in the thickness after 24 h of etching, indicating that Ge was roughened with little thickness loss. With the ratio changed to 1:1:1, the etchant had a strong oxidative effect, leaving oxidized particles on the surface. The surface was too rough to be measured using the optical interferometer. For the nanostrip (1:1:5) solution, obvious holes could be seen after etching, making the surface too rough to be measured with the optical interferometer in vertical scanning interferometry mode. The etch recipe that generated the best surface quality was the nanostrip (1:1:10); the etched surface was flat with minor voids and the lowest surface roughness of 3.8 nm. For the nanostrip (1:1:20), the surface roughness increased to 8 nm. According to these results, the nanostrip (1:1:10) was selected for thin film preparation (Table 4).

Table 4. Optical Images, 3D Optical Images, Surface Roughness, and Thickness Removed under 24 h of Etching from the Nanostrip-Based Solution with Different Ratios, Scale Bar = 500 μm
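The etch-rate budget behind these recipe choices can be checked with a few lines of arithmetic. The sketch below reproduces the Section 2.1 requirement (thinning 535 μm to below 10 μm within one week) and the etching time implied by the HCl (1:1:5) rate from Table 3; the helper names are ours, and the double-sided figure assumes the per-side rate is unchanged.

```python
# Etch-rate budget for thinning 535 um bulk-Ge to < 10 um within one week.
start_um, target_um = 535.0, 10.0
budget_min = 7 * 24 * 60                         # one week, in minutes

removal_um = start_um - target_um                # 525 um to remove
min_rate = removal_um * 1000 / budget_min        # nm/min
print(f"required rate: {min_rate:.0f} nm/min")   # ~52 nm/min, cf. ~50 nm/min

# Time implied by the HCl (1:1:5) rate of ~130 um/day (Table 3); double-sided
# etching removes material from both faces, roughly halving the time at the
# same per-side rate.
rate_um_h = 130.0 / 24
t_single_h = removal_um / rate_um_h
print(f"single-sided: {t_single_h:.0f} h, double-sided: {t_single_h / 2:.0f} h")
```

The double-sided estimate of roughly 48 h is consistent with the 53 h HCl (1:1:5) run reported below for the 4.1 μm film.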
Ge Thin Film Preparation by Double-Sided Etching with HCl (1:1:5) and the Nanostrip (1:1:10). As discussed, HCl (1:1:5) and the nanostrip (1:1:10) were used to achieve the thinnest Ge films possible. To exclude the influence of potential sediment during long etching runs and to halve the etching time, double-sided etching was used, where Ge was placed vertically in the beaker on a small Teflon stand (Figure S2). The surface morphology stayed almost the same for single-sided etching and double-sided etching (Figure S3), but the required etching time was shortened due to the etching on both sides. Ge film thicknesses were checked with optical microscopy.

The final results of the thin films prepared are shown in Figure 2. Both the nanostrip (1:1:10) and HCl (1:1:5) were able to fabricate <10 μm Ge thin films. As shown in Figure 2a, double-sided etching by the nanostrip (1:1:10) for 57 h resulted in a thickness of 9.2 μm. The picture of the samples is shown on the top right. The thickness of Ge after 53 h of HCl (1:1:5) etching was 4.1 μm (Figure 2b). A mirror-like surface was still kept for the HCl (1:1:5)-etched sample, with the reflection of a tweezer visible (Figure 2c). The reflectance curves before and after the etching are shown in Figure 2d. More than 80% of the reflectance was preserved on the short-wavelength side below ∼970 nm, with over 77% in the range between 970 and 1000 nm.

3.4.1. Surface Roughness of the As-Etched Ge Thin Film. After nanostrip (1:1:10) double-sided etching for 57 h, the optical images showed many hemispherical holes on the top (Figure 3a), and the surface roughness (Figure 3d) increased from 3.8 nm for the 24 h single-sided etching to 60 nm, with surface holes of different sizes. This could be improved by agitation (300 rpm) during the etching process, whereby the surface etching hole sizes decreased (Figure 3b) and the surface roughness (Figure 3e) dropped to 32 nm. The HCl (1:1:5)-etched thin film had fewer etching holes and a flatter surface (Figure 3c), and the surface roughness (Figure 3f) was approximately 10 nm, much better than that etched by the nanostrip (1:1:10). This also explains the high reflectance of the HCl (1:1:5)-etched surfaces.

3.4.2. Crystal Quality Before and After Wet Etching. The threading dislocation density before and after the etching processes was also checked with the EPD method discussed in Section 2.3, and the etch pits are shown in Figure 4a−c. The etching time was 90 s to get large enough pits for counting. The etching pit densities before and after the etching processes remained at almost the same level of 6000−7000 cm⁻².

The crystal quality was measured by HRXRD, as shown in Figure 4d. Both the unetched and the 53 h HCl (1:1:5)-etched thin film had a sharp Ge peak, indicating good crystalline quality. The full width at half maximum (fwhm) of the Ge peak of the HCl (1:1:5)-prepared thin film was 0.0269°, which increased slightly from the 0.0192° of the unetched Ge. However, it was much better compared with epitaxial Ge on Si, which was reported to be 0.0736° in the literature [15]. The Ge peak position stayed the same, and the peak shape was similar before and after the etching process, which also demonstrated that no strain or obvious lattice damage was introduced into the Ge thin film. The key results and comparisons are summarized in Table 5.

Absorbance of the Ge Thin Film.
As mentioned before, one of the advantages of using a Ge thin film is the reduction in reabsorption as the thickness decreases. This reduction is favorable for Ge's light-emitting properties, as it allows more photons to escape from the material. To confirm this, we examined the absorbance of Ge with varying thicknesses, as shown in Figure 5. It can be seen that the absorbance decreased with the reduced thickness. Moreover, the absorption edge consistently shifts toward shorter wavelengths with smaller thicknesses. Initially, it was around 1667 nm for a bulk-Ge wafer, but it shifts to approximately 1550 nm when the Ge thickness is down to four microns. It is worth noting that Ge is an indirect-band gap material, and both the absorption from the indirect band gap (0.66 eV, 1879 nm) and that from the direct band gap (0.8 eV, 1550 nm) contribute to the absorption spectrum. The transition at the indirect band requires the assistance of a phonon with the required momentum to bridge the offset between the conduction band minimum and valence band maximum [35]. This mechanism results in a much lower probability of absorption compared to the direct band transition. Consequently, the absorption coefficient at the indirect band gap (1879 nm) is significantly smaller than that at the direct band gap (1550 nm), as mentioned in ref 36. However, it is essential to note that the contribution from the indirect-band gap absorption in bulk-Ge remains significant due to its substantial thickness, typically exceeding 500 μm. As the thickness decreases, the influence of the indirect-band gap absorption diminishes, resulting in a shift toward the wavelength associated with the direct band gap. When the thickness is reduced to less than 10 μm, the absorption from the indirect band gap is effectively suppressed, creating more opportunities for Ge to exhibit enhanced photoluminescence at 1550 nm.

3.6. Possible Mechanism for the Wet Etching of the Acidic H2O2 Solution. Ge wet etching with acidic H2O2 has been adopted for a very long time, and the related mechanisms at both the nanoscale and the atomic scale have been extensively studied in research associated with Ge surface passivation [37], Ge surface cleaning [38], and Ge wet etching processes [32,38,39]. There are two different natural oxides, GeO and GeO2, on the surface of Ge [40]. When H2O2 is applied, the oxide, which is primarily water-soluble GeO2, regrows. This oxide dissolves slowly in H2O and can be removed more rapidly by acids. With the continuous oxidation from H2O2 and oxide removal by acid, Ge can be thinned down. It should be noted that an anion like Cl⁻ may play a more important role in the etching process than the proton concentration [32,41]. Unlike prior studies, which etched germanium for a short time and to a limited depth, this work thinned Ge from the original thickness of 535 μm to a thickness of ≤10 μm. Therefore, a nanoscale mechanistic investigation is difficult because the surface changed dramatically over such a long etching time (≥50 h). In this study, we focused only on the microscale morphology evolution of Ge during the long-duration wet etching process.
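The reabsorption argument in the absorbance discussion above can be illustrated with a simple Beer–Lambert estimate, A = 1 − exp(−αt), ignoring surface reflection. The absorption coefficients below are assumed order-of-magnitude values chosen only to contrast the direct- and indirect-gap regimes; they are not measurements from this work.

```python
import numpy as np

# Beer-Lambert sketch: absorbed fraction A = 1 - exp(-alpha * t), with
# reflection ignored. Alpha values are assumed, order-of-magnitude only.
ALPHA_DIRECT = 5e3      # cm^-1 near the direct gap (~1550 nm), assumed
ALPHA_INDIRECT = 50.0   # cm^-1 near the indirect gap (~1879 nm), assumed

for t_um in (535.0, 10.0, 4.1):
    t_cm = t_um * 1e-4
    a_dir = 1 - np.exp(-ALPHA_DIRECT * t_cm)
    a_ind = 1 - np.exp(-ALPHA_INDIRECT * t_cm)
    print(f"t = {t_um:6.1f} um: A_direct = {a_dir:.2f}, A_indirect = {a_ind:.3f}")
```

Even with these rough numbers, the indirect-gap absorption collapses from close to unity in the bulk wafer to a few percent in a sub-10 μm film, while the direct-gap absorption remains strong, consistent with the apparent edge shifting toward 1550 nm as the film thins.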
3.6.1. Roles of the Acid and H2O2. As shown in Tables 2−4, the acids (HF, HCl, and H2SO4) were not able to thin Ge down without H2O2, which was consistent with the reports in the literature. The H2O2 solution (30%) alone was able to etch Ge with an etching rate of 2.5 μm/h, but the surface roughness increased dramatically to 11 nm after the etching process (Figure S4). Adding acid to H2O2 increased the etching rate and improved the surface roughness. Thus, the etching was considered to be a two-step process: (1) Ge is oxidized by H2O2, and (2) the oxides are removed by H2O and acid.

Two types of defects could be seen after the wet etching: scratches and hemispherical voids. The scratches came from the original polishing traces (Figure 1b), which grew during the etching process as shown in Figure 6b. In addition to H2O2, a high-concentration H2SO4 solution such as the nanostrip-based solution (1:0:0) could also oxidize the surface, leaving scratches on the surface (Table 4). As for the hemispherical voids, these were more likely due to the O2 bubbles from the decomposition of H2O2. An O2 bubble is adsorbed on the surface of Ge, gradually oxidizing Ge into oxides. After the oxides were removed by acid, hemispherical voids were generated, as shown in Figure 6c.

To confirm this, one drop of HCl (1:1:5) was put on the surface of Ge, and a video of the optical microscopy view was taken to check how the surface was changing during the etching process. One can see that some bubbles are adsorbed on the surface and some move freely in the solution (Video 1). After 10 min of etching, the surface was cleaned with DI water, dried, and checked with the optical microscope (Figure S5). The surface roughness increased to 1.9 nm, with the scratches deepened and some voids generated on the surface. With 30 min of etching, the surface was packed with voids and the surface roughness increased dramatically (Figure S5c and S5d). It should be noted that the real etching process (35 mL) was slightly different from the one-drop etching due to the larger volume.

3.6.2. Role of the Volume Ratio. For a certain acidic H2O2 aqueous solution, say the HCl-based solution, as the ratio changed gradually from 1:1:1 to 1:1:5 to 1:1:10 to 1:1:20, the etching rate decreased (Table 3), which could be attributed to the decreasing concentrations of both H2O2 and acid. The surface etched with the high-concentration solution (1:1:1) was rough, which could be due to more bubbles generated under a higher decomposition rate of H2O2 (Video 2). As the concentrations of both acid and H2O2 decreased (ratio to 1:1:5), the surface roughness also decreased. However, as the concentration continued to drop to the ratios of 1:1:10 and 1:1:20, the surface roughness increased. A moderate concentration of the acid and H2O2 (such as HCl 1:1:5) might be preferred to realize a balance between the oxidation process (H2O2) and the oxide removal process (acid).

3.6.3. Role of Different Acids. When we compared the functions of HCl and HF in the wet etching, at the same ratio of 1:1:1, the HF-based solution reached a much higher etching rate. Therefore, the overall etching rate was controlled by the oxide removal rate for the HCl-based solution. A moderate oxide removal rate might favor a lower surface roughness, considering the long-etch result for HCl (1:1:5).
HCl-based and nanostrip-based solutions exhibited similar etching rates for the ratios of 1:1:5, 1:1:10, and 1:1:20. The nanostrip-based solution showed lower surface roughness after 24 h of etching compared with the HCl-based solution at the same ratio. However, the thin film prepared by the nanostrip (1:1:10) had a much higher surface roughness (70 nm) than that of HCl (1:1:5), which might be due to the poor diffusion of H2SO4 in the solution. This was confirmed by the result that agitation could improve the surface roughness for the nanostrip (1:1:10)-prepared thin film (Figure 3a,b,d,e). However, agitation did not improve the surface roughness etched by HCl (1:1:5) (Figure S3e and S3f), which demonstrated a good dispersion of ions in the HCl-based solution. This also indicated that the passivation by Cl⁻ on the surface of Ge may be helpful for a uniform Ge etching process.

3.6.4. Benefit of the Double-Sided Etching Setup. The double-sided etching could decrease the surface roughness after the etching process (Figure S3a, S3b, S3c, and S3d) because the bubbles were observed to attach to the Teflon stand surfaces, which decreased the nonuniformity from the bubbles on the Ge surfaces (Figure S2).

4.1. Conclusions. The nanostrip (1:1:10) and HCl (1:1:5) solutions were able to wet-etch 535 μm-thick bulk-Ge substrates to 9.2 and 4.1 μm Ge films, respectively, which were the thinnest Ge films obtained from bulk-Ge via a wet etching method to the best of our knowledge. The corresponding RMS surface roughness for the HCl-based solution-prepared thin film was 10 nm. The low threading dislocation density of 6000−7000 cm⁻² was maintained in the process of wet etching without introducing extra defects. The good quality of the starting bulk-Ge was preserved after the etching process according to the HRXRD results. The etching mechanism and its implications were also thoroughly examined and discussed. This approach offers a cost-effective and convenient solution for precise Ge substrate thinning, making it suitable for various applications, including multijunction solar cells. Additionally, it facilitates the preparation of sub-10 μm-thick Ge thin films, thereby enabling further investigations into low-defect density Ge-based devices including photodetectors, LEDs, and lasers.

4.2. Future Work. Ge will be bonded on a substrate and undergo the wet etching thinning process. A polishing process may also be applied to a bonded Ge thin film on a handle substrate to obtain a lower surface roughness for future device (LEDs and lasers) fabrication.

Supporting Information: Morphology of the 4 h HF (1:1:10)-etched sample; photo of the Ge sample standing on a Teflon stand in the solution; optical images and 3D optical images of 24 h HCl (1:1:5)-etched samples with single-sided etching, double-sided etching, and double-sided etching with agitation; and morphologies of the 24 h H2O2 solution (30%)-etched sample and the one-drop HCl (1:1:5)-etched sample (PDF). Bubbles adsorbed on the surface and some moving freely in the solution (MP4). Bubbles clearly seen in HCl (1:1:1) during the etching process (MP4).

Figure 1. Optical images of (a) unetched virgin Ge wafer surface and 3D optical images of (b) unetched sample with Sq = 1.6 nm.

Table 2. Optical Images, 3D Optical Images, Surface Roughness, and Thickness Removed under 25 min Etching from the HF-Based Solution with Different Ratios, Scale Bar = 500 μm
Figure 6. Illustration of Ge thinning mechanisms with acid and H2O2 solutions. (a) Ge cannot be thinned by a diluted acid alone, (b) scratch evolution with a diluted H2O2 and acid solution, and (c) bubble-induced void formation in a diluted H2O2 and acid solution.

Table 1. Existing Studies for Ge Thin Films from Bulk-Ge via Wet Etching

Table 5. Key Results and Comparison

Figure 5. Absorbance of Ge with different thicknesses.
2023-12-16T16:35:01.364Z
2023-12-12T00:00:00.000
{ "year": 2023, "sha1": "f0fac5d30930241beb1f1636dcc276a6d5abd157", "oa_license": "CCBY", "oa_url": "https://pubs.acs.org/doi/pdf/10.1021/acsomega.3c07490", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ff2e4468d72cf3114391fde4fb1a69f65258948f", "s2fieldsofstudy": [ "Materials Science", "Engineering", "Physics" ], "extfieldsofstudy": [ "Medicine" ] }
134084441
pes2o/s2orc
v3-fos-license
Tree Association with Pometia and its Structure in Logging Concession of South Papua Forest

Part of the forests in Papua is still under logging concession. Pometia spp. are target species, but there is still a lack of information regarding the ecological condition of those species. Thus, the objectives of this research were to describe what tree species (small and large individuals) are associated with Pometia, how logging and soil properties influence the association, and to analyze the structure of Pometia in terms of diameter distribution. Canonical correspondence analysis (CCA) was applied to describe the association and its relationship with environmental factors (soil and litterfall). The results showed that the associations of small and large individuals of trees with both Pometia species followed different patterns, in which the small individuals had a positive association and formed a community with certain tree species. This association resulted from logging activity leading to changes in ecological conditions. Conversely, the associations between large tree species and Pometia acuminata Radlk. and Pometia pinnata J. R. Forst. & G.Forst. showed a negative pattern, and the tree species correlated with the two Pometia species differed. Among the environmental factors, the C content of litterfall had a positive correlation with large Pometia acuminata and its community. Furthermore, the small individuals of Pometia were dynamic in response to logging, in which the number of small individuals of Pometia tended to increase after logging.

Introduction

Forests of south Papua are characterized as lowland areas, and some parts of them are still designated as logging concessions (Murdjoko 2013; Kuswandi 2014; Kuswandi & Murdjoko 2015). During logging activities, ecological conditions change in terms of understory species composition, tree structure, soil conditions, and microclimatic circumstances (Arbainsyah et al. 2014). In this area, tree regeneration is natural since there is no plantation program in this logged forest; planting is only done in ex-skid trails and ex-log yards. Therefore, understory species, especially seedlings, resulted from natural processes such as seedling establishment (Murdjoko 2013). Trees as target species are selectively logged in this concession area, and the rest are left as remaining trees (Sandor & Chazdon 2014). One of the target species is from the genus Pometia. Two species that have been taxonomically identified are Pometia acuminata Radlk. and Pometia pinnata J.R.Forst. & G.Forst. (Kuswandi et al. 2015; Murdjoko et al. 2016). Both species are little studied concerning population and distribution. Hence, we singled out both species as the focus of this study. Furthermore, the population dynamics of Pometia in forests are generally an outcome of abiotic and biotic factors, where logging leads to alteration of conditions. Thus, the pattern of understory establishment would possibly differ from that in the primary forest, where conditions remain stable (Win et al. 2012). Besides that, edaphic factors like soil properties are responsible for providing a place for growth. Parent materials have contributed to soil characteristics through long-term decay processes during soil formation. Decomposition of organic matter also takes place in the soil. Furthermore, nutrients and water are stored in the soil (Khairil et al. 2014; Chiti et al. 2015). Those processes are facilitated when climatic factors create suitable circumstances.
Climatic factors in the tropical rainforest can be described as microclimate and macroclimate. The microclimate is mainly a result of the condition of the tropical rainforest itself, such as understory moisture (Cicuzza et al. 2013; Sawada et al. 2015). Biotic factors can also influence the dynamics of the stand, where flora and fauna function through either facilitation or competition (Velho et al. 2012). In the tropical rainforest, many trees grow in the same area, resulting in competition for space. Hence, the density and basal area of trees can affect the growth and even the mortality of trees. The presence of other trees can also be seen as a factor that affects trees (Ruslandi et al. 2012). Although fauna also plays an important role, the contribution of fauna was not taken into account in this study. As described above, the presence of other trees in selectively logged-over forest plays a crucial role in the dynamics of trees. Hence, this study described the trees that had an association with Pometia in unlogged and logged forest. Besides that, the structure of Pometia based on its density was compared between unlogged and logged forest. In this study, primary forest is treated as an approximation of the stand before logging, and tree descriptions in both conditions were analyzed to determine whether the descriptions based on density were similar or not. Moreover, hypothetically, after several years the logged-over forest condition in terms of stem density, species composition, and abiotic conditions will approach that of primary forest (Ding et al. 2012; Rutten et al. 2015). To analyze that condition, canonical correspondence analysis (CCA) was applied to figure out tree communities in both conditions. The CCA is useful for taking tree species, plot distribution, and environmental factors into account in an integrated calculation (Ter Braak 1986; Ter Braak 1987). Over time, alteration of the tropical rainforest can take place as a result of natural and anthropogenic causes (Huth et al. 2004). In this study, the focus is on alteration resulting from selective logging as the anthropogenic cause, which can change the condition of the tropical rainforest. The biotic and abiotic situation of this forest before selective logging will differ from the condition after selective logging. In brief, Pometia trees in both situations will certainly be affected. For that reason, to what extent Pometia trees are influenced by selective logging is interesting to investigate. The objectives of this research were: to describe which tree species (small and large individuals) are associated with Pometia, how logging and soil properties influence the association, and to analyze the structure of Pometia trees in the stands in terms of their diameter distribution. The study area has an annual rainfall ranging from about 3,000 to 4,000 mm, and daily moisture averages between 75 and 85% (Petocz 1989). Families of Dipterocarpaceae, Lauraceae, and Myrtaceae dominate this area. The study area is a logging concession of PT Tunas Timber Lestari, where the forest is bounded by the Muyu and Uwim Merah Rivers in the west and the Fly River in the east, while the northern part is mountainous and the southern part is an ex-timber concession. Data were collected in the unlogged forest, as primary forest, and in logged forest consisting of one-year, five-year, ten-year, and fifteen-year logged forest.
Methods Data collection and sampling Tree species in this forest were recorded and divided into four phases: seedlings, saplings, poles, and trees. Seedlings were typified as having a height of less than 1.5 m. Saplings were characterized as having a height greater than 1.5 m and a diameter of less than 10 cm. Poles were characterized as having a diameter between 10 and 20 cm. Trees were typified as having a diameter greater than 20 cm (Forestry Department 1989). Seedlings and saplings were then grouped as small individuals, while poles and trees were classified as large individuals. Plots were placed systematically using nested sampling, where the seedling plot was 2 m × 2 m, the sapling plot was 5 m × 5 m, the pole plot was 10 m × 10 m, and the tree plot was 20 m × 20 m. In the primary forest, 46 plots were placed, while in the logged forest 120 plots were established. Data in each plot consisted of: species of individuals, each identified according to scientific name; the number of individuals of each species per plot; and diameter at breast height (DBH), or 20 cm above the buttress, measured for individuals > 5 cm in diameter. Edaphic factors: the soil property measured was soil organic matter (SOM), while litterfall was also collected in a 1 m × 1 m subplot within each plot to analyze C content and dried weight. Analysis of soil and litterfall to obtain the estimates of C content and dried weight was done in Laboratorium Balai Pengkajian Teknologi Pertanian Yogyakarta. Data analysis To analyze tree association with Pometia, canonical correspondence analysis (CCA) was applied as a multivariate analysis (MVA) to see the distribution of tree species corresponding to the locations of both logged forest and primary forest (Figure 1 shows the location of the research in South Papua). In this analysis, the variable is the importance value index of tree species, with species as rows (m) and plots as columns (n), expressed as an m × n matrix. The environmental factors used in this analysis were abiotic factors, namely organic matter, litterfall, and time after logging; the environmental factors are defined as an m × q matrix. The computation was performed using the R statistical program version 3.3.1 with the VEGAN package (R Development Core Team 2005; Oksanen et al. 2013). After getting the CCA graph, tree species positively associated with either P. acuminata or P. pinnata were identified from the Euclidean distance between tree species in the same quadrant and either P. acuminata or P. pinnata. The Euclidean distance between them was calculated as an average, and a 95% confidence interval was applied to decide the positive association. The structure of Pometia was analyzed by plotting density against DBH class diameter, where density was the number of individuals of Pometia species per hectare (trees ha⁻¹). The relationship can be described mathematically as y = f(DBH), where y is the number of individuals of Pometia per hectare (trees ha⁻¹), DBH is diameter at breast height, and f is a function describing the relationship. The function was determined using either linear or nonlinear equations. Furthermore, bias (E) and the adjusted coefficient of determination (R²adj) were used as criteria to choose the best equation.
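As an illustration of this model-selection step, the sketch below fits a candidate power equation (the form later reported as significant in Table 3) and computes the two criteria, E and R²adj. This is a minimal sketch, not the authors' code: the power-law form, the starting values, the function names, and the example data are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def power_model(dbh, a, b):
    # Candidate power equation y = a * DBH^b for the reverse J-shaped curve
    return a * np.power(dbh, b)

def fit_structure(dbh_mid, density):
    """Fit density (trees ha^-1) against class-diameter midpoints and return
    the fitted parameters plus the two selection criteria, E and R^2adj."""
    params, _ = curve_fit(power_model, dbh_mid, density, p0=(100.0, -1.0))
    resid = density - power_model(dbh_mid, *params)
    bias_e = resid.mean()                          # E: mean residual
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((density - density.mean()) ** 2)
    n, k = len(density), 2                         # observations, parameters
    r2 = 1.0 - ss_res / ss_tot
    r2_adj = 1.0 - (1.0 - r2) * (n - 1) / (n - k - 1)
    return params, bias_e, r2_adj

# Hypothetical class-diameter midpoints (cm) and densities (trees ha^-1)
dbh = np.array([7.5, 12.5, 17.5, 22.5, 27.5, 32.5])
dens = np.array([120.0, 60.0, 35.0, 20.0, 12.0, 8.0])
print(fit_structure(dbh, dens))
```

Under this reading, the equation with bias closest to zero and the highest R²adj would be retained for each forest type.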
Results and Discussion Association between Pometia and tree species in unlogged and logged forest The tree species in both unlogged and logged forests showed a pattern of species association. This research grouped tree species associations by size: seedlings and saplings as small individuals, and poles and trees as large individuals. In this research, 176 tree species were recorded in both unlogged and logged forest. Of these, 159 tree species were categorized as small individuals and 127 tree species were classified as large individuals (Appendix 1). The species of Pometia were P. acuminata and P. pinnata. By means of CCA, tree species and edaphic factors were plotted in Figure 2. The associations were based on the distance between individuals of other species and both Pometia acuminata and Pometia pinnata. For small individuals (Figure 2A), a total of 24.5% of the variance was explained by both axes, in which the first axis (CCA1) accounted for 13.4% and the second axis (CCA2) for 11.1%. The tree species were distributed across the four quadrants. The small individuals of both Pometia species were in the lower left quadrant. The tree species that had a positive association with P. acuminata (the blue boxes with dashed line) numbered 29, and the tree species that had a positive association with P. pinnata (the blue boxes with solid line) numbered 7 (Table 1). The Euclidean distances of those species were below 2.18 with P. pinnata. The large individuals of tree species positively associated with P. acuminata and P. pinnata (Figure 2B) were distributed in opposite directions, in the upper left quadrant and lower right quadrant, respectively. Both axes of the CCA explained 34.8% of the variation, in which the first axis showed 17.6% and the second axis 17.2%. The tree species close to P. acuminata (the blue boxes with dashed line) numbered 13 (Table 2). Those tree species had Euclidean distances under 0.99 with P. acuminata. Moreover, 47 tree species (the black boxes with solid line) had a positive association with P. pinnata (Table 2). The Euclidean distances of those species were less than 1.33 compared with P. pinnata. Perturbation of edaphic condition after logging Soil and litterfall conditions after logging tended to move in the opposite direction from the period of logged forest (Figure 2A and Figure 2B). SOM, which symbolizes soil organic matter (%), and LF_DW, which denotes dried weight (g), pointed to the upper right quadrant, while time, the period since logging, pointed to the lower left quadrant. On the other hand, LF_C, the C content of litterfall (%), pointed to the upper left quadrant. A longer period since logging led to a gradual decrease in SOM and the dried weight of litterfall. In contrast, the C content of litterfall tended to increase steadily over that period.
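The quadrant-plus-distance criterion behind the association counts above can be expressed as a small computation on the CCA species scores. The sketch below shows one plausible reading of the rule; the exact 95% confidence-interval cutoff the authors applied is not fully specified, so the upper bound on the mean distance, along with all names and example coordinates, is an assumption.

```python
import numpy as np

def positively_associated(names, species_xy, focal_xy):
    """Species in the same CCA quadrant as the focal Pometia whose Euclidean
    distance to it falls within a 95% confidence bound on the mean distance."""
    same_quad = [(n, xy) for n, xy in zip(names, species_xy)
                 if np.sign(xy[0]) == np.sign(focal_xy[0])
                 and np.sign(xy[1]) == np.sign(focal_xy[1])]
    dists = np.array([np.linalg.norm(xy - focal_xy) for _, xy in same_quad])
    # Upper 95% bound on the mean distance (normal approximation)
    bound = dists.mean() + 1.96 * dists.std(ddof=1) / np.sqrt(len(dists))
    return [n for (n, _), d in zip(same_quad, dists) if d <= bound]

# Hypothetical species scores on the first two CCA axes
names = ["sp_a", "sp_b", "sp_c", "sp_d"]
scores = [np.array([-0.5, -0.4]), np.array([-1.9, -1.6]),
          np.array([-0.7, -0.2]), np.array([1.2, 0.8])]
print(positively_associated(names, scores, np.array([-0.6, -0.5])))
```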
Structure of Pometia in unlogged and logged forest The individuals of Pometia in the unlogged forest were distributed from small class diameters to large class diameters (green bar). In the logged forest, individuals were absent from certain diameter classes. In general, the number of individuals of Pometia in unlogged forest differed from that in logged forest. In the one-year logged forest, the number of individuals of Pometia declined, especially the small individuals. Afterward, the number of small individuals in the five- and ten-year logged forests rose about fourfold compared to the unlogged forest. Then, in the fifteen-year logged forest, individuals with a diameter below 10 cm were absent, while individuals with a diameter between 10 cm and 20 cm were present, at about 60 ind ha⁻¹ and 20 ind ha⁻¹, respectively. From Table 3, only the equation for the one-year logged forest was not significant (P > 0.05), while the other four power equations were significant (P < 0.05). Therefore, the equations were used to obtain patterns of the structure of Pometia. (Table 1: Small individuals of tree species that had a positive association with both Pometia acuminata Radlk. and Pometia pinnata J. R. Forst. & G. Forst., in which 29 tree species were associated with P. acuminata and 7 tree species with P. pinnata.) In general, the structure of Pometia followed a reverse J-shaped distribution from small individuals with a diameter below 5 cm (unlogged forest, five-year logged forest, and ten-year logged forest) up to a diameter between 10 cm and 20 cm. The highest number of small individuals of Pometia occurred ten years after logging, followed by the number of individuals of Pometia in the five-year logged forest and the unlogged forest. Afterward, the number of individuals of Pometia was seemingly similar for all forest types from class diameter 20−24 cm to 45−49 cm; only fifteen years after logging were individuals present in class diameters up to 55−59 cm. Environmental alteration, tree communities and Pometia structure during post-selective logging The pattern of association was different between small and large individuals. The small individuals of P. acuminata and P. pinnata tended to be close to each other, along with other tree species that had a positive association (Figure 2A). In contrast, the large individuals of Pometia themselves were not distributed in the same quadrant (Figure 2B). This suggests that the large individuals of P. acuminata and P. pinnata showed a negative association. In this forest, P. acuminata and P. pinnata did not grow close together (Murdjoko et al. 2016), as they had a negative conspecific association. Therefore, the tree species associated with the two Pometia species were different. In the tropical rainforest, individuals of the same species show associations that can be either positive or negative (Nichols et al. 1999; Bagchi et al. 2010; Howe 2014; Sawada et al. 2015). Pometia had a negative association between small and large individuals in this forest (Murdjoko et al. 2016). In the logged forest, there is no plantation or enrichment program; thus, regeneration of the logged forest is a natural process (Murdjoko 2013; Kuswandi & Murdjoko 2015). Furthermore, seeds produced by mature trees were spread out over a particular area. That process was the beginning of association development in this forest. Therefore, seedlings that established after germination grew in different tree compositions. Some seedlings survived and then benefited from certain ecological circumstances, resulting in a positive association. In contrast, some seedlings were suppressed by the environment, leading to a negative association (Zambrano et al. 2014). Thus, the small individuals of both Pometia species showed a positive association, suggesting that the seedlings of both were able to share the area in which they grew. On the other hand, the tree species located in the upper right quadrant had a negative association with both Pometia species, as they had the opposite direction (Figure 2A).
The positive association of seedlings was presumably established dynamically, in that after logging the seedling composition changed as environmental factors such as light availability and soil nutrients were altered (Corrià-Ainslie et al. 2015; Toriyama et al. 2015; Shen et al. 2016). This can be seen in the structure of Pometia, which altered in response to the changed circumstances. Thus, the number of small individuals of Pometia increased five and ten years after logging (Figure 3). Later on, those small individuals had grown by fifteen years after logging. Therefore, the number of small individuals of Pometia increased during the post-logging period. Among large individuals, the two Pometia species had a negative association, bringing about certain tree species associated with either P. acuminata or P. pinnata. The association of large individuals had been established before the logging period. It can be said that the association of tree species with large individuals of either P. acuminata or P. pinnata was the original association in this tropical forest. The tree species in the upper right and lower left quadrants (Figure 2B) were not associated with large individuals of either P. acuminata or P. pinnata. The change in edaphic factors as a result of logging did not affect the association, since soil organic matter and the amount of litterfall pointed toward the upper right quadrant (Figure 2B). It is presumed that the edaphic change in this logged forest probably affected the growth of both Pometia species and of the other tree species that had a positive association with one of them. Therefore, research on the dynamics of individuals after logging would be necessary to find out the effect of logging on remnant trees, especially Pometia. In general, the distribution of both Pometia species had a pattern similar to the distribution of individuals in tropical rainforests, in which small individuals are more abundant than large individuals (Murdjoko 2013; Kuswandi & Murdjoko 2015). As a result, natural regeneration in this forest occurred continuously. However, there is no positive conspecific association between small and large individuals of Pometia (Murdjoko et al. 2016). As a consequence, this can be a consideration for enrichment planting programs: artificial planting of Pometia should be close to the tree species that have a positive association with Pometia. The structure of Pometia individuals showed different numbers between the unlogged and logged forest conditions (Figure 3). The number of small individuals (diameter less than 20 cm) was higher after logging, especially five and ten years after logging. At that time, seedling establishment of Pometia benefited from the opening of canopy gaps resulting from logging activities, where irradiance could reach the understory. Most early seedling establishment in tropical rainforests requires irradiance to grow (Duah-Gyamfi et al. 2014; Goodale et al. 2014; Whitfeld et al. 2014). On the other hand, the absence of larger individuals of Pometia in the logged forest indicated that the logged forests are still recovering from the logging impact. Some studies in tropical rainforests have reported that recovery from logging effects would take more than 40 years (Gourlet-Fleury et al. 2013). Hence, based on the structure of Pometia fifteen years after logging, the condition of the logged forests has not recuperated. Conclusion The association of small and large individuals of trees with both P. acuminata and P.
pinnata showed different patterns, in which the small individuals had a positive association. The small individuals of P. acuminata and P. pinnata tended to grow close together; hence, the small tree species positively associated with the two Pometia species were similar. In contrast, the associations of large tree species with P. acuminata and with P. pinnata were different, and the association between the two species showed a negative pattern. Thus, the tree species correlated with each Pometia species were different. The different pattern among small individuals was a result of the logging impact, in which the ecological circumstances changed, resulting in an alteration of the microclimate. Of the environmental factors, only the C content of litterfall had a positive correlation with large P. acuminata and its community. Based on the distribution of individuals of Pometia, the small individuals were dynamic in response to logging, in that the number of small individuals of Pometia tended to increase after logging.
NodeSRT: A Selective Regression Testing Tool for Node.js Application Node.js is one of the most popular frameworks for building web applications. As software systems mature, the cost of running their entire regression test suite can become significant. Selective Regression Testing (SRT) is a technique that executes only a subset of the regression test suite so that software failures can be detected more efficiently. Previous SRT studies mainly focused on standard desktop applications. Node.js applications are considered hard targets for test reduction because of Node's asynchronous, event-driven programming model and because JavaScript is a dynamic programming language. In this paper, we present NodeSRT, a Selective Regression Testing framework for Node.js applications. By performing static and dynamic analysis, NodeSRT identifies the relationship between changed methods and tests, then reduces the regression test suite to only the tests that are affected by a change, improving the execution time of the regression test suite. To evaluate our selection technique, we applied NodeSRT to two open-source projects, Uppy and Simorgh, and compared our approach with the retest-all strategy and the current industry-standard SRT technique, Jest OnlyChange. The results demonstrate that NodeSRT correctly selects affected tests based on changes and is 250% faster and 450% more precise than Jest OnlyChange. I. INTRODUCTION With the continuous growth of web applications, Node.js has become one of the most popular frameworks for web application development [1]. For critical online services, performing regression testing and integration testing is important. However, since JavaScript is a loosely typed, dynamic language, test selection on JavaScript projects is hard. Besides, modern web applications are usually composed of more than one component; running unit tests alone does not judge the overall behaviour of the web application [2]. There are two phases involved in SRT. The first phase is to select tests based on the test dependency graph and the changes. The second phase is to run the selected tests. Test selection techniques operate at four levels of granularity: statement, method, file, and module. The most common two are method level and file level. File-level analysis builds a relationship between tests and files in the system and selects tests that reflect changed files. Method-level analysis builds a relationship between tests and methods, then selects tests that are affected by changed methods. Since method-level selection is more complicated than file-level selection, file-level selection runs faster in phase one. However, file-level selection selects more tests than needed; it is therefore less precise than method-level selection and runs slower in phase two [3]. Jest OnlyChange is the current industry-standard SRT technique, operating at file-level granularity. It reduces the tests executed without skipping tests that might expose failures, and it is the most lightweight approach. Although fast, this approach may not be precise enough for some test suites. Therefore, our research starts from a question: "Can we find a more effective test selection technique for Node.js applications?" To evaluate the effectiveness of SRT techniques, Rothermel proposed four metrics: inclusiveness, precision, efficiency, and generality [4]. Inclusiveness measures the extent to which an SRT technique chooses tests that are affected by the change.
Precision measures the ability of an SRT technique to omit tests that are not affected by the change. Efficiency measures the time and space required. Generality measures the technique's ability to function in a comprehensive and practical range of situations. We say a selection technique is safe if it achieves 100% inclusiveness. Our intuition for reducing the total running time is to increase the granularity of the selection technique to improve precision, so that fewer tests need to run when the regression suite is executed. We also evaluated our selection technique by performing an empirical study on two open-source Node.js projects with different sizes and code coverage. II. APPROACH OVERVIEW To mitigate the challenge of performing test selection on JavaScript programs, our tool uses a combination of static and dynamic analysis, then performs a modification-based test selection algorithm at the method level. The modification-based approach works by analyzing modified code entities to select tests based on modifications. This strategy can guarantee safety while being relatively simple. NodeSRT consists of five parts: dynamic analysis, static analysis, change analysis, test selector, and selected test runner. The Static Analysis module performs static analysis on the original codebase to generate a file dependency graph for each test by identifying and resolving require and import statements in JavaScript files. The Dynamic Analysis module generates a dynamic call graph by injecting code into the generated AST of the original program. NodeSRT uses HTTP requests to collect runtime information about the application. Since web applications usually consist of different modules, code in different modules may run in different runtime environments; for example, server-side code runs in the Node.js environment, while client-side code runs in the browser environment. The code injector in the Dynamic Analysis module injects code that sends logging messages to a logging server, which collects all logging messages and generates the call graph in JSON format. The runtime information we collect includes the function name, file name, and the number of parameters. These entities are used to create the dynamic call graph. When the codebase becomes large, code analysis results should be stored in a database to ensure performance [5], [6]. The Change Analysis module compares the ASTs of the changed files, then generates a list of changes in JSON format. Since NodeSRT uses function-level granularity, the change analysis module finds the closest enclosing function name of each differing AST node based on its ancestors. If the function is anonymous, NodeSRT generates a unique name for it based on its parent function name, class name, and file name. This approach is similar to Chianti's handling of anonymous classes in Java [7]. With the call graph, file dependency graph, and JSON representation of the changes, the Test Selector selects tests based on the list of changes and the call graph. To handle changes outside functions, the test selector selects tests that depend on the changed files based on the file dependency graph to guarantee safety. Finally, the Test Runner runs the selected tests. Our tool can also be used to select end-to-end tests, since NodeSRT uses HTTP requests to record runtime information and build the dynamic call graph.
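To make the selection phase concrete, the following is a minimal sketch of a modification-based, method-level selector of the kind described above: it walks the call graph backwards from changed functions and falls back to file-level dependencies for changes outside any function. It is an illustration of the technique, not NodeSRT's actual implementation; every name and data shape here is assumed.

```python
from collections import deque

def select_tests(call_graph, file_deps, changed_funcs, changed_files, all_tests):
    """Modification-based, method-level selection: a test is picked if it can
    reach a changed function in the call graph; changes outside any function
    fall back to file-level dependencies, which keeps the selection safe."""
    # Invert the call graph: for each function, record its callers.
    callers = {}
    for caller, callees in call_graph.items():
        for callee in callees:
            callers.setdefault(callee, set()).add(caller)

    # Walk backwards from every changed function to all transitive callers.
    affected = set(changed_funcs)
    queue = deque(changed_funcs)
    while queue:
        fn = queue.popleft()
        for caller in callers.get(fn, ()):
            if caller not in affected:
                affected.add(caller)
                queue.append(caller)

    selected = {t for t in all_tests if t in affected}
    # File-level safety net for changes outside functions.
    for t in all_tests:
        if file_deps.get(t, set()) & set(changed_files):
            selected.add(t)
    return selected

# Hypothetical graphs: tests call application functions.
call_graph = {"test_upload": {"upload"}, "upload": {"chunk"},
              "test_render": {"render"}}
file_deps = {"test_upload": {"upload.js"}, "test_render": {"render.js"}}
print(select_tests(call_graph, file_deps, {"chunk"}, set(),
                   ["test_upload", "test_render"]))
```

In the toy example, editing chunk selects test_upload through the upload call chain while omitting test_render; an out-of-function edit to a file that a test depends on would pull that test back in through the safety net.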
III. EMPIRICAL EVALUATION To evaluate NodeSRT, we performed an empirical evaluation on two open-source Node.js projects. We chose these two projects because our empirical study requires systems that are well maintained and have a reasonable number of tests. Using the method mentioned in [8], we selected Uppy and Simorgh. Uppy has 112k lines of code, 216 unit tests, and 9 end-to-end tests, and achieves 20% code coverage. Simorgh has 698k lines of code; it includes 2801 unit tests and achieves 97% code coverage. The experiment ran on an AWS cloud Linux server with a 4-core x86-64 CPU and 16 GB of RAM. Because network and computing speeds vary, we report the percentage of tests selected and the percentage of total SRT process running time. We performed test selection on a total of 588 commits from the two subjects. For each commit, we generated a diff patch from the previous commit to serve as input to NodeSRT. Table I compares NodeSRT and Jest on the average number of selected tests and total running time. As the table shows, given the file dependency graph and call graph, the selection step for both projects takes less than 5% of the total running time. Compared to Jest OnlyChange, NodeSRT selects far fewer tests for both projects: 1.5 times fewer tests in Uppy and 5.3 times fewer tests in Simorgh. Although NodeSRT selects fewer tests in Uppy, Jest OnlyChange runs faster than NodeSRT there, because Jest OnlyChange makes use of Jest's own jest-haste-map module and the customized file system module watchman. Future work on NodeSRT can address this. For the project with high code coverage, Simorgh, NodeSRT selected fewer tests and ran 2.7 times faster. IV. RELATED WORK AND CONCLUSION Several techniques have been proposed for standard desktop applications. These techniques first classify programs into different entities such as functions, types, variables, and macros, then utilize comprehensive static and dynamic analysis to build entity-test relationships to reduce the test suite (e.g., [4], [5], [6], [7], [9], [10], [11], [12]). Among studies focusing on JavaScript applications, Mutandis is a generic mutation testing approach for JavaScript that guides the mutation generation process [13]. It works by leveraging static and dynamic program analysis to guide the mutation generation process a priori towards parts of the code that are error-prone or likely to influence the program's output. Tochal is a DOM-sensitive change impact analysis tool for JavaScript. Through dynamic code injection and static analysis, it incorporates a ranking algorithm for indicating the importance of each entity in the impact set. This approach focused on frontend DOM changes rather than frontend-backend interaction [14]. Conclusion. We present NodeSRT, a novel approach for performing SRT on Node.js applications at the method level. Using a change-based selection technique, obtaining function call relationships with dynamic analysis, and collecting file dependencies with static analysis, NodeSRT reduces the regression tests with short running time, high inclusiveness, and high precision. Empirical evaluation showed that our approach outperformed Jest OnlyChange in precision and total running time. Future work includes integrating our technique with unit testing frameworks to improve its performance further.
Unnecessary Frills: Communality as a Nice (But Expendable) Trait in Leaders Although leader role expectations appear to have become relatively more compatible with stereotypically feminine attributes like empathy, women continue to be highly underrepresented in leadership roles. We posit that one reason for this disparity is that, whereas stereotypically feminine traits are appreciated as nice “add-ons” for leaders, it is stereotypically masculine attributes that are valued as the defining qualities of the leader role, especially by men (who are often the gatekeepers to these roles). We assessed men’s and women’s idea of a great leader with a focus on gendered attributes in two studies using different methodologies. In Study 1, we employed a novel paradigm in which participants were asked to design their “ideal leader” to examine the potential trade-off between leadership characteristics that were more stereotypically masculine (i.e., agency) and feminine (i.e., communality). Results showed that communality was valued in leaders only after meeting the more stereotypically masculine requirements of the role (i.e., competence and assertiveness), and that men in particular preferred leaders who were more competent (vs. communal), whereas women desired leaders who kept negative stereotypically masculine traits in check (e.g., arrogance). In Study 2, we conducted an experiment to examine men’s and women’s beliefs about the traits that would be important to help them personally succeed in a randomly assigned leader (vs. assistant) role, allowing us to draw a causal link between roles and trait importance. We found that both men and women viewed agentic traits as more important than communal traits to be a successful leader. Together, both studies make a valuable contribution to the social psychological literature on gender stereotyping and bias against female leaders and may illuminate the continued scarcity of women at the very top of organizations, broadly construed. INTRODUCTION It has been argued that stereotypically feminine traits like communality will define 21st century leaders, and women and men with these attributes will rule the future (Gerzema and D'Antonio, 2013). However, despite the embracing of so-called feminine management, women continue to be highly underrepresented in top executive roles (Catalyst, 2018), and bias against female leaders persists (Eagly and Heilman, 2016;Gupta et al., 2018). We posit that one reason for this disparity is that, whereas communality is appreciated as a nice "add-on" for leaders, it is stereotypically masculine attributes related to agency, such as competence and assertiveness, that are valued as the defining qualities of the leader role, especially by men (who are often the gatekeepers to these roles). We examined this premise in two studies in which we assessed men's and women's idea of a great leader with a focus on gendered attributes. Although leadership is associated with masculine stereotypes (Schein, 1973;Koenig et al., 2011), this association appears to have weakened somewhat over time (Duehr and Bono, 2006). For example, a meta-analysis that examined the extent to which stereotypes of leaders aligned with stereotypes of men revealed that the masculine construal of leadership decreased significantly between the early 1970s and the late 2000s, as people increasingly associate leadership with more feminine relational qualities (Koenig et al., 2011). 
One reason for this change is the slow but noticeable surge during this period in the number of management roles occupied by women. It is possible that the rising presence of women in management roles may have reduced the tendency to associate leadership with men, given that women tend to lead differently than men (Eagly et al., 2003), and exposure to counterstereotypic individuals tends to reduce implicit biases (Dasgupta and Asgari, 2004; Beaman et al., 2009). Another reason why leadership perceptions may over time have become more androgynous (i.e., involving more stereotypically feminine in addition to stereotypically masculine qualities) is that the organizational hierarchy has flattened over time (Bass, 1999) and has come to require less directive, top-down approaches to leadership (Eagly, 2007; Gerzema and D'Antonio, 2013). Effective leadership, which can be highly contextual (Bass, 1999), is thought to be generally participative and transformational (Bass and Riggio, 2006). Transformational leadership styles, which involve motivating, stimulating, and inspiring followers (Burns, 1978; Mhatre and Riggio, 2014), are associated with increased morale and performance at various organizational levels (Wang et al., 2011). They are also associated with female leaders somewhat more so than with male leaders (Eagly et al., 2003; Dezso and Ross, 2011; Vinkenburg et al., 2011) and tend to be viewed as relatively more feminine than autocratic or transactional styles (Stempel et al., 2015). Indeed, there is evidence that transformational leaders tend to blend masculinity and femininity and are overall more androgynous (Kark et al., 2012). Management scholars thus recognize that effective leadership combines both agency-related and communal behaviors and traits (Bass, 1999), which are consistently associated with men and women, respectively (Burgess and Borgida, 1999; Prentice and Carranza, 2002). Given a trend toward ever more collaborative work environments in the digital age (Bersin et al., 2017), traits and behaviors typically associated with women, such as cooperation and sensitivity to others' needs (Prentice and Carranza, 2002), are sometimes praised as the future of leadership (Gerzema and D'Antonio, 2013). However, even though leader role expectations may be relatively more feminine today than 40 or 50 years ago (Koenig et al., 2011), women who aspire to top leadership positions continue to be at a considerable disadvantage. For example, women tend to be overrepresented in support and administrative roles (Blau et al., 2013; Hegewisch and Hartmann, 2014), but continue to occupy less than half of management positions: they comprised about 34.1% of general and operations managers in 2017 according to the Current Population Survey (U.S. Department of Labor Bureau of Labor Statistics, 2018). The proportion of women is lower in executive positions that confer major decision-making power: they occupied only 28% of chief executive roles in 2017 (U.S. Department of Labor Bureau of Labor Statistics, 2018), and a mere 5% when considering S&P 500 companies, the largest, most profitable firms in the United States (Catalyst, 2018).
Although these patterns are likely to result from a variety of factors, including gender differences in interests, goals, and aspirations (Reskin et al., 1999; Diekman and Eagly, 2008; Schneider et al., 2016), there is substantial evidence that at least some of this disparity is due to gender bias (Rudman, 1998; Heilman et al., 2004; Heilman and Okimoto, 2007; Phelan et al., 2008; Rudman et al., 2012). Bias against female leaders is likely multiply determined. On one hand, it may reflect social conservatism and antifeminist attitudes (Forsyth et al., 1997; Rudman and Kilianski, 2000; Hoyt and Simon, 2016) and a tendency to maintain the traditional status quo where women serve primarily as caretakers (Rudman et al., 2012). For example, different attitudes toward the role of women in society predict liberals' and conservatives' disparate levels of support for female job candidates (Hoyt, 2012). On the other hand, bias against female leaders has also been connected to the perceived relative incongruity (Eagly and Karau, 2002) or lack of fit (Heilman, 1983, 2001, 2012) between the traits typically associated with women and the traditional female gender role and the traits ascribed to the leader role. This low perceived correspondence between feminine stereotypes and leader roles makes women appear unsuitable for authority positions. Moreover, when women demonstrate the kinds of attributes that are deemed requisite for effective leadership (e.g., agency), they sometimes elicit penalties for violating gender role expectations (Heilman and Okimoto, 2007; Rudman et al., 2012; Williams and Tiedens, 2016). The effect of gender stereotypes can make it difficult for women to thrive in leadership roles (Vial et al., 2016), and can compound over time and slow women's advancement in organizational hierarchies (Agars, 2004). The persistence of bias against female leaders (Eagly and Heilman, 2016) appears in direct conflict with the increased valorization of more androgynous leadership styles that draw from communal, traditionally feminine traits and behaviors (Eagly, 2007; Gerzema and D'Antonio, 2013). This apparent contradiction is the focus of the current investigation, in which we test the idea that communal traits are appreciated in leaders primarily as an accessory or complement to other, more agentic qualities that tend to be viewed as more essential and defining of the leader role. We examined the trade-off that people make when thinking about agency and communality in relation to the leader role, testing the prediction that communal traits are valued in leaders only after sufficient levels of agentic (i.e., more stereotypically masculine) traits have been reached. As such, even when leader role expectations also comprise communal traits (Koenig et al., 2011), agentic traits might still be considered the hallmark of leadership, necessary and sufficient to lead. Communal attributes, in contrast, may be appreciated as nice but relatively more superfluous complements for leaders. Moreover, even as more communal leadership styles may be increasingly appreciated (Eagly, 2007; Gerzema and D'Antonio, 2013), we propose that the people who most value them happen to be women, who are typically not the gatekeepers to top organizational positions of prestige and authority. There is meta-analytic evidence that the masculine leadership construal tends to be stronger for male versus female participants (Boyce and Herd, 2003; Koenig et al., 2011).
Furthermore, compared to women, men evaluate female leaders as less ambitious, competent, intelligent, etc. (Deal and Stevenson, 1998;Vial et al., 2018), and are less likely to select female job candidates (Gorman, 2005;Bosak and Sczesny, 2011;Koch et al., 2015). Thus, the concentration of men in top decision-making roles such as corporate boards and chief executive offices (Catalyst, 2018) may be self-sustaining because men in particular tend to devalue more communal styles of leadership (Eagly et al., 1992;Ayman et al., 2009). In contrast, given that communal traits are more strongly associated with their gender in-group (Burgess and Borgida, 1999;Prentice and Carranza, 2002), women may show more of an appreciation for these traits compared to men (e.g., Dovidio and Gaertner, 1993). In the current studies, we compared men's and women's preferences for communality and agency in leaders. As stated earlier, the underrepresentation of women in top leadership roles is likely to stem not only from bias against female leaders (Heilman and Okimoto, 2007) but also from women's relatively low interest in pursuing these roles in comparison to men (Diekman and Eagly, 2008;Lawless and Fox, 2010;Schneider et al., 2016). Stereotypes linking leadership with men and communal roles with women might have a negative impact on women's sense of belongingness and self-efficacy in leadership roles (Hoyt and Blascovich, 2010;Hoyt and Simon, 2011). For example, women report lower desire to pursue leadership roles after being exposed to stereotypic media images (Simon and Hoyt, 2013). If communal traits are overall seen as "unnecessary frills" in leaders, as we propose, and if women place higher importance on being communal when they occupy a leadership role relative to men, such mismatch might discourage women from pursuing top leadership positions (Heilman, 2001). Thus, in addition to investigating whether men and women value agency and communality differently in leaders, we also considered how much they would personally value such traits if they were to occupy a leadership role. OVERVIEW OF RESEARCH We conducted two studies to assess men's and women's idea of a great leader with a focus on gendered attributes. In Study 1, we examined the attributes that men and women viewed as requisite (vs. superfluous) for ideal leaders. In Study 2, we conducted an experiment to examine men's and women's beliefs about the traits that would be important to help them personally succeed in a randomly assigned leader (vs. assistant) role. In both studies, we measured trait dimensions related to gender roles and leadership including competence and assertiveness (i.e., agency) as well as communality. Agency and communality represent two basic dimensions of person perception and judgments of the self, others, and groups (Fiske et al., 2007;Abele et al., 2016). Agency is typically perceived as more self-profitable than communality, which is more often viewed as benefitting others and, as a result, communality tends to be more valued in others versus the self, whereas the reverse is true for agency (Abele and Wojciszke, 2007). Thus, it is possible that people value communality relatively more when evaluating others (vs. the self) in leadership roles. Here, we investigated how much men and women valued agency and communality when thinking about another in a leader role (Study 1) and when thinking of the self as a leader (Study 2). 
Study 1 examined the notion that communal attributes are viewed as highly desirable in leaders, but only after more basic requirements have been met, which map strongly onto stereotypical masculinity (i.e., agency). Past research has examined the extent to which various attributes were seen as relevant to the leader role, either generally characteristic of leaders or typical of successful leaders (e.g., Schein, 1973; Powell and Butterfield, 1979; Brenner et al., 1989; Boyce and Herd, 2003; Sczesny, 2003; Sczesny et al., 2004; Fischbach et al., 2015). However, in those studies, participants rated traits one at a time and in absolute terms (e.g., "please rate each word or phrase in terms of how characteristic it is," on a 5-point scale; Brenner et al., 1989, p. 664). These absolute ratings may mask the potential trade-offs between different traits when evaluating a specific person, whose traits come in bundles (Li et al., 2002). Specifically, the importance of communal characteristics for leaders may depend on levels of other traits (Li et al., 2002, 2011; Li and Kenrick, 2006), and participants considering such traits in isolation might assume acceptable levels on other desirable attributes (e.g., agency). For example, although communality might make someone desirable as a leader, communality might be considered irrelevant if a leader is insufficiently agentic. We investigated these potential trade-offs in Study 1. In addition to agency and communality, we included traits that were negative in valence and stereotypically masculine (e.g., arrogant) and feminine (e.g., emotional) in content. Past investigations suggest that negative masculine stereotypes, which map onto a "dominance" dimension and are related to status attainment (Cheng et al., 2013), are strongly proscribed for women (Prentice and Carranza, 2002; Hess et al., 2005). Moreover, a number of investigations have revealed that dominance perceptions play a crucial role in bias against female leaders, who are often viewed as domineering and controlling (Rudman and Glick, 1999; see also Williams and Tiedens, 2016). Similarly, a recent review suggests that negative feminine stereotypes about the presumed greater emotionality of women relative to men (Shields, 2013) are closely linked to bias against female leaders (Brescoll, 2016). For example, men in general tend to be described as more similar to successful managers in emotion expression than are women in general (Fischbach et al., 2015). Thus, in Study 1, we examined participants' interest in minimizing these negative traits when designing their ideal leader. In Study 2, we examined whether people's leader role expectations differ when they think of themselves occupying that position. Many past investigations have compared perceptions of men and women in general with perceptions of successful managers (Schein, 1973; Powell and Butterfield, 1979; Heilman et al., 1995; Schein et al., 1996; Powell et al., 2002; Boyce and Herd, 2003; Duehr and Bono, 2006; Fischbach et al., 2015). Other studies have documented perceptions of successful male and female managers (Dodge et al., 1995; Heilman et al., 1995; Deal and Stevenson, 1998). We extend this prior work by directly assigning men and women to a leader role (versus an assistant role) and testing which kinds of attributes they view as important for them to be personally successful in that role.
The random assignment of men and women to a leader role allowed us to draw a causal link between occupying a leadership role and differentially valuing communality and agency. In both studies, we compared the responses of men and women, seeking to better understand how their leader-role expectations differ (Koenig et al., 2011). Past work suggests that individuals may generally prefer the kinds of attributes that are viewed as characteristic of their gender in-groups (Dovidio and Gaertner, 1993), and women compared to men have been found to possess less masculine leader-role expectations (Boyce and Herd, 2003; Koenig et al., 2011) and to value female leaders more (Kwon and Milgrom, 2010; Vial et al., 2018). Thus, we were interested in testing whether women might show higher appreciation for communal attributes in leaders in comparison to men. STUDY 1: REQUISITE AND SUPERFLUOUS TRAITS FOR IDEAL LEADERS We tested the notion that communal traits are viewed as desirable in leaders, but only after more basic requirements have been met, namely, agency. We examined participants' preferences for the kinds of traits that would characterize the ideal leader by using a methodology that was originally developed to study mate preferences (Li et al., 2002). This method essentially compares the extent to which different traits are desirable as choices become increasingly constrained, helping distinguish the attributes that are considered truly essential or fundamental in a mate (or in our case, a leader) from traits that are considered luxuries. "Luxury" traits might ultimately be superfluous if the essential attributes (or "necessities") are not met. Conceptually, traits that are viewed as necessities tend to be favored when choices are constrained. As constraints are lifted, fewer resources are devoted to traits that are considered necessities, and more resources are allocated to luxuries. This approach is apt to reveal the perceived trade-offs between more stereotypically feminine (i.e., communal) and masculine (i.e., agentic) leadership characteristics. By directly examining these trade-offs and identifying necessities and luxuries, we hope to clarify the seeming conflict between the increased valorization of more androgynous leadership styles that draw from traits and behaviors traditionally associated with women (Eagly, 2007; Gerzema and D'Antonio, 2013) and the persistence of pro-male bias (Eagly and Heilman, 2016). We predicted that compared to communal traits, agentic traits would be rated as more of a necessity for an ideal leader, or, in other words, that communality would be treated as more of a luxury than agency. We measured two facets of agency separately, namely competence and assertiveness (Abele et al., 2016). Following Li et al. (2002), we assigned participants increasingly smaller budgets that they were instructed to use to "purchase" different traits to design their ideal leader. Participants made trade-offs first between traits denoting competence and communality, and then between traits denoting assertiveness and communality. We expected that as people's budgets got smaller, they would prioritize competence and assertiveness over communality. Finally, to examine the kinds of attributes that people may find intolerable in leaders, we also included negative traits, which map onto relaxed proscriptions (Prentice and Carranza, 2002) for men (e.g., arrogant, stubborn) and women (e.g., emotional, weak).
We anticipated that participants might be especially interested in minimizing negative traits that people more commonly associate with men than with women (such as arrogant), as these traits align with the culturally prevalent idea that "power corrupts" (Kipnis, 1972; Keltner et al., 2003; Inesi et al., 2012). In contrast, negative feminine stereotypes, while generally undesirable (Prentice and Carranza, 2002), are not seen as typical of those in top positions, and thus people may be less concerned with curbing these attributes when thinking about an ideal leader. Therefore, we expected to find that participants' responses would reflect a priority to minimize negative traits more stereotypically associated with men over negative traits stereotypically associated with women. We also considered whether participants would show more of an appreciation for positive traits that are stereotypically seen as characteristic of their gender in-group than positive stereotypes of a gender out-group (e.g., Dovidio and Gaertner, 1993). Thus, we expected female participants to rate communal traits as more necessary than male participants, whereas male participants were expected to see agentic traits (competence and assertiveness) as more necessary than female participants. These predictions also align with past research suggesting that women endorse less masculine leader stereotypes than men (Boyce and Herd, 2003; Koenig et al., 2011) and are more supportive of female leaders (Kwon and Milgrom, 2010; Vial et al., 2018). Additionally, participants were expected to show less of an aversion for negative traits that are stereotypical of their gender in-group than negative stereotypes of a gender out-group; that is, we expected female participants to see it as more of a priority to reduce negative traits commonly associated with men than male participants, whereas male participants were expected to prioritize minimizing negative feminine stereotypes more so than female participants. Participants Power analysis performed with G*Power 3.1 (Faul et al., 2007) indicated the need for at least 162 participants to have adequate power (1−β = 0.80) to detect small to medium effect sizes (f = 0.175) for the main effects of budget, participant gender, and their interaction for each of three lists of traits. In total, 281 participants took part in the study via Amazon Mechanical Turk (MTurk). The study was described to potential MTurk participants (i.e., those with at least 85% approval rates) as a short survey on work-related attitudes and impressions of other people, in which participants would be asked to read some materials and answer some questions about their experiences, beliefs, and attitudes. The study took approximately 5 minutes and participants were compensated $0.55. Eight participants (2.8%) indicated that some of their answers were meant as jokes or were random. We report analyses excluding these 8 participants (n = 273; mean age = 35.94, SD = 11.73; 57.5% female; 76.2% White). One participant did not indicate gender (0.4%). Procedure Participants were asked to think about the attributes that would make someone an ideal leader. We asked them to design their ideal leader by purchasing traits from three different lists, and we gave participants a set budget of "leader dollars" that they could spend at their discretion. Each of the three lists contained 10 traits in random order, and participants could spend up to 10 dollars on each trait.
For each list of traits, participants were first asked to allocate 60 leader dollars between the 10 traits. Then, participants were asked to do this exercise again two more times, first with a budget of 40 leader dollars, and then with a budget of 20 leader dollars. All stimuli are reported in full in Appendix A. The first list of traits included five agentic/competence traits (capable, competent, confident, common sense, intelligent) and five communal traits (good-natured, sincere, tolerant, happy, trustworthy). The second list included five agentic/assertive traits (ambitious, assertive, competitive, decisive, self-reliant) and an additional five communal traits (cooperative, patient, polite, sensitive, cheerful). The third list included five negative masculine stereotypes (arrogant, controlling, rebellious, cynical, stubborn) and five negative feminine stereotypes (emotional, naïve, shy, weak, yielding), as classified by Prentice and Carranza (2002). The instructions for the third list were slightly different from the first two lists, as participants were asked to indicate how much they would pay so that their ideal leader would not possess each of the 10 negative traits. At the end of the study, all participants were asked basic demographic questions (e.g., age, race), and received a debriefing letter. In both studies, prior to debriefing, we asked participants to indicate whether any of their answers were random or meant as jokes ("yes" or "no"). We reassured participants that they would receive full compensation regardless of their answers to encourage honest responding. Analytic Strategy We first computed the proportion of each overall budget that was allocated to agency/competence versus communality, agency/assertiveness versus communality, and negative masculine versus feminine stereotypes. For the first two, we combined the amounts allocated to agentic traits (competence or assertiveness) for each budget and computed the total proportion such that higher scores indicated a larger proportion of the budget was allocated to agency (competence or assertiveness) versus communality. We followed the same procedure for the negative traits, where higher scores indicated a larger proportion of the budget allocated to eliminate negative traits stereotypically associated with men over those associated with women. As the budget expands, people allocate an increasingly smaller proportion of their extra income to necessities and spend a larger proportion of income on luxuries. In order to investigate which trait categories were seen as necessities and which were seen as luxuries, we followed Li et al.'s (2002) analytic strategy and compared participant allocations in the low budget (i.e., 20 leader dollars) with how they allocated their last 20 leader dollars. We computed the allocation of the last 20 dollars by subtracting the amount purchased in the medium budget (40 dollars) from that of the high budget (60 dollars), and then divided by 20. This strategy is similar to asking participants how they would allocate an additional 20 leader dollars after they have already spent 40. We submitted the proportion scores for the first 20 and the last 20 leader dollars as repeated measures in three separate Analysis of Variance (ANOVA) tests, one for each trait category (i.e., competence/communality, assertiveness/communality, and negative masculine/feminine stereotypes), with participant gender as between-subjects factor. 
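The necessity/luxury comparison reduces to simple arithmetic on the three allocations. Below is a minimal sketch of that computation; the function name and example allocations are hypothetical, and the agentic/communal split of the trait list is assumed for illustration.

```python
import numpy as np

def necessity_scores(alloc20, alloc40, alloc60, category_idx):
    """Proportion of the budget spent on one trait category in the low budget
    (first 20 dollars) versus in the last 20 dollars (high minus medium)."""
    first20 = alloc20[category_idx].sum() / 20.0
    last20 = (alloc60 - alloc40)[category_idx].sum() / 20.0
    return first20, last20

# Hypothetical per-trait allocations (first 5 traits agentic, last 5 communal),
# summing to 20, 40, and 60 leader dollars, respectively.
a20 = np.array([4, 3, 3, 2, 2, 2, 1, 1, 1, 1], dtype=float)
a40 = np.array([6, 5, 5, 4, 4, 4, 3, 3, 3, 3], dtype=float)
a60 = np.array([7, 6, 6, 5, 5, 7, 6, 6, 6, 6], dtype=float)
print(necessity_scores(a20, a40, a60, slice(0, 5)))  # (0.7, 0.25)
```

In this toy example, the agentic category absorbs 70% of the constrained budget but only 25% of the last 20 dollars, the signature of a necessity rather than a luxury.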
Results We examined the bivariate associations between the proportion of budgets allocated to the different sets of traits at the three budget levels. Across budgets, the proportion spent to gain competence (vs. communality) was significantly positively associated with the proportion spent to gain assertiveness (vs. communality) (correlations ranging from r = 0.47 to r = 0.39, all ps < 0.001, depending on budget). Additionally, across budgets, the proportion spent to gain competence (vs. communality) was significantly negatively associated with the proportion spent to minimize negative traits that are more stereotypically masculine (vs. feminine) (correlations ranging from r = −0.33 to r = −0.15, all ps < 0.001, depending on budget). The same pattern emerged even more strongly for the association between the proportions spent to gain assertiveness (vs. communality) and the proportions spent to minimize negative traits that are more stereotypically masculine (vs. feminine) (correlations ranging from r = −0.47 to r = −0.41, all ps < 0.001, depending on budget). In other words, these bivariate correlations suggest that a stronger preference for agency (competence or assertiveness) over communality was associated with a weaker desire to reduce negative masculine traits over negative feminine traits. (Partial correlations controlling for participant gender revealed the same patterns.) Competence Versus Communality There was a significant effect of budget for the competence/communality traits list, F(1,270) = 2780.21, p < 0.001, ηp² = 0.911, such that the difference in the proportion allocated to competence relative to communality was higher for the first 20 dollars than for the last 20 dollars (Figure 1A). As can be seen in the figure, for all three budgets, male as well as female participants spent more on competence traits than on communal traits, and this difference became larger as options became more constrained (i.e., as the budget became smaller). While men's and women's allocations were more similar for the high and medium budgets, when the budget became smaller, men's preference for competence over communality (62% vs. 38% of the budget) was stronger than women's (57% vs. 43% of the budget). In other words, the tendency to view competence as more of a necessity than communality was apparent in both men and women, and men valued competence over communality more strongly than women when choices were constrained. Assertiveness Versus Communality There was also a significant effect of budget for the assertiveness/communality traits list, F(1,270) = 1428.82, p < 0.001, ηp² = 0.841, such that the difference in the proportion allocated to assertive over communal traits was significantly higher for the first 20 dollars than for the last 20 dollars (Figure 1B). As can be seen in the figure, as the budget became smaller, participants spent slightly but reliably more on assertive traits than on communal traits. This pattern is consistent with participants viewing assertiveness as more of a necessity and communality as more of a luxury. A similar pattern emerged for the negative-trait list (Figure 1C). As can be seen in the figure, for all three budgets, male as well as female participants spent higher proportions of their budgets to minimize negative masculine stereotypes than to minimize negative feminine stereotypes, and this difference became larger as options became more constrained (i.e., as the budget became smaller).
While women's and men's allocations were more similar for the high and medium budgets, when the budget became smaller, women's interest in minimizing negative masculine stereotypes relative to negative feminine stereotypes (66% vs. 34% of the budget) was stronger than men's (57% vs. 43% of the budget). In other words, the tendency to see it as a necessity to curb negative masculine (vs. feminine) stereotypes was apparent in both men and women, and women devalued negative masculine (vs. feminine) stereotypes more strongly than men when choices were constrained.

Discussion

The goal of Study 1 was to examine the attributes that men and women view as requisite (vs. superfluous) for ideal leaders. As predicted, leader agency was seen as more of a necessity relative to leader communality, which was viewed as more of a luxury. We found that when people's budgets were constrained, both men and women were more likely to give up communality in favor of both competence and assertiveness. It is worth noting that, when participant choices were only minimally constrained (i.e., in the high budget condition), the relative preference for assertiveness over communality appeared to reverse. In other words, when they could choose rather freely, participants in this study favored a communal leader over an assertive one. Such a reversal is in line with the increased valorization of more androgynous leadership styles that draw from traditionally feminine traits and behaviors (Eagly, 2007; Gerzema and D'Antonio, 2013). However, the methodology employed clearly indicates that communal traits do not hold the same value as assertiveness in relation to idealized leadership, as communal traits were only valuable once agentic attributes had been sufficiently met.

We found that participants devoted a larger proportion of their budgets to minimizing negative masculine stereotypes, such as arrogant and controlling, than negative feminine stereotypes, such as emotional. This preoccupation with negative masculine stereotypes in particular may reflect a general view that power corrupts (Kipnis, 1972; Keltner et al., 2003; Inesi et al., 2012), as well as an attempt to keep those deleterious effects of power at bay in ideal leaders. In contrast, minimizing negative feminine stereotypes became of interest only after negative masculine stereotypes were sufficiently reduced.

Although both men and women ultimately preferred agency to communality, the results suggest that, compared to men, women prefer leaders who show more of a balance between competence and communality (whereas men more strongly favor competence), and who can keep traits like arrogance or stubbornness in check. In line with our expectation that participants would be more tolerant of negative stereotypes of their gender in-group than negative stereotypes of a gender out-group, we found that women in particular prioritized minimizing masculine negative stereotypes when thinking about an ideal leader. Men seemed more tolerant of these negative traits, which are generally seen as more typical in their gender in-group than in the gender out-group (Prentice and Carranza, 2002). Instead, men spent relatively more of their budgets to curb negative feminine stereotypes in leaders.
A potential limitation in Study 1 is that, in the absence of a qualifier, participants might have thought primarily about a male individual when designing their ideal leader, given that these roles historically have been (and continue to be) disproportionately occupied by men (Blau et al., 2013), and given a general tendency to think of men as category exemplars, as reviewed recently (Bailey et al., 2018). Rather than asking participants to design their ideal "female" or "male" leader, which might elicit socially desirable responses, we again examined which traits people think are necessary for leadership in Study 2 by having male and female participants imagine themselves in a leadership (or assistant) role, and then asking them to rate what traits they believe are important to succeed in that role.

STUDY 2: IMPORTANT TRAITS TO SUCCEED IN LEADER VS. ASSISTANT ROLES

In Study 2, we had participants imagine themselves in either a leadership or assistantship role and examined the extent to which they believed they would need to act in agentic and communal ways in order to be successful in that role. To our knowledge, the present study was the first to examine adult men's and women's beliefs about the traits they would need to be successful in a randomly assigned leader role. As such, this study is particularly well suited to establish a direct causal link between occupying a leadership role and differentially valuing agentic and communal traits. We expected that agentic traits, including competence and assertiveness, would be rated as more important to succeed in a leader role, but as less crucial for assistant roles. In contrast, we expected participants to see communal traits, such as patient and polite, as more important to be a successful assistant than a successful leader. Moreover, although previous research has shown that agency is more desirable than communality in the self (as compared to in others) (Abele and Wojciszke, 2007), we predicted that the role would influence the extent to which people find agentic traits desirable in the self. Specifically, whereas we expected that agency would take precedence over communality for participants in the leader role, we expected to find the reverse for those in the assistant role, for whom communality would take precedence over agency.

We anticipated that both male and female participants would rate agentic traits (like competence and assertiveness) as more important than communality to succeed as a leader, similar to past investigations (Koenig et al., 2011). However, we also anticipated an interaction between role and participant gender, such that women compared to men would rate communal traits as more important to succeed as a leader. This is because people tend to favor traits and attributes that are characteristic of their in-groups (versus attributes that are not, or that characterize an out-group) (Dovidio and Gaertner, 1993), and because women compared to men have been found to possess less masculine leader-role expectations (Boyce and Herd, 2003; Koenig et al., 2011) and to value female leaders more (Kwon and Milgrom, 2010; Vial et al., 2018).

Participants

The study employed a 2×2×3 mixed design with participant gender (male vs. female) and role condition (leader vs. assistant) as between-subjects factors and trait category (competence, assertiveness, and communality) as a within-subjects factor. We enrolled 252 MTurk participants with a HIT completion rate of 95% or higher, who were compensated $0.55.
The study took approximately 10 minutes and was described to potential participants as a research study about personal experiences, feelings, and attitudes. Three participants (1.2%) indicated that some of their answers were meant as jokes or were random. We report analyses excluding these 3 participants (n = 249; mean age = 32.55, SD = 11.88; 42.6% female; 71.9% White). One participant (0.4%) did not indicate gender. A sensitivity power analysis using G*Power 3.1 (Faul et al., 2007) showed that a sample of this size (n = 249) is sufficient to detect a small interaction effect between within- and between-subjects factors, i.e., f(U) = 0.169 with power = 0.80 and f(U) = 0.208 with power = 0.95 (assuming α = 0.05, four groups, and 3 repeated measures).

Procedure

Participants first read a short vignette asking them to imagine that they were part of a team working on an important project. The full text of the vignette is presented in Appendix B. Half of the participants were randomly assigned to a role condition in which they imagined being the team leader, and the other half were assigned to a role condition in which they imagined being the assistant to the leader. All participants were asked to indicate how important each of a series of attributes was to be successful in their role. Specifically, for each trait, they read "As [a leader/an assistant] it is important to be [trait]," and indicated their answer from 1 (not at all) to 7 (extremely so). The list of traits, all of which were used in Study 1, included eight agentic traits, three of which measured competence (i.e., competent, confident, capable; α = 0.75) and five of which measured assertiveness (i.e., ambitious, assertive, competitive, decisive, self-reliant; α = 0.78), as well as eight communal traits (i.e., cheerful, cooperative, patient, polite, sensitive, tolerant, good-natured, sincere; α = 0.83). Finally, all participants were asked basic demographic questions (e.g., age, race) and received a debriefing letter.

Results

We conducted a mixed-model ANOVA with participant gender and experimental role condition as between-subjects factors, and trait category (competence, assertiveness, and communality) as a repeated measure. As expected, we found a significant interaction between role and trait category, F(2,243) = 32.31, p < 0.001, η_p² = 0.210. The interaction between participant gender and trait category was not significant, F(2,243) = 1.85, p = 0.159, nor was the 3-way interaction between trait category, role, and participant gender, F(2,243) = 1.19, p = 0.306. All means are represented in Figure 2.

Discussion

The goal of Study 2 was to examine men's and women's beliefs about the traits that would be important to help them personally succeed in a randomly assigned leader (vs. assistant) role. As expected, results supported our general predictions. In line with past work (Koenig et al., 2011), people rated competence and assertiveness as more necessary for success as a leader (vs. assistant), and communality as more necessary for success as an assistant (vs. leader). Although competence was seen as relatively more important for leaders than for assistants (as would be expected for a high-status professional role; e.g., Magee and Galinsky, 2008; Anderson and Kilduff, 2009), competence emerged as the most important trait to succeed in both types of roles.
Moreover, as we had anticipated, even though people tend to value agency over communality when thinking of the self (Abele and Wojciszke, 2007), role assignment had the effect of reversing this pattern for participants in the assistant role (at least in terms of assertiveness, which assistants rated as less important for them to succeed than communality). Even though we had expected to find that women (vs. men) would value communal traits to a higher extent (Boyce and Herd, 2003; Koenig et al., 2011), women were just as likely as men to see these traits as relatively unimportant for them personally to be successful in leader roles, and we failed to find any participant gender effects in either the leader or the assistant role. This null interaction effect, which stands in contrast to the gender differences we observed in Study 1, might reflect the power of role demands to change self-views (Richeson and Ambady, 2001) and to override the influence of other factors such as category group memberships (LaFrance et al., 2003). Moreover, it is possible that, even if women valued communality more so than men when thinking about other leaders, they may nevertheless feel as though acting in a stereotypically feminine way and behaving less dominantly than a traditional male leader would place them at a disadvantage relative to men (Forsyth et al., 1997; Bongiorno et al., 2014). Such a self-versus-other discrepancy might explain why the expected gender difference in the appreciation of communality relative to agency-assertiveness emerged in Study 1, when participants were thinking of ideal leaders, but was not apparent in Study 2, when participants were asked to think about themselves in a leader role.

GENERAL DISCUSSION

The main goal of this investigation was to examine people's beliefs about what makes a great leader, with a focus on gendered attributes, given that more stereotypically feminine leader traits (i.e., communality) appear to have become more desirable over time (Koenig et al., 2011), and that some have claimed that these attributes will define the leaders of the future (Gerzema and D'Antonio, 2013). The results of the two studies reported here were generally in line with our predictions: what men and women think it takes to be successful in leadership roles is essentially agency, a stereotypically masculine attribute. Communality is appreciated in leaders, but only as a non-vital complement to the fundamentally masculine core of the leader role.

Whereas past investigations have reached similar conclusions (e.g., Koenig et al., 2011), the current studies contribute to this body of work in important ways. This investigation was the first that we know of to examine the potential trade-off between agentic and communal traits in leaders. The results of Study 1 supported the proposed view that communality is valued in leaders only after the more stereotypically masculine requirements of being competent and assertive have been met. Importantly, the methods in Study 1 revealed that communal traits are indeed valued in leaders when choices are unconstrained. These results indicate that when participants rate traits independently from one another, as in past studies (e.g., Schein, 1973; Powell and Butterfield, 1979; Brenner et al., 1989; Boyce and Herd, 2003; Sczesny, 2003; Sczesny et al., 2004; Fischbach et al., 2015), their responses might unduly inflate the apparent appreciation for communal leader attributes.
When choices were constrained, participants in Study 1 showed a clear preference for agentic leader traits (i.e., competence and assertiveness). Other investigations have similarly revealed how subtle differences in the measurement of group stereotypes may change the overall conclusions (Biernat and Manis, 1994). We hope that the methods in Study 1 may be adapted in future investigations to further examine gender leader-role expectations and preferences.

Moreover, the random assignment of men and women to a leader (vs. assistant) role in Study 2 allowed us to establish a direct causal link between occupying a leadership position and differentially valuing agentic and communal traits, extending past investigations (e.g., Heilman et al., 1995; Boyce and Herd, 2003; Duehr and Bono, 2006; Fischbach et al., 2015). We found that men and women were largely in agreement; both indicated that it would be more important for them to possess agentic rather than communal traits in order to be a good leader. These results underscore women's internalization of stereotypically masculine leader role expectations, which could discourage women from pursuing leadership roles (Bosak and Sczesny, 2008; Latu et al., 2013; Hoyt and Murphy, 2016). Furthermore, if women tend to internalize a stereotypically masculine view of leadership, it follows that women who have an interest in and attain leadership roles might have a strong tendency to behave in line with those role expectations, for example by displaying assertiveness, which could elicit backlash and penalties for violating gender prescriptions (Rudman and Glick, 1999; Phelan et al., 2008). Alternatively, it is possible that, even though women may value communality in leaders more so than men, as Study 1 revealed, they may nevertheless feel as though enacting these characteristics would make them appear less effective as leaders or place them at a disadvantage relative to male leaders (Forsyth et al., 1997; Bongiorno et al., 2014). For example, past investigations suggest that female leaders who behave in relatively less agentic ways are perceived to be less likable and less influential than similar male leaders (Bongiorno et al., 2014). This differentiation between the traits that women value in leaders and the traits they feel they must exhibit to be successful in that role (perhaps to be taken seriously by others in that role; Yoder, 2001; Chen and Moons, 2015) may explain why we did not find the predicted interaction with participant gender in Study 2.

LIMITATIONS AND REMAINING QUESTIONS

Although the random assignment of men and women to a leader (vs. assistant) role in Study 2 allowed us to extend past investigations by drawing causal links between roles and trait desirability, a potential limitation in our approach is that the role manipulation may also conceivably lead to a difference in psychological feelings of power across conditions (Anderson and Berdahl, 2002; Schmid Mast et al., 2009). Given the large conceptual overlap between leadership and "power" (commonly defined as asymmetric control over resources; Keltner et al., 2003), it is possible that the results of Study 2 reflect at least in part the way men and women feel when they are in a position of power, independently from their role as leaders or assistants. Future investigations may address this issue by measuring felt power (Anderson et al., 2012) to examine whether participants value similar traits as they did in Study 2 over and above felt power.
For example, it is conceivable that individuals in leadership roles that foster stronger (vs. weaker) feelings of power might value communality to a lower extent, and behave more dominantly overall (e.g., Tost et al., 2013). Another potential limitation in Study 2 is that participants assigned to the assistant role condition might have assumed that the team leader was male, consistent with the notion that people think "male" when they think "manager" (Schein, 1973). Therefore, it is unclear whether the traits that they thought would help them be a successful assistant would be contingent on the assumption that they would be assisting a male-led team. Future investigations may probe whether people believe that it takes different attributes to successfully work for a female versus a male leader, and how those beliefs impact their support for male and female supervisors. For example, if men think that a female leader would expect more cooperation from subordinates than a male leader, this expectation may partly explain their reluctance to work for women.

It is also worth noting that, in both studies, we did not specify the context under which leadership (and, in Study 2, assistantship) was taking place. It seems likely that participants were thinking of some traditionally male-dominated domain (as businesses typically are). However, one important next step for future work is to examine whether the leadership domain affects which traits people value in leaders, and which traits they would find valuable for them, personally, to be a successful leader. Leaders tend to be considered particularly effective in industries and domains in which the gender composition is congruent with the gender of the leader (Ko et al., 2015; see also Eagly et al., 1995). It is conceivable that being the leader of a team that is working in a traditionally feminine domain (e.g., childcare, nursing, or even a business that caters primarily to women, such as maternity-wear or cosmetics) might change people's perception of which traits are most important.

Whereas our investigation was focused on the general dimensions of agency and communality (Abele et al., 2016), future research might adapt the methodology of Study 1 to examine the potential tradeoffs between other kinds of leader attributes. For instance, past research has examined task-oriented versus person-oriented trait dimensions (Sczesny et al., 2004), traits related to activity/potency (e.g., forceful, passive; Heilman et al., 1995), "structuring" versus "consideration" behaviors (Cann and Siegfried, 1990; Sczesny, 2003), and transformational leader traits (Duehr and Bono, 2006), to name a few. In particular, given that transformational leadership styles tend to be viewed quite favorably in contemporary organizations (Wang et al., 2011), and are more closely associated with femininity (Kark et al., 2012; Stempel et al., 2015), it would be especially interesting to examine whether such transformational leader attributes are also considered "unnecessary frills" (much like communal attributes in Study 1). As mentioned earlier, the context of leadership (more male- vs. more female-dominated) may be an important moderating factor worthy of consideration (Ko et al., 2015). For example, male followers appear to react more negatively to transformational leadership styles compared to female followers (Ayman et al., 2009). Thus, it is possible that the tradeoff between more and less transformational leadership attributes may partly depend on the specific industry or domain.
Similarly, whereas we examined two sub-dimensions of agency (i.e., competence and assertiveness) following Abele et al. (2016), we did not distinguish different facets within the dimension of communality. Specifically, research suggests that communality may be broken into sub-dimensions of warmth or sociability (e.g., friendly, empathetic) and morality (e.g., fair, honest) (Abele et al., 2016), a distinction that may be meaningful and consequential in the evaluation of leaders. It has been argued that morality in particular, more so than warmth/sociability, plays a primary role in social judgment (Brambilla et al., 2011; Brambilla and Leach, 2014; Leach et al., 2017), and moral emotions are implicated in bias against agentic female leaders. Thus, future investigations may examine how the tradeoff between agency and communality explored in our research might change when the morality facet of communality is considered separately from the warmth/sociability facet.

Additional research may extend the current investigations by adapting the methodology we employed in Study 1 (which we, in turn, adapted from Li et al., 2002) in various ways to further examine leader-role expectations and preferences for communality and agency in leaders (both in others and in the self). Whereas we did this in the current investigation by testing the potential tradeoffs between ideal levels of communal and agentic traits (Study 1) and the extent to which men and women viewed those traits as personally important to succeed in a leader (vs. assistant) role (Study 2), it would be worthwhile to merge these two paradigms in the future. For example, men and women in leadership roles might be asked to think about the traits they would need to be successful and then to "purchase" various amounts of those traits for themselves. Similarly, participants could be asked to purchase traits to design the ideal leader versus the ideal subordinate (e.g., the perfect assistant).

IMPLICATIONS AND CONCLUSION

The findings from this investigation may illuminate the continued scarcity of women at the very top of organizations, broadly construed (Eagly and Heilman, 2016; Catalyst, 2018). Overall, across studies, both women and men saw communality as relatively unimportant for successful leadership. These traits, however, make women particularly well suited to occupy low-status positions (Study 2), which may contribute to gender segregation (Blau et al., 2013) via women's self-selection into low-status roles (Diekman and Eagly, 2008; Schneider et al., 2016). On a more positive note, our results also suggest that women may be more supportive than men of leaders who exhibit more feminine leadership styles. We found, as we had expected, that women showed higher appreciation for communal attributes in leaders in comparison to men (Study 1). Furthermore, in Study 1 we also examined participants' interest in minimizing negative traits stereotypically associated with men and women when designing their ideal leader. Rather than desiring leaders to possess lower amounts of negative traits that are more stereotypically feminine (such as emotional; Shields, 2013), participants desired leaders to lack negative traits more commonly associated with men (like arrogance; Prentice and Carranza, 2002), and this preference was stronger among women compared to men.
Whereas many studies have assumed to some extent that descriptive gender and leader stereotypes are similarly shared by men and women (see review by Rudman and Phelan, 2008), our results suggest that this assumption needs to be reconsidered, particularly with respect to gender traits that are relevant to leadership. Even when men and women agreed on the attributes they would personally need to be successful leaders (Study 2), Study 1 showed that, relative to men, women ideally prefer leaders who are more communal, and that they feel more negative than men about certain attributes believed to characterize both men and leaders (arrogance). These subtle gender differences in leader-role expectations dovetail with past investigations showing patterns consistent with gender in-group favoritism effects (Tajfel et al., 1971; Greenwald and Pettigrew, 2014) on the evaluation of female and male authorities (Eagly et al., 1992; Norris and Wylie, 1995; Deal and Stevenson, 1998; Ayman et al., 2009; Kwon and Milgrom, 2010; Bosak and Sczesny, 2011; Paustian-Underdahl et al., 2014; Vial et al., 2018). For example, past studies have revealed that women have more positive attitudes toward female authorities compared to men, whether implicit (Richeson and Ambady, 2001) or explicit (Rudman and Kilianski, 2000). Similarly, a recent investigation revealed that female employees working for female supervisors tend to respect those supervisors more than male employees do, and to engage in positive work behaviors more frequently when working for a woman. Overall, the two studies reported here further suggest that women might be relatively more supportive than men of leaders with more communal leadership styles. Thus, while it may be too soon to tell whether these stereotypically feminine traits will indeed define the leaders of the 21st century (Gerzema and D'Antonio, 2013), our investigation suggests that women might be more willing than men to embrace this trend.

ETHICS STATEMENT

This study was carried out in accordance with the recommendations of the American Psychological Association's Ethical Principles in the Conduct of Research with Human Participants. The protocol was approved by the Institutional Review Board at Yale University. All subjects gave written informed consent in accordance with the Declaration of Helsinki.

AUTHOR CONTRIBUTIONS

AV wrote the first draft of the manuscript. JN provided feedback and edits. Both authors worked collaboratively on study design and data collection and analysis.

APPENDIX A: STUDY 1 STIMULI

Participants first read the following preliminary instructions:

Please take a moment to think about the characteristics that would make someone your ideal leader. By "leader" we mean someone within a group who:
• Controls group resources.
• Hires, promotes, and fires group members.
• Determines what needs to be done in order to achieve the group's goals.
• Assigns tasks to group members.
• Evaluates group members' performance.
Ultimately, a leader is responsible for the group's outcomes.

In this study, we will ask you to "design" your IDEAL LEADER by purchasing traits from a predetermined list. We will give you a budget of "leader dollars" which you can spend at your discretion.

Participants then saw three lists of traits, one at a time. The first two lists were prefaced by the following instructions:

Please design your ideal leader using the traits listed below. How many leader dollars would you spend for your ideal leader to possess each of these traits?
For each trait, drag the bars to indicate how many leader dollars you would be willing to spend for your ideal leader to possess the trait in varying amounts. For example, if your ideal leader would be highly creative, you may want to spend $9-10 leader dollars on that trait. In contrast, if your ideal leader would be only a little extroverted, you may want to spend $0-1 leader dollars on that trait.

Participants then saw the list of traits, including a budget specification (e.g., "Your total budget is $60. You may not exceed this budget when designing your ideal leader."). After rating all traits on a given list, participants were prompted to do this again with a different budget:

Now we would like you to try this again, only this time you have fewer leader dollars to spend on your ideal leader. For each trait, drag the bars to indicate how many leader dollars you would be willing to spend for your ideal leader to possess the trait in varying amounts.

These instructions were accompanied by a new budget specification (e.g., "Your total budget is $40. You may not exceed this budget when designing your ideal leader."). The task instructions were the same for the two lists containing positive traits (i.e., the competence/communality and assertiveness/communality lists). Finally, the instructions for the third list, which contained negative masculine and feminine stereotypes, read as follows:

Now we are interested in which characteristics you would not want your ideal leader to possess. How many leader dollars would you spend for your ideal leader not to possess each of these traits? For each trait, drag the bars to indicate how many leader dollars you would be willing to spend for your ideal leader not to possess the trait in varying amounts. For example, if you would strongly prefer that your ideal leader not be lazy, you may want to spend $9-10 leader dollars to avoid that trait. In contrast, if you have only a modest preference that your ideal leader not be forgetful, you may want to spend $0-1 leader dollars to avoid that trait.

These instructions were followed by budget specifications.

APPENDIX B: STUDY 2 STIMULI

Participants first read the following instructions, customized to condition. In the leader role condition, the text read:

Imagine you are leading a team on a special and important new project. As the leader, you are in charge of putting together a team of people to assist you in completing the project. You also determine what needs to be done in order to achieve your goals, and you assign tasks to your team members as you consider appropriate. As the leader, you also make sure team members follow your instructions and deliver in a timely manner, without missing any important deadlines. Ultimately, you are responsible for the final product, and it is your job to lead the team effort to realize your vision and complete the project successfully.

In the assistant role condition, participants read the following:

Imagine you are assisting a leader on a special and important new project. As an assistant, your job is to provide support to the team leader in completing the project. The team leader determines what needs to be done in order to achieve the team's goals, and assigns tasks to you as appropriate. As an assistant, you follow the leader's instructions, and you must deliver in a timely manner, without missing any important deadlines.
Ultimately, the leader is responsible for the final product, and it is your job to help realize the leader's vision and support and assist the leader to complete the project successfully. After reading these role instructions, all participants read the following instructions prior to rating a series of traits: Below is a list of traits and attributes. Please indicate how important each of them is to be successful in your role as (team leader / team assistant). In other words, consider how much each of these traits would help you fulfill your role as (team leader / team assistant).
Towards Operational Detection of Forest Ecosystem Changes in Protected Areas

This paper discusses the application of the Cross-Correlation Analysis (CCA) technique to multi-spatial resolution Earth Observation (EO) data for detecting and quantifying changes in forest ecosystems in two different protected areas, located in Southern Italy and Southern India. The input data for the CCA investigation were derived from the forest layer extracted from an existing Land Cover/Land Use (LC/LU) map (time T1) and a more recent (T2, with T2 > T1) single-date image. The latter consists of a High Resolution (HR) Landsat 8 OLI image and a Very High Resolution (VHR) WorldView-2 image, which were analysed separately. For the Italian site, the forest layer (1:5000) was first compared to the HR Landsat 8 OLI image and then to the VHR WorldView-2 image. For the Indian site, the forest layer (1:50,000) was compared to the Landsat 8 OLI image, and the changes were then interpreted using WorldView-2. The changes detected through CCA, at HR only, were compared against those detected by applying a traditional NDVI image differencing technique to two Landsat scenes at T1 and T2. The accuracy assessment, concerning the change maps of the multi-spatial resolution outputs, was based on stratified random sampling. The CCA technique allowed an increase in the value of the overall accuracy: from 52% to 68% for the Italian site and from 63% to 82% for the Indian site. In addition, a significant reduction of the error affecting the stratified changed area estimation was obtained for both sites. For the Italian site, the error reduction became significant at VHR (±2 ha) with respect to HR (±32 ha), even though both techniques had comparable overall accuracy (82%) and stratified changed area estimation. The findings obtained support the conclusion that the CCA technique can be a useful tool to detect and quantify changes in forest areas due to both legal and illegal interventions, including in relatively inaccessible sites (e.g., tropical forest), with costs remaining rather low. The data obtained through CCA could not only support the commitments undertaken by the European Habitats Directive (92/43/EEC) and the Convention on Biological Diversity (CBD) but also help satisfy the UN Sustainable Development Goals (SDGs).

Change Detection Techniques

The present study discusses the application of the Cross Correlation Analysis (CCA) change detection technique, which can detect changes at different spatial scales using a Land Cover/Use (LC/LU) map and a single recent image, for forest monitoring. The proposed approach may reduce both computational and imagery costs as well as require fewer in-field campaigns. Earth Observation (EO) data and techniques are most promising for monitoring and quantifying forest changes at multiple scales and high frequencies [1][2][3][4][5]. These techniques can provide new products and services for a wide user community, including ecologists and decision makers such as those involved in the commitments of Natura 2000 site conservation [6][7][8][9].
Change in forest extent and forest fragmentation has been identified as one of the most important drivers of forest ecosystem service loss and is thus one of the most critical factors to monitor. The selection of the most appropriate change detection technique will depend on the data available for the study area. In practice, when Land Cover/Land Use (LC/LU) maps at times T1 and T2 (with T2 > T1) are not available, the techniques chosen are usually based on direct comparison of calibrated and co-registered image pairs acquired at T1 and T2 [10]. In some applications, pixel spectral values in the two images, or derived spectral features (e.g., the normalized difference vegetation index (NDVI)), have been used in the comparison. In such cases, no semantic information about specific class transitions/modifications, useful for identifying pressures on the area [5,9], can be automatically extracted. In other applications, spectral features are used, involving a higher image processing level. This approach allows detection of changes that may have real-world meaning, such as a decrease/increase of the NDVI vegetation index. Whether using the former or the latter technique, in-field campaigns and/or visual inspection of VHR images or aerial orthophotos must be used to identify specific class transitions/modifications.

Otherwise, when two thematic maps (e.g., LC/LU or habitat maps) independently produced at time T1 and time T2 are available, the well-known Post Classification Comparison (PCC) approach [10][11][12][13][14] can be used. The degree of success of this technique depends upon the reliability of the input thematic maps [15,16], since the quality of the output change image is related to the product of the accuracies of the two maps being compared.

Although the two change detection techniques mentioned are very useful, their scale of application depends upon the purpose of the investigation. For regional decision making, data and maps from high spatial resolution sensors, such as the Landsat series or the European Space Agency's (ESA) Sentinel optical and radar missions, appear to be the most appropriate, since they can cover greater areas and allow more frequent coverage. For more local decision making, finer scale data and maps, such as those provided by airborne/spaceborne Very High Resolution (VHR) sensors (e.g., QuickBird, GeoEye, WorldView-2/3), are employed. However, since image acquisition must be tasked, operational change detection at the local scale can be seriously hampered by the lack of repeated acquisitions [6][7][8][9].

Change Detection with VHR Images

Due to the complexity of class description at fine scales and the huge image information content, change detection based on VHR images is viewed as more difficult to automate compared to coarser spatial resolution data. Thus, object-based classification techniques are generally suggested [13,17]. Nevertheless, the process still remains computationally expensive when the entire set of thematic classes in the two T1 and T2 maps must be processed. Chen et al. [13] suggested applying a stratified change detection approach by considering only one target class of interest (e.g., evergreen forest) at a time in the map. An additional difficulty arises from the high cost of VHR images. This issue may become critical when multi-seasonal VHR acquisitions need to be used in order to produce LC maps having a large Overall Accuracy (OA) value and a low error rate [18].
CCA Based Scenario Technique at VHR and HR

Cross Correlation Analysis (CCA) is a change detection technique, developed by the American company Earthsat, Inc., which is well suited to evaluating the differences between an existing LC/LU map (T1) and a recent single-date multispectral image (T2) [19][20][21]. A CCA-based scenario is used in this study to evaluate changes at multiple scales. In our approach, a pre-existing LC/LU map available at regional/local scales, obtained by visual inspection of orthophotos and validated by in-field campaigns, is used as the reference image at time T1 in the change detection process. The T2 image pixels corresponding to the target class in the T1 map are analysed by CCA to detect changes at T2 [19], at both VHR and HR. Locations of transitions from the target class can then be identified without a complete classification process at time T2. Even though the new class at T2 remains unknown, it may be determined by in-field inspection or by visual interpretation of VHR images, when available.

CCA applications to High Resolution (HR) (e.g., Landsat TM) and Medium Resolution (MR) imagery (e.g., MERIS) have been reported [19,20]. More recently, VHR (e.g., WorldView-2) applications for grassland ecosystem changes have been proposed [21]. This study argues that the CCA technique can be attractive for fine scale change detection, since it can reduce change detection costs when: (a) the acquisition of several (multi-seasonal) VHR images at time T2 (e.g., within a year) is too expensive; and (b) no archival VHR data are available at T1 for direct image comparison between the T1 image and a newly tasked T2 image (with T1 < T2).

The present study aims at demonstrating: (i) the effectiveness of the CCA technique for forest change detection at both HR and VHR in two protected areas in different bio-geographical regions, i.e., Southern Italy and Southern India; and (ii) the accuracy of CCA compared to an unsupervised traditional change detection technique based on image differencing of spectral features from an available image pair (T1 and T2), at HR only. NDVI is used as a suitable spectral feature to detect changes in the images. Estimates of uncertainty in the discrimination of classes (i.e., change and no-change) are provided based on stratified sampling of ground truth reference samples for validation and the recommendations in [15,22]. The proposed approach could speed up operational forest change detection as well as prove to be cost effective. The findings encourage speculation about the opportunity to implement the CCA technique for compliance with both European and United Nations (UN) commitments.

Site Description

The CCA analysis and the comparison of change detection techniques have been carried out on data obtained from two protected areas, belonging to different biogeographical regions, one located in India and the other in Southern Italy.
Italian Site

The first study site consists of 500 km² located in Apulia, a region of South East Italy, included in the Natura 2000 "Murgia Alta" site (SCI/SPA IT9120007, designated under the European Union Habitats Directive 92/43/EEC and Birds Directive 2009/147/EC) (Figure 1a). The land consists of a calcareous upland, almost 24% of which is covered by semi-natural dry grasslands. The site represents one of the most important areas for the conservation of this ecosystem type in Europe. Highly fragmented evergreen and deciduous forest patches represent what remains of the old natural vegetation coverage, otherwise completely destroyed by human activity over time. Agricultural intensification, urbanization, arson and land abandonment represent the main pressure factors on the area's biodiversity [23].

For T1 (2006), an existing LC/LU map (1:5000), based on visual orthophoto interpretation and validated (85% overall accuracy) by in-field campaigns, was available for the site. This map, originally produced in the CORINE taxonomy and subsequently translated into the FAO-LCCS taxonomy [24], was used as the reference image (Figure 1b).

To detect changes possibly due to arson, multi-spatial resolution summer images were considered at T2, according to the procedure described in [19,21]. In particular, a VHR WorldView-2 single-date image (spatial resolution 2 m) acquired in July 2012 was considered (Figure 1c). The image was orthorectified in the WGS84/UTM33N projection (EPSG code: 32633), with an RMSE less than one pixel, and calibrated to Top of Atmosphere (TOA) reflectance values. The image was provided at no cost by the European Space Agency (ESA) under the Data Warehouse 2011-2014 policy within the FP7-SPACE BIO_SOS project [25].

For change detection at HR, two images were downloaded from the US Geological Survey [26]: a recent Landsat 8 OLI image [27] acquired in August 2013 (as T2), and an earlier Landsat 7 ETM+ image dated July 2006 (as T1). Both images were orthorectified in the WGS84/UTM33N projection, with an RMSE less than one pixel, and coregistered. Relative pairwise pixel-based radiometric normalization [28] was applied to the two images in order to reduce disparities in the image acquisition conditions. The Landsat 7 ETM+ image of July 2006 (T1) was used as the reference image for co-registration and radiometric normalization.
Indian Site

The Indian site includes a 540 km² tiger reserve located in the Western Ghats biodiversity hotspot in Southern India, named "Biligiri Rangaswamy Temple Tiger Reserve" (Figure 2a). The site has a heterogeneous physiography, with hills running in the north-south direction and elevations ranging from 600 to 1800 m above sea level. The area's biophysical conditions, with rainfall in two different seasons, allow a distinctive ecosystem to thrive in this area and support a diversity of endemic flora and fauna. The only available map of the region is dated 1997 (scale 1:50,000) and includes ten different vegetation types. These range from dry scrub forest to dense wet evergreen forests in the higher elevation areas (Figure 2b). The evergreen forest patches (Figure 2c) are found both in contiguous areas and in dense patches among a mosaic of high elevation grassland areas.

In the present paper, only the wet evergreen forest patches shown in the map were considered as the input T1 layer to CCA. In addition, a WorldView-2 image acquired in March 2013 (summer season) and provided by ESA within the FP7-SPACE BIO_SOS project [25] was used as the VHR image at T2. The image was orthorectified in the WGS84/UTM43N projection (EPSG code: 32643), with an RMSE less than one pixel, and calibrated to Top of Atmosphere (TOA) reflectance values.

Two additional Landsat images, dated March 1997 (Landsat 5) and March 2016 (Landsat 8 OLI), were considered for direct comparison of NDVI indices [26]. These images were orthorectified in the WGS84/UTM43N projection and coregistered with an RMSE less than one pixel. Relative pairwise pixel-based radiometric normalization [28] was applied to the two images in order to reduce disparities in the image acquisition conditions. The March 1997 Landsat 5 image was used as the reference image.
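For orientation, the sketch below (Python/NumPy; a simplified stand-in rather than the exact procedures of [28]) illustrates the two preprocessing/comparison steps just described: a per-band relative radiometric normalization of one image to the reference image, followed by the NDVI differencing used in the DIFF_NDVI experiments described in the Methods below. The μ + kσ decision rule on the NDVI difference is an assumption made here for illustration.

```python
import numpy as np

def relative_normalize(band_t2, band_t1):
    """Simplified relative radiometric normalization: least-squares linear fit
    of a T2 band to the corresponding T1 reference band (a stand-in for the
    pairwise pixel-based method of [28])."""
    gain, offset = np.polyfit(band_t2.ravel(), band_t1.ravel(), deg=1)
    return gain * band_t2 + offset

def ndvi(nir, red):
    """Normalized Difference Vegetation Index from reflectance arrays."""
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

def diff_ndvi_change(red_t1, nir_t1, red_t2, nir_t2, k=1.0):
    """DIFF_NDVI-style change mask: NDVI loss larger than mean + k*std."""
    diff = ndvi(nir_t1, red_t1) - ndvi(nir_t2, red_t2)  # positive = NDVI drop
    return diff > diff.mean() + k * diff.std()
```

The co-registration step is assumed to have been done beforehand, so that corresponding array positions refer to the same ground location in both dates.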
Methods

To detect changes in the forest target class of the two protected areas, two experiments were carried out:
1. The first detected changes at HR by comparing the NDVI images derived from the two Landsat images (image-to-image comparison) acquired at T1 and T2, with T2 > T1.
2. The second detected changes at HR and VHR by Cross Correlation Analysis (map-to-image comparison).
Tables 1 and 2 summarize the techniques and abbreviations used in the rest of this paper for the two sites, respectively.

Direct image comparison of spectral signatures, or of derived indices such as NDVI, is used when LC/LU maps are not available at T1 and T2. After co-registration and relative radiometric rectification, two images acquired in the same month of different years can be compared at the pixel level, by direct comparison of the reflectance values of each input pixel (e.g., by image differencing, image regression, change vector analysis, and image ratioing), or at the feature level, by image differencing of spectral indices (e.g., NDVI and PSRI) [10]. In this paper, the comparison of NDVI indices from Landsat images acquired at T1 and T2 (with T2 > T1) was used, since the NDVI feature introduces some semantics related to the coverage and/or state of the vegetation. As a result, the areas where changes in the forest layer occurred were identified and compared with those detected by the CCA analysis at T2.

The CCA change detection technique rests on the evaluation of the differences between an existing LC/LU map (T1) and a recent single-date multispectral image (T2) [19][20][21]. In this model of analysis, all pixels of the T2 image corresponding to a specific thematic layer (target class) in the T1 map are analysed to determine the expected reference class metrics in the T2 image (i.e., class average spectral response and standard deviation). Following this, for each pixel in T2 corresponding to the target class layer at T1, a statistical measure is computed to evaluate the distance between the pixel spectral signature and the reference target class metrics at T2. Large values of this measure provide evidence of probable class changes. This information can be used to derive a Z-statistic for each pixel of the T2 image falling within the target LC/LU class layer. The Z-statistic describes how close the pixel's response is to the expected spectral response of the target class in T2. Large Z-statistic values may identify changed pixels, while small Z-statistic values may indicate no-change pixels. The Z-statistic values can be computed by Equation (1):

Z_{jk} = \sqrt{\sum_{i=1}^{n} \left( \frac{r_{ijk} - \mu_{ic_{jk}}}{\sigma_{ic_{jk}}} \right)^{2}}    (1)

where Z_{jk} is the Z-score for a pixel jk belonging to a given class (stratum); i is the band number in the multispectral image; n is the number of bands; c_{jk} is the thematic class (stratum) being considered at T2; jk is a pixel in the stratum; r_{ijk} is the reflectance in band i for pixel jk; \mu_{ic_{jk}} is the mean reflectance value in band i of all pixels in class c_{jk}; and \sigma_{ic_{jk}} is the standard deviation of the reflectance values in band i of all pixels in class c_{jk}.
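As a minimal illustration of Equation (1), the NumPy sketch below computes the Z image for the T2 pixels falling inside the T1 target-class layer and thresholds it at μ + kσ of the Z values, anticipating the threshold selection discussed next. Function and variable names are ours; this is a simplified sketch of the technique rather than the EarthSat implementation.

```python
import numpy as np

def cca_z_image(image_t2, target_mask):
    """Z-statistic of Equation (1).

    image_t2: float array of shape (bands, rows, cols), reflectance at T2.
    target_mask: boolean (rows, cols), True where the T1 map holds the target class.
    Returns a (rows, cols) array of Z-scores (NaN outside the target layer).
    """
    pixels = image_t2[:, target_mask]                  # (bands, n_pixels) inside the layer
    mu = pixels.mean(axis=1, keepdims=True)            # class mean per band
    sigma = pixels.std(axis=1, keepdims=True) + 1e-9   # class std; guard zero variance
    z = np.full(image_t2.shape[1:], np.nan)
    z[target_mask] = np.sqrt((((pixels - mu) / sigma) ** 2).sum(axis=0))
    return z

def cca_change_mask(z, k=1):
    """Flag probable changes where Z exceeds mu + k*sigma of the Z image (k = 1, 2 or 3)."""
    valid = z[np.isfinite(z)]
    return z > valid.mean() + k * valid.std()
```

In this sketch the class statistics are estimated from all T2 pixels of the layer, so the changed pixels are assumed to be a small enough fraction not to distort μ and σ appreciably.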
The selection of a threshold (TH) can thus help to identify the most significant changes [19]. In this paper, three values of the threshold (i.e., μ + 1σ, μ + 2σ and μ + 3σ, where μ and σ are the mean and standard deviation of the Z change image) were adopted and the best result was retained. Once the locations of forest changes have been identified, more accurate information about the specific class transitions/modifications from the target class (i.e., forest) can only be obtained by either local in-field campaigns or visual inspection of VHR imagery.

Accuracy Assessment

For the output map validation, sets of reference change and no-change forest polygons were selected through visual inspection of the available WorldView-2 images. Stratified random sampling was applied. When the sampling intensities for the classes considered differ, correct calculation of the overall accuracy (OA) requires that the within-class accuracies be weighted according to the proportions of the study area corresponding to each map class [29]. Consequently, OA cannot be calculated as the sum of the diagonal counts in the error matrix divided by the total count, as in the case of simple random sampling or systematic sampling designs [28]. For this reason, for each experiment, the change error matrix was produced in terms of sample counts. To obtain a more accurate quantification of change OA, the protocol described in [15,22] was adopted. This protocol is based on a more informative presentation of the change error matrix and allows direct computation of change accuracy and area estimates.

Given a map with q categories, let the map categories be the rows (i = 1, 2, ..., q) and the reference categories be the columns (j = 1, 2, ..., q) of the error matrix; let A_{tot} represent the total area of the map (window), A_{m,i} the mapped area (ha) of category i, and W_i = A_{m,i}/A_{tot} the proportion of the map area classified as category i. Then p_{ij} (i.e., the proportion of area for the population having map class i and reference class j in the change error matrix [22]) can be calculated as:

p_{ij} = W_i \frac{n_{ij}}{n_{i\cdot}}    (2)

where n_{ij} represents the sample counts and n_{i\cdot} is the sum of the sample counts of row i computed over the columns of the change error matrix [22].

The unbiased stratified estimator of the area of category j can be obtained as:

\hat{A}_j = A_{tot} \sum_{i=1}^{q} p_{ij}    (3)

where \hat{A}_j can be considered an "error-adjusted" estimator of the change area, because it includes the area of map omission error of category j and leaves out the area of map commission error [29].

The estimated standard error of the estimated area proportion is:

S(\hat{p}_{\cdot j}) = \sqrt{\sum_{i=1}^{q} W_i^{2} \frac{\frac{n_{ij}}{n_{i\cdot}} \left(1 - \frac{n_{ij}}{n_{i\cdot}}\right)}{n_{i\cdot} - 1}}    (4)

Therefore, the standard error of the stratified area estimate can be expressed as:

S(\hat{A}_j) = A_{tot} \cdot S(\hat{p}_{\cdot j})    (5)

and an approximate 95% confidence interval for \hat{A}_j is:

\hat{A}_j \pm 1.96 \, S(\hat{A}_j)    (6)

Results

Since the CCA experiments analysed only the T2 pixels belonging to a specific target class at T1, whereas the NDVI technique compares all the T1 and T2 image pixels, for comparison purposes the shapefile of the specific T1 target class considered by CCA was also overlaid on the input and output change images of the NDVI experiments after the full-image analysis, and only the changes appearing within the delimited overlaid area were investigated.
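Before turning to the results, the following sketch (our own illustrative implementation of Equations (2)-(6), with invented sample counts and areas) shows how the error-adjusted area of the change category and its approximate 95% confidence interval can be computed from a change/no-change error matrix of sample counts.

```python
import numpy as np

def stratified_area_estimate(counts, mapped_area, j):
    """Error-adjusted area and 95% CI half-width for reference category j.

    counts: (q, q) error matrix of sample counts, rows = map, cols = reference.
    mapped_area: length-q array of mapped areas A_m,i (e.g., in ha).
    """
    counts = np.asarray(counts, dtype=float)
    mapped_area = np.asarray(mapped_area, dtype=float)
    a_tot = mapped_area.sum()
    w = mapped_area / a_tot                      # W_i, map-class area weights
    n_i = counts.sum(axis=1)                     # n_i., row totals
    p = w[:, None] * counts / n_i[:, None]       # Equation (2): p_ij
    area_j = a_tot * p[:, j].sum()               # Equation (3): error-adjusted area
    frac = counts[:, j] / n_i
    se_prop = np.sqrt(np.sum(w**2 * frac * (1.0 - frac) / (n_i - 1.0)))  # Equation (4)
    se_area = a_tot * se_prop                    # Equation (5)
    return area_j, 1.96 * se_area                # Equation (6): 95% CI half-width

# Hypothetical 2-class example (class 0 = change, class 1 = no change):
counts = [[66, 10],
          [5, 119]]
area, half_ci = stratified_area_estimate(counts, mapped_area=[200.0, 31800.0], j=0)
print(f"changed area: {area:.1f} +/- {half_ci:.1f} ha")
```

The stratified changed area estimates with 95% confidence intervals reported for each experiment are of this form (area ± half_ci).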
Italian Site

For the Italian site, all the patches of broadleaved, evergreen and mixed forest areas were merged into a single target forest layer. Table 3 presents the results obtained in the different experiments described in Table 1. A window of the input image including forest patches and the corresponding change output image are shown in Figures 3-8 for the different experiments. More specifically, Figure 3a,b focus on a sub-window of the study area in the T1 and T2 images, respectively, and Figure 3c shows the corresponding change output image. The changes detected were interpreted by visually analysing the close-up areas reported in Figure 4. Forest fragmentation by road (Figure 4a-c), tree density modification (Figure 4d-f) and burned forest areas (Figure 4g-i) seem evident in the protected area. The presence of the burned areas observed in Figure 4h was confirmed by the State Forestry inventory and related to arson dated 2009. When CCA was applied to Landsat imagery (CCA_HR), more changes could be detected than in the previous experiment. The observable changes appear in Figures 5 and 6. The close-up images of CCA_VHR in Figure 8 confirm the occurrence of the changes identified by CCA_HR (Figure 5) and show more details due to the increased image spatial resolution. Table 3 reports the highest overall accuracy value.

Indian Site

For the Indian site, the CCA analysis focused on the strata of evergreen forest, as extracted from the existing LC/LU map. Quantitative results of the experiments are reported in Table 4. Figure 9 shows the input images of experiment 1 (DIFF_NDVI_HR) and the corresponding output image. The DIFF_NDVI_HR experiment underestimated some changes. This is visible in the close-ups of Figure 10 compared to the same close-up windows shown in subsequent figures related to the CCA applications.
In the Indian forest, more changes were detected by CCA_HR. As in the Italian site, the threshold µ + 1σ proved best in detecting most of the changes (Figure 11). However, as the close-up windows of Figures 10 and 12 may reveal, detecting and interpreting changes in HR (i.e., coarse) imagery, obtained by both techniques, may prove rather difficult, due to the complex features of dense tropical forest. It seems worth observing that, when using the CCA_VHR technique, the largest OA was obtained with the threshold set at µ + 1σ. Accordingly, the stratified changed area estimate with 95% confidence interval was the most accurate (error ±2.55 ha). The inputs and the change images are shown in Figure 13. The close-ups of changed areas, reported in Figure 14, show clear evidence of forest fragmentation due to the new road (Figure 14c). They also indicate forest degradation, which could be due to either community changes (Figure 14f-i) or other possible disturbances, such as invasion by alien plants. On the one hand, these findings confirm the need for VHR data in conservation studies and, on the other hand, they pose the need for important changes in the policies related to both periodic VHR image tasking over protected areas and the relative costs.

The Italian site
For the Italian site, the analysis of the quantitative results (reported in Table 3) indicates that:
• At comparable scale (HR), when CCA_HR is adopted, the overall accuracy is larger (68.19%) than the accuracy value obtained by DIFF_NDVI_HR (52.18%), with a significant reduction of the error in the stratified changed area estimate (from ±111.71 ha to ±36.9 ha). For example, the change area that is visible in Figure 5c (CCA_HR), close to the red and blue circles, was not detected in Figure 3c (DIFF_NDVI_HR). Moreover, the close-up in Figure 6c (CCA_HR) shows detailed changes due to new road construction within and at the borders of the forest. These changes, which are clearly visible at VHR in Figure 7b (i.e., the Worldview-2 image at T2), are not detected by DIFF_NDVI_HR (Figure 4c), hence the underestimated changes in this case. In addition, CCA_HR analyses are more cost effective, since only the T2 pixels that correspond to the specific target class in T1 must be analysed. The VHR image allows validation of the changes detected when in-field campaigns may not be feasible (Figure 7b).
• At fine scale, CCA_VHR provides the best overall accuracy (71.11%). This value is very close to the CCA_HR result (68.19%), but the former value is associated with the lowest error in the stratified changed area estimation (±2.48 ha), as already demonstrated in a previous study for grassland ecosystems [21]. Both forest fragmentation by roads and changes in forest density are confirmed at finer scale by CCA_VHR (Figure 8c). Therefore, the best change results can be achieved by comparing a fine scale LC/LU map with one single VHR image. These findings appear to support arguments in favour of the acquisition and storage of VHR images of protected areas on a regular basis and, possibly, at a reduced cost for those institutions in charge of protected areas management [8].
The Indian Site
For the Indian site, the results reported furnish evidence that:
• The changes identified in the evergreen forest by DIFF_NDVI_HR are underestimated in comparison with the ones found by CCA_HR at the same scale. For instance, Figure 12c shows changes that are completely missed in Figure 10c. In addition, in the Indian experiment, the overall accuracy of CCA_HR is larger (82.29%) than the one from DIFF_NDVI_HR (63.33%), with comparable errors in the stratified area estimate (Table 4). At HR, the results appear to confirm the inadequacy of direct NDVI image comparison for change detection in tropical forest (Figure 10i).
• For validation purposes, CCA was carried out also at VHR, even if the scale of the existing LC/LU map is coarse (1:50,000). The output change images in Figure 14c,f,i clearly validate the changes detected at HR by CCA_HR. The changes observed may be mainly due to both deforestation and the construction of new roads. Although the findings reported appear to be very interesting, more detailed information on the conditions of the changes in the tropical forest would be required. This may involve complementary in-field inspection and the collaboration of different scientific experts.
• It must be recalled that, in the Indian site, the evergreen forest layer is adjacent to different vegetation classes named "woodland to savanna woodland (tall)" and "tree savanna". Figure 10d,e evidences both such conditions and the presence of some changes in the vegetation classes which remained undetected by CCA (Figure 10e). This discrepancy can be ascribed to the a priori class selection made to perform the CCA analysis. In order to have inclusive detection of overall changes, additional CCA processing steps should be applied. However, the more inclusive process would be time consuming. On the other hand, DIFF_NDVI_HR can yield more comprehensive detection of possible changes in one single processing step, but the resulting overall accuracy would be lower than the one obtained through CCA. Undoubtedly, the choice of the most appropriate change detection technique will depend on specific users' requirements as well as on comprehensive cost effectiveness.

Conclusions
The study carried out offers grounds to suggest that CCA can be effectively applied for the detection of changes in forest ecosystems at multiple scales. In fact, when an existing fine scale LC/LU map is available, the comparison of the map with a single recent image (T2) can provide reliable results at both high and very high spatial resolution at rather low costs. The results obtained in the two study areas under investigation are comparable at both scales in terms of coverage (ha) of changed areas, even though with reduced error at fine scale. However, for fine scale studies, the cost of VHR image acquisition is still very high for use by public bodies and decision makers. As clearly evidenced in [6,21], and in [3], agreements between space agencies and national authorities should be encouraged to reduce such costs for a widespread EO data application to ecosystems monitoring. CCA, applied to coarser images (e.g., Landsat 8 OLI) of both sites investigated, appears to provide better accuracy values than the ones obtained by a traditional technique (DIFF_NDVI_HR).
Estimates of uncertainty in the analysis were made on the basis of stratified sampling and recommendations found in [1,2]. However, it seems worth noting that the quality of the input LC/LU map and the semantics of the LC/LU classes can influence the accuracy of the change detection analysis.

The findings reported prove that the CCA technique can yield useful results for operational purposes, such as the detection of change in specific classes (e.g., forest and grassland) at different scales [30]. They also indicate that, by using different threshold values, output change maps can be produced with different levels of change detail and related accuracy values.

From the above discussion it can be concluded that the CCA technique investigated could allow long-term monitoring of natural ecosystems in support of conservation management. This would require quantifying LC/LU change dynamics, which can play an essential role in estimating forest accounts. The latter could provide a thorough measure of forest assets and flows of forest-related services, and yield information on how these variables can change through time [31]. Forest account information is linked to traditional indicators such as the gross domestic product. It can also be extended to include other forest products, such as fuel wood, and ecosystem services.

Figure 1. "Murgia Alta" Natura 2000 site: (a) the red line is the study site location and extension of "Murgia Alta" National Park and the blue line is the analysed area; (b) existing LC map dated 2006; and (c) available Worldview-2 input image (17,000 × 7000 pixels wide), 2 m resolution, 6 July 2012. False Colour Composite: R = 5, G = 7, B = 2.
Figure 4. Experiment 1-DIFF_NDVI_HR close-up area from Figure 3, Italian site: (a-c) close-ups of the needle leaved forest in Figure 3c, blue circle; (d-f) close-ups of a broadleaved forest in Figure 3c, yellow circle; and (g-i) close-ups of the mixed forest in Figure 3c, red circle.
Figure 6. Experiment 2.a-CCA_HR close-up areas from Figure 5, Italian site: (a-c) close-ups of the needle leaved forest shown in Figure 5c, blue circle; (d-f) close-ups of the broadleaved forest shown in Figure 5c, yellow circle; and (g-i) close-ups of the mixed forest in Figure 5c, red circle, with changes probably due to arson.
Figure 8. Experiment 2.b-CCA_VHR close-up area from Figure 7, Italian site: (a-c) close-ups of needle leaved forest in Figure 7c, blue circle; (d-f) close-ups of broadleaved forest in Figure 7c, yellow circle; and (g-i) close-ups of mixed forest in Figure 7c, red circle. Such changes might be due to arson.
Figure 10. Experiment 1-DIFF_NDVI_HR close-up areas from Figure 9, Indian site: (a-c) area close-ups in Figure 9c, red circle, where a new road is present at T2 in the evergreen forest (evidenced by the blue polygon overlaid on the image); (d-f) close-up areas in Figure 9c, blue circle; and (g-i) close-up areas in Figure 9c, yellow circle.
Figure 14. Experiment 2.b-CCA_VHR close-up areas from Figure 13, Indian site: (a-c) close-ups in Figure 13c, red circle, where the new road appears at T2 in an evergreen forest area (the blue polygon overlaid on the images); (d-f) close-ups in Figure 13c, blue circle, where forest degradation or community changes may have occurred at T2 in an evergreen forest; and (g-i) close-ups in Figure 13c, yellow circle. In these areas, deforestation appears to have occurred at T2, as documented by the VHR image in (h).
Table 1. Set of experiments and abbreviations used for the Italian site.
Table 2. Set of experiments and abbreviations used for the Indian site.
Table 3. Change detection matrix for the Murgia Alta site, CCA_HR and CCA_VHR results. Producer's and overall accuracies based on stratified estimation. TH indicates the threshold value applied to the Z-statistic image in the CCA experiments. A_m is the mapped changed area.
Table 4. Change detection matrix for the evergreen tropical forest in the Indian site. Results obtained by CCA_HR and CCA_VHR. Producer's and overall accuracy values are based on stratified estimation. TH indicates the threshold applied to the Z-statistic image in the CCA experiments. A_m is the mapped changed area.
2016-10-31T15:45:48.767Z
2016-10-16T00:00:00.000
{ "year": 2016, "sha1": "12277fe40f01ff27a593e4e3a29170b1414c39bb", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-4292/8/10/850/pdf?version=1477019929", "oa_status": "GOLD", "pdf_src": "Crawler", "pdf_hash": "12277fe40f01ff27a593e4e3a29170b1414c39bb", "s2fieldsofstudy": [ "Environmental Science", "Mathematics" ], "extfieldsofstudy": [ "Computer Science", "Geology" ] }
255608260
pes2o/s2orc
v3-fos-license
Habitat utilization and feeding ecology of small round goby in a shallow brackish lagoon

We examined small-scale distribution and feeding ecology of a non-native fish species, round goby (Neogobius melanostomus (Pallas, 1814)), in different habitats of a coastal lagoon situated in the south-western Baltic Sea. First observations of round goby in this lagoon were reported in 2011, 3 years before the current study was conducted, and information on this species' basic ecology in different habitats is limited. We found that mainly juvenile round gobies are non-randomly distributed between habitats and that abundances potentially correlate positively with vegetation density and thus structural complexity of the environment. Abundances were highest in shallower, more densely vegetated habitats, indicating that these areas might act as a refuge for small round gobies by possibly offering decreased predation risk and better feeding resources. Round goby diet composition was distinct for several length classes, suggesting an ontogenetic diet shift concerning crustacean prey taxa between small (≤ 50 mm total length, feeding mainly on zooplankton) and medium individuals (51–100 mm, feeding mainly on benthic crustaceans) and another diet shift of increasing molluscivory with increasing body size across all length classes. Differences in round goby diet between habitats within the smallest length class might potentially be related to prey availability in the environment, which would point to an opportunistic feeding strategy. Here, we offer new insights into the basic ecology of round goby in littoral habitats, providing a better understanding of the ecological role of this invasive species in its non-native range, which might help to assess potential consequences for native fauna and ecosystems.

Electronic supplementary material: The online version of this article (https://doi.org/10.1007/s12526-020-01098-0) contains supplementary material, which is available to authorized users.

Introduction
The round goby (Neogobius melanostomus (Pallas, 1814)) originates from the Ponto-Caspian region and has colonized multiple water bodies as a non-native fish species, including the Great Lakes, the Baltic Sea and several European rivers like the Rhine. Thus, it occurs in various temperate freshwater and brackish water systems and has become a dominant fish species in many of these invaded regions (Sapota and Skóra 2005; Kornis et al. 2012; Jůza et al. 2018). Concurrently, round gobies have become established in local food webs, feeding on a wide range of native organisms and serving as prey for several predatory fish species (e.g. cod and perch) and birds, such as cormorants and herons (Almqvist et al. 2010; Hempel et al. 2016; Oesterwind et al. 2017; Herlevi et al. 2018). In their colonized range, round gobies can have severe impacts on the native fauna as they compete with native fish species for food resources and habitat, and prey on fish fry and juveniles (e.g. Janssen and Jude 2001; Steinhart et al. 2004; Houghton and Janssen 2015; Hirsch et al. 2016). Direct predation negatively affects macroinvertebrate communities, leading to declines in abundance and biomass of certain prey species (e.g. Barton et al. 2005; Lederer et al. 2006). These interactions with native species and the extensive distribution of round goby can result in cascading food web alterations in invaded ecosystems (Skabeikis et al. 2019) with possible consequences for ecosystem functioning in these regions.
Therefore, a solid understanding of the basic ecology, such as habitat utilization and trophic role, of this fast-spreading invasive species is essential in order to estimate potential impacts on native fauna and ecosystems.

Studies from several colonized areas have examined the distribution among habitats and the diet of round goby. The species uses a variety of different habitats, but higher abundances are commonly associated with increased habitat complexity as provided by vegetated or rocky seabed (Ray and Corkum 2001; Sapota 2004; Cooper et al. 2007, 2009; Ramler and Keckeis 2020). Accordingly, round gobies show a preference for structurally more complex habitats, such as macrophytes and cobbles, over open sand areas in laboratory studies (Bauer et al. 2007; Duncan et al. 2011). Round gobies are mostly described as generalists with an opportunistic feeding strategy, adapting their diet flexibly to prey availability in the environment (Borcherding et al. 2013; Brandner et al. 2013; Nurkse et al. 2016). They prey mainly on benthic macroinvertebrates including crustaceans (e.g. amphipods), insect larvae (mainly chironomids), molluscs (such as bivalves and gastropods) and polychaetes. Fish are only consumed by larger individuals, whereas fish eggs and juveniles have a rather low contribution to round goby diet (e.g. Vašek et al. 2014; Ustups et al. 2016; Wiegleb et al. 2018; Hempel et al. 2019). However, round goby predation on eggs might generally be underestimated (Lutz et al. 2020). An ontogenetic diet shift has been reported for several water bodies, with the dietary proportion of molluscs increasing with body size (Brandner et al. 2013; Ustups et al. 2016; Hempel et al. 2019). The size at which ontogenetic diet shifts of round goby occur varies with study area, and slight differences in round goby diet concerning specific prey taxa exist, although the overall feeding ecology is rather similar among regions.

In the Baltic Sea, overall invasion rates of non-native species have consistently increased over the past decades, with the first record of round goby dating back to 1990 from the Polish coast (Skóra and Stolarski 1993; Leppäkoski and Olenin 2000). Since then, the range of this species has expanded extensively in coastal areas, emphasizing the importance of exploring its ecology and potential impacts on native ecosystems. Therefore, the aim of this study is to examine small-scale distribution and feeding ecology of round gobies in distinct littoral habitat types in a coastal brackish lagoon of the southern Baltic Sea, with a main focus on juveniles. Specifically, we (1) assess the abundance of round goby in different habitat types and (2) compare the diet between different size classes. Here, we present an overview of which prey items different round goby size classes feed on in the study area using stomach content data from late summer and autumn, right after the summer production peak. Additionally, we examine (3) how round goby diet might differ between habitats within one size class. Although first records of this species at the German coast of the Baltic Sea date back to 1998 (Winkler 2006), in our study area, round gobies were first caught in 2011 (Paul Kotterba, personal observation).
Thus, we are exploring the basic ecology of this invasive species in a region, which has been invaded rather recently but where round goby has already established a viable population, which is why information on its ecology, especially the consideration of several habitat types, is quite scarce (Oesterwind et al. 2017;Wiegleb et al. 2018). This study will therefore contribute to broadening the knowledge on round goby in its invaded range, which is needed to more reliably predict potential impacts on ecosystems. Study site and selected habitats Our study site "Greifswald Bay", a semi-enclosed inshore lagoon located in the south-western Baltic Sea at the German coast ( Fig. 1), covers an area of 510 km 2 with an average depth of 5.8 m. Mean salinity ranges between 7 and 9 (Reinicke 1989;Stigge 1989). The grounds of shallow waters contain almost exclusively sand, besides mud, boulders and rocks (Reinicke 1989). The phytal zone reaches from the water surface to a maximum water depth of about 4 m consisting mainly of pondweed (Potamogetonaceae) and seagrass (Zostera marina), whereas bladderwrack (Fucus vesiculosus) is most abundant on stony substrates. Red algae, such as Furcellaria lumbricalis, are associated with rocks in slightly deeper areas between 3 and 6 m water depth (Geisel and Meßner 1989). The local fish fauna comprises both marine and freshwater species such as herring (Clupea harengus), flounder (Platichthys flesus), pike (Esox lucius), pikeperch (Sander lucioperca) and perch (Perca fluviatilis) (Winkler 1989). First records of round goby in Greifswald Bay were reported in 2011 (Paul Kotterba, personal observation). Round gobies were sampled in different pre-defined habitat types at "Gahlkow", situated in the southern part of Greifswald Bay (Fig. 1). The littoral zone in this area is characterized by a depth-stratified succession of different submerged aquatic vegetation (SAV) communities, which represents a common structure of local soft bottom shore zones in the bay. Based on earlier studies (Moll et al. 2018) and footage recorded by an underwater camera, sampling areas were selected and habitats characterized according to the general phytal zonation in the bay described by Geisel and Meßner (1989) following the natural depth gradient. The "Potamogeton-zone" (PZ, pondweed zone) in our study ranged from 1 to 2 m water depth, where the seafloor is densely covered with different macrophyte species including Stuckenia pectinata and Z. marina (Kanstinger et al. 2018). The "Zostera-zone" (ZZ, seagrass zone) between 3 and 4 m depth was characterized by less dense and more patchily distributed SAV consisting mostly of Z. marina. No vegetation occurred in the "sub-phytal zone" (SZ) between 5 and 7 m depth, and the sediment consisted of bare sand. In all habitat types, the main bottom substrate was sand. Round goby sampling Round goby samples were taken with a 2 m-wide beam trawl (mesh size, 5 mm), which was towed over the seafloor with constant speed (2 knots). Hauls in the habitats were conducted with a towing time between 2 and 11 min, depending on SAV density (Supplementary Table S1). The position of the boat was recorded each second with a handheld global positioning system resulting in continuously recorded tracks for each haul. Track distance and beam trawl width were used to calculate the respective sampling area for each haul. 
Round gobies were sampled in late summer and autumn 2014 with respectively one sampling in August, October and November during daytime (between 10 am and 2 pm, local time). In August, one haul was conducted in the PZ and two additional hauls in the transition-zone between the PZ and ZZ in 2 to 3 m water depth ("Potamogeton/Zostera-transition-zone" = PZTZ). In October and November, three replicate hauls were conducted in each habitat type (PZ, ZZ, SZ) resulting in nine hauls in total for each month. The sampling procedure in August deviated slightly from the one in October and November because it was conducted as a pilot sampling, after which the methodology was improved and adjusted to local conditions, i.e. sampling in specific habitats and reduction of hauling duration (cf. Supplementary Table S1). Therefore, data from August was only included in the diet analysis, but not in the comparison of round goby abundance between habitats. Round goby samples were frozen directly on board and stored at − 30°C until further processing. Laboratory and stomach content analysis For the analysis of round goby diet composition, stomach content analyses were conducted. Sampled round gobies were counted, and total length (TL) and wet weight were recorded to the nearest mm and 0.01 g, respectively. For the stomach content analyses, gobies were assigned to three length classes (LC): ≤ 50 mm (LC 1 ), 51-100 mm (LC 2 ) and 101-150 mm (LC 3 ). Stomach contents were examined from at least ten gobies per LC for each haul in August, October and November. Round gobies were dissected ventrally, and the stomach was separated from the remaining digestive tract. Stomach contents were examined under a binocular microscope and prey items determined to the lowest possible taxonomic level. The presence/absence of prey organism taxa was noted for each stomach. The data on round goby diet has been used for a merely descriptive purpose in Oesterwind et al. (2017) before. Statistical analysis All statistical analyses were carried out in the open source software R, version 3.6.1 (R Core Team 2019). To test whether round goby abundance differed between habitat types, abundances were compared statistically between habitats for October and November, i.e. these months when the same habitats had been sampled (PZ, ZZ, SZ), by means of Generalized Linear Models (GLM) using Type II Sum of Squares with the car package (Fox and Weisberg 2011). GLMs were executed with round goby count data as the response variable using a quasipoisson distribution and a log-link function for October data and a negative binomial distribution (MASS package; Venables and Ripley 2002) and a squared-link function for November data. In both models, an offset with the respective sampling area of each haul was included. To ensure that GLMs met assumptions regarding data normality and homoscedasticity, residuals were plotted against fitted values. To assess how single habitats differed from each other concerning round goby abundance, post-hoc tests were conducted using the Bonferroni correction. For the comparison of round goby diet between length classes, data from the three sampling months (August, October and November) were pooled across all habitats to obtain a sufficient sample size. 
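A minimal R sketch of the abundance models described in this section is given below; the data frame, its column names and all values are hypothetical placeholders rather than the study's dataset, and the squared link reported for the November model is not reproduced here (a log link is used instead):

library(car)   # Anova() with Type II sums of squares
library(MASS)  # glm.nb() for the negative binomial model

hauls <- data.frame(                                         # hypothetical hauls
  count   = c(34, 29, 40, 3, 1, 2, 6, 4, 5),                 # gobies per haul
  habitat = factor(rep(c("PZ", "ZZ", "SZ"), each = 3)),      # habitat type
  area_m2 = c(480, 510, 470, 500, 520, 490, 515, 505, 495)   # sampled area (m^2)
)

# October-style model: quasipoisson with log link and an offset for sampling area
m_oct <- glm(count ~ habitat + offset(log(area_m2)),
             family = quasipoisson(link = "log"), data = hauls)
Anova(m_oct, type = "II")

# November-style model: negative binomial with the same offset
m_nov <- glm.nb(count ~ habitat + offset(log(area_m2)), data = hauls)
Anova(m_nov, type = "II")

# Bonferroni-adjusted pairwise habitat contrasts could then be obtained, e.g.
# with emmeans::emmeans(m_oct, pairwise ~ habitat, adjust = "bonferroni")

This sketch only shows the model structure; the residual diagnostics and post-hoc comparisons mentioned in the text would be applied to the same model objects.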
Therefore, the comparison among length classes gives an overview of what gobies of different sizes feed on in general in the study area irrespective of sampling month and without taking into account a possible variability in prey availability and round goby diet (for diet composition separated into length classes for each month, cf. Supplementary Fig. S1). To test whether diet composition differed between the three length classes, permutational multivariate ANOVA (PERMANOVA) with 9999 permutations was used on presence/absence data. A permutational test of multivariate dispersion (PERMDISP) was conducted prior to the PERMANOVA, to test whether the within-group spread of the observations to their group centroid (i.e. multivariate dispersions) was equal between length classes. To achieve homogeneity of multivariate dispersions, LC 3 had to be excluded from the analysis. Hence, PERMANOVA and the subsequent analysis were only applied for the comparison between LC 1 and LC 2 . A similarity percentage (SIMPER) analysis was conducted to assess the dissimilarity between LC 1 and LC 2 based on round goby diet composition and to identify the contribution of prey items to this diet difference. To test whether diet composition of round gobies differed between habitat types within one length class (LC 1 ), PERMDISP and PERMANOVA were applied for October and November data separately. Only LC 1 was considered in this analysis to avoid length class-based differences in diet to overshadow habitat effects, and because only gobies from this smallest LC were present in all habitat types (cf. Supplementary Table S1). The binomial dissimilarity index was used for the above described multivariate analyses (PERMDISP, PERMANOVA), as it can handle binary data and different sample sizes between groups. For the multivariate analyses, the vegan package (Oksanen et al. 2018) was used. Maps were generated in R using the following packages: GISTools, rgdal, raster and oceanmap (Brunsdon and Chen 2014;Bivand et al. 2017;Hijmans 2017;Bauer 2018). Results Besides round gobies, eleven other fish species were present in the beam trawl samples from August, October and November (Supplementary Table S2). Two pipefish species (Nerophis ophidion and Syngnathus typhle) represented the most abundant fish group ( Supplementary Fig. S2) followed by native gobies, Pomatoschistus spp., round goby and the three-spined stickleback Gasterosteus aculeatus. Other fish species had a comparatively low occurrence in the samples. Habitat utilization of round goby Round goby abundances were compared between habitats for October and November, since all three habitat types were sampled during these months, but not in August. In total, 124 round gobies were caught in October and 273 individuals in November across all sampled habitats (Supplementary Table S1). Total length ranged between 16 and 129 mm for October and November with the majority of gobies (95.97%) assigned to LC 1 (≤ 50 mm). Additionally, 3.78% were grouped into LC 2 (51-100 mm) and 0.25% into LC 3 (101-150 mm). Mean total length (± standard deviation) in October was 35.0 ± 12.9 mm and 37.3 ± 6.9 mm in November. Round goby abundance differed significantly between habitats for October based on GLM results (p = 0.027, F 2,6 = 7.03; Fig. 2a). However, post-hoc tests did not show any significant differences between habitats. 
Nevertheless, round goby abundance was clearly higher in the PZ (mean abundance ± standard deviation = 6.82 ± 0.71 n/100m 2 ) compared with the ZZ (0.62 ± 1.07 n/100m 2 ) and SZ (1.31 ± 1.70 n/100m 2 ), and post-hoc results were rather close to being significant for the comparison of the PZ and ZZ (p = 0.080). GLMs revealed that abundances differed significantly between habitats for November (p < 0.001, F 2,6 = 75.78; Fig. 2b) with higher goby abundances in the PZ (24.49 ± 10.54 n/100m 2 ) than in the ZZ (0.15 ± 0.15 n/100m 2 ) and SZ (0.19 ± 0.22 n/100m 2 ), which was confirmed by post-hoc comparisons.

Diet composition in different length classes
In total, 163 round goby stomachs were examined from August, October and November samples. Thereof, 26 stomachs only contained mucus and sand, and another five stomachs contained non-identifiable prey items, resulting in 132 round goby stomachs considered in the diet analysis. The diet composition showed distinct differences between the three length classes. While round gobies in LC 1 and LC 2 predominantly fed on arthropods (89% and 70% of gobies per length class respectively; Fig. 3a, b), the percentage of polychaetes and molluscs increased in the diet of LC 3 , with 60% of gobies consuming arthropods and molluscs and 80% feeding on polychaetes (Fig. 3c). The diet composition based on the three main taxa (displayed in Fig. 3a-c) divided into lower taxonomic groups (Fig. 3d-f) differed significantly between LC 1 and LC 2 (PERMANOVA: p < 0.001, F 1 = 20.10). The between-group dissimilarity between these length classes was 86% based on SIMPER results. Prey groups contributing most to this difference were copepods, ostracods, gastropods and isopods, which together explained 57% of the between-length class dissimilarity (for SIMPER results, cf. Supplementary Table S3). In LC 1 , ostracods and copepods were each consumed by 52% of round gobies, whereas individuals in LC 2 fed increasingly on gastropods (30% of gobies) and isopods (41% of gobies; Fig. 3d, e). Additionally, 17% of LC 1 gobies consumed cladocerans, which were not present in the diet of LC 2 . However, 33% of round gobies in LC 2 fed on amphipods. Keeping in mind that the sample size in LC 3 was rather low (n = 5), which is why no statistics could be applied, the diet composition in LC 3 showed differences to the diet of the smaller length classes. In LC 3 , a higher percentage of gobies consumed polychaetes (80%) and bivalves (40%; Fig. 3c and f). (Note the different scale of the y-axis in Fig. 3f; no arthropod prey items are displayed there, as they could not be assigned to any of the lower taxonomic groups, i.e. arthropod prey items were unidentifiable.)

Diet composition in different habitat types
The diet composition of round gobies was compared between habitat types for October and November in LC 1 . Due to nonhomogeneity of multivariate dispersions between habitats for October data, PERMANOVA could not be applied. For November data, PERMANOVA did not show a difference in diet composition between habitats (p = 0.066, F 2 = 2.39), although results were quite close to being significant at a 0.05 significance level. However, based on qualitative observations, diet composition showed certain differences between habitat types in LC 1 , both in October and November (Fig. 4). Round goby diet was more diverse in the PZ, comprising nine and six prey taxa in October and November, respectively, whereas a maximum of four different prey items was consumed in the ZZ and SZ.
Correspondingly, certain taxa, such as amphipods and isopods, were only present in round goby diet in the PZ. Four percent of individuals fed on amphipods in October and 8% in November, while 11% of gobies consumed isopods in October (Fig. 4a and d).

Discussion
In this study, we provide information on small-scale distribution and feeding ecology of a non-native fish species, round goby, in different habitats of a recently colonized shallow lagoon in the south-western Baltic Sea. This knowledge is essential for a better understanding of the ecological role of this invasive species in the littoral zone of colonized areas. We found higher round goby abundances in the shallower vegetated habitat, and the diet composition was distinct for different round goby size classes, additionally displaying certain differences between habitat types. Our findings could support the evaluation of how native fauna and thus ecosystems might be affected by the invasion and rapid expansion of round goby and therefore assist in management actions.

Habitat use of round goby
Both in October and November, round gobies were more abundant in the PZ compared with the other two habitats (cf. Fig. 2). Whereas GLMs showed significant differences in round goby abundance between habitats for both months, post-hoc tests indicated significant differences only for November. Non-significant post-hoc results for October data might be caused by the relatively low replicate number in general and a comparatively high number of gobies in one of the three replicate hauls in the SZ (cf. Supplementary Table S1), which might possibly not have been representative for this habitat type due to, e.g. comparatively dense vegetation. Nonetheless, abundances were clearly higher in the PZ (6.82 ± 0.71 n/100m 2 ) compared with the other two habitats (ZZ, 0.62 ± 1.07 n/100m 2 ; SZ, 1.31 ± 1.70 n/100m 2 ) based on qualitative observations in October (Fig. 2). Since the shallow PZ was characterized by dense vegetation, it most likely represented the structurally most complex habitat compared with the less vegetated ZZ and the bare sand area in the SZ. Additionally, round gobies sampled in October and November were comparatively small (mean total length ± standard deviation, 36.6 ± 9.2 mm), indicating that shallower, more structured areas with dense submerged aquatic vegetation (SAV) represent an important habitat for juvenile specimens at the study site. This is in accordance with general habitat preferences of round gobies in laboratory experiments, in which more complex cobble and macrophyte habitats were favoured over less complex open sand areas (Bauer et al. 2007; Duncan et al. 2011). Similarly, round gobies were more abundant in SAV and rock habitats than in water lily and bare sediment habitats in the Great Lakes (Ray and Corkum 2001; Cooper et al. 2007, 2009). In its native range, the Ponto-Caspian region, this species utilizes shallow hard substrates (rocks, gravel, mussel beds), but also seagrass meadows (Svetovidov 1964; Bogutskaya et al. 2004). However, Moran and Simon (2013) documented the occurrence of smaller round gobies in less complex gravel habitats, while larger individuals were associated with structurally more complex areas, suggesting that habitat type alone might not be the sole factor explaining the distribution of this species (Coulter et al. 2015).
The presence of small, mainly juvenile, round gobies at the study site might be related to its relatively young invasion history as small round gobies have been linked to more recently colonized sites and larger individuals to originally invaded areas (Ray and Corkum 2001). Round gobies at our study site reached a total length of up to 14 cm and therefore seemed to be aged between 0 and 5 years according to Florin et al. (2018), which would be in accordance with their presumptive young invasion history in Greifswald Bay (study performed 3 years after first observation). Regardless of whether round gobies always reach higher abundances in more complex vegetated habitats, several benefits are associated with using this type of habitat. Densely vegetated habitats generally offer lower predation risk than open areas, most likely due to a higher availability of hiding places and reduced conspicuousness of prey fish (e.g. Savino and Stein 1982;Chacin and Stallings 2016). Indeed, predation on round gobies is lower in more sheltered habitats, and especially small individuals are exposed to high predation pressure in open habitats (Belanger and Corkum 2003). Thus, the shallow PZ might serve as a refuge for small round gobies offering increased protection against predation due to dense SAV. Additionally, more complex vegetated habitats might provide better feeding conditions with a higher availability of prey organisms for juvenile round gobies as high macroinvertebrate abundance and biomass, and especially mobile organisms, are usually associated with vegetation (Boström and Bonsdorff 1997;Christie et al. 2009;Henseler et al. 2019). However, since habitat types examined in this study did not only differ concerning SAV density but also with regard to depth, the high occurrence of small round gobies cannot unambiguously be linked to the structural complexity of the habitats, as depth might also play a role in structuring the fish community. Studies have shown that juvenile fish abundances can be higher in shallow habitats compared with similarly complex, deeper areas, which is often explained by lower abundances of large predatory fish in shallow habitats and thus reduced predation risk for juveniles (Baker and Sheaves 2007;Ryer et al. 2010). Yet, this relation does not always apply, suggesting that juvenile abundance is not exclusively structured by vertical depth. In our study, it seems rather unlikely that effects of depth differences between habitats override effects of habitat complexity, especially as high fish abundances and low predation risk are generally associated with structurally more complex, vegetated habitats (Nagelkerken et al. 2001;Heck Jr. et al. 2003;La Mesa et al. 2011;Chacin and Stallings 2016;Reynolds et al. 2018). Furthermore, when conducting preliminary beach seine samplings in shallow areas of Greifswald Bay, round gobies were always present in vegetated areas, but never on bare sand (personal observation). Nevertheless, the depth gradient at the study site might have influenced round goby distribution additionally, promoting high abundances of small round gobies in the shallower habitat. Moreover, samples were only taken during daytime in our study. Therefore, future studies should investigate this aspect more closely by assessing round goby abundances in adjacent vegetated and non-vegetated habitats of the same depth and during different times of day (cf. Ray and Corkum 2001). 
Vegetated habitats might only serve as a refuge during daytime when small round gobies are susceptible to predation. However, round gobies might move into more open areas during the night, with darkness potentially offering increased protection against predators. Higher abundances of juvenile round gobies in the PZ could have implications for the native fish community in Greifswald Bay. These shallow vegetated areas serve as important spawning beds for spring-spawning Atlantic herring (Kanstinger et al. 2018), and round gobies smaller than 10 cm have been reported to feed on herring eggs in the field during spring (Wiegleb et al. 2018). Thus, large numbers of juvenile individuals could have a negative impact on herring recruitment. Furthermore, round goby can compete for habitat and food resources with native fish species (Hirsch et al. 2016). In our study, fish caught alongside round goby represented species commonly inhabiting the study area (Winkler and Thiel 1993). Gobies of the genus Pomatoschistus had a comparatively high occurrence in the samples (Supplementary Fig. S2), and, as benthic fish, they are likely to interact most with similarly sized round gobies due to competition. The local predatory fish community might also benefit from the presence of small round gobies, as various native species including perch and pikeperch feed on round goby (Oesterwind et al. 2017), which therefore are likely to become part of the local food web, transferring additional energy to higher trophic levels (Campbell et al. 2009).

Diet composition of round goby
Several benthic macroinvertebrate species were identified in the round goby stomachs, representing organisms that are commonly found in the study area (Jönsson et al. 1997; for a complete prey species list, cf. Table 1 in Oesterwind et al. 2017). Freely moving species, such as Idotea chelipes, gammarids and Hydrobia spp., as well as the bivalve Mytilus sp., are frequently associated with vegetated areas in Greifswald Bay, whereas chironomids, ostracods, Hediste diversicolor, Corophium spp. and the bivalves Cerastoderma spp., Limecola balthica, and Mya arenaria inhabit sandy to muddy sediments (Geisel and Meßner 1989; Günther 1998). Thus, the diet spectrum of round goby incorporated a broad variety of prey organisms from the local macroinvertebrate community. Although round gobies from both LC 1 and LC 2 mainly fed on arthropods, we found a significant difference in diet composition between these length classes. Whereas copepods, ostracods and cladocerans were present in the diet of LC 1 gobies, medium-sized gobies from LC 2 increasingly fed on isopods, amphipods and gastropods. Round gobies from the two smaller length classes thus consumed different crustacean prey taxa, indicating an ontogenetic diet shift from zooplanktonic organisms to larger crustaceans for round gobies at a size of about 50 mm TL. This is in line with other studies from the Baltic Sea (Rakauskas et al. 2008; Skabeikis and Lesutienė 2015; Ustups et al. 2016) and the Great Lakes (Cooper et al. 2009; Brush et al. 2012; Olson and Janssen 2017), which report a high proportion of copepods, cladocerans and ostracods in the diet of smaller individuals. An increasing importance of larger crustaceans with round goby size has also been found in other areas. However, besides amphipods, other crustacean taxa, such as mysids and decapods, are consumed by similar-sized round gobies (Rakauskas et al. 2008; Skabeikis and Lesutienė 2015; Ustups et al. 2016).
Hence, the overall ontogenetic diet trend seems to be similar for different invaded regions, but the specific diet composition (i.e. the prey taxa) might depend on the study area and, most likely, on which prey items are available in a specific environment. Contrary to the smaller length classes, large gobies from LC 3 predominantly consumed polychaetes and bivalves (keeping in mind the small sample size and the absence of statistical analysis for LC 3 ), which might confirm an ontogenetic diet shift with increasing molluscivory for our study area. A higher percentage of molluscs, especially bivalves, in the diet of larger round gobies is known from the Ponto-Caspian region (Svetovidov 1964; Pinchuk et al. 2003; Bogutskaya et al. 2004), as well as from multiple colonized areas, such as the Baltic Sea (Skora and Rzeznik 2001; Karlson et al. 2007), the Great Lakes (Duncan et al. 2011) and river or canal systems (Brandner et al. 2013; Hempel et al. 2019). In the Baltic Sea, mainly Limecola balthica and Mytilus spp. are consumed, whereas dreissenid mussels constitute the main proportion of mollusc prey in the Great Lakes. In its native range, round gobies feed on a large number of mollusc species, amongst others including Cerastoderma glaucum and Abra segmentum (Svetovidov 1964; Kvach and Zamorov 2001; Pinchuk et al. 2003; Bogutskaya et al. 2004). Similar to our findings, a higher contribution of annelids, including polychaetes, to the diet of larger round gobies has been documented in a brackish water canal and the southeastern Baltic Sea (Skabeikis and Lesutienė 2015; Hempel et al. 2019), although other studies found a declining importance of polychaetes with increasing goby size (Skora and Rzeznik 2001). Thus, our results show a certain accordance with other areas inhabited by this species concerning the diet spectrum of different length classes, though regional differences in diet composition exist, which are possibly linked to the prey availability in the respective environments. The distinct diet composition of differently sized individuals indicates that the impact of round gobies on native macroinvertebrate communities will depend on which round goby sizes prevail in a specific region. Although no fish eggs and larvae were found in the stomachs examined in this study, predation of round goby on the fry of resident species at the study site, such as Atlantic herring, cannot be excluded, as it has been shown that small round gobies prey on herring eggs during springtime in Greifswald Bay (Wiegleb et al. 2018). Generally, round goby predation on eggs might be difficult to detect by means of stomach content analyses (Lutz et al. 2020). However, since the colonization of the area and the corresponding restructuring of the regional food web are still in progress, consequences for the ecosystem and fishing resources are not yet fully predictable (Campbell et al. 2009; Hempel et al. 2016). Based on qualitative observations, the diet composition of round goby differed between the studied habitat types in autumn (in both October and November) within LC 1 . Specifically, the diet in the PZ was most distinct from the other two habitats in both sampling months. Round gobies in the PZ fed on a higher number of different prey taxa, which could be related to a higher species richness in this shallower vegetated habitat compared with the structurally less complex ZZ and SZ.
It has been shown that more complex, vegetated habitat types usually possess a different invertebrate community composition in addition to higher invertebrate species richness and diversity than non-vegetated areas (Boström and Bonsdorff 1997; Henseler et al. 2019). Accordingly, some organisms were exclusively found in round goby diet from the PZ, such as amphipods and isopods. These mobile crustaceans were potentially associated with SAV in the PZ. Crustaceans such as gammarids and Idotea chelipes inhabit vegetated areas in Greifswald Bay (Geisel and Meßner 1989; Günther 1998), and similar findings from the Gulf of Gdansk in the southeastern Baltic Sea show that the isopod Idotea balthica constitutes a high proportion of round goby diet in a macrophyte habitat (Skora and Rzeznik 2001). This might indicate a link between round goby diet and the prey availability in a specific environment, with round gobies feeding on whichever prey taxa occur. It has been documented that round goby possesses an opportunistic feeding strategy and flexibly adapts its diet to seasonal changes in invertebrate community composition (Borza et al. 2009; Borcherding et al. 2013; Brandner et al. 2013). Thus, round goby might not have a preference for specific prey taxa but is instead able to consume a broad variety of different prey items as a generalist species (Raby et al. 2010; Brandner et al. 2013; Nurkse et al. 2016). In summary, our results on diet composition of round goby in different habitats might indicate an opportunistic feeding behaviour of this species, which represents a trait often expressed by invasive species. Opportunistic feeding might, in this respect, facilitate the colonization of different areas and may therefore be linked to invasion success (Ribeiro et al. 2007; Rewicz et al. 2014; Pettitt-Wade et al. 2015; de Carvalho et al. 2019). However, our findings on habitat-dependent feeding of round goby have to be considered with caution, keeping in mind that the results cannot be confirmed by statistical analysis, probably due to low sample sizes in the ZZ and SZ (number of gobies ranging between 3 and 11 in October and November). Moreover, no data on prey availability in the studied habitats were collected. To fully investigate opportunistic feeding of round gobies, it would be necessary to quantify invertebrate communities in multiple habitats and compare these to round goby diet composition, which was, however, not accomplished within our study. Furthermore, future investigations should also include other habitat types in the study area, which could not be considered in this study, but might help to increase knowledge on the round goby's plasticity regarding habitat use and feeding behaviour.

Conclusion
Overall, our study suggests that shallower, more structured habitats serve as important areas for juvenile round gobies, as higher abundances were found in the shallow, densely vegetated habitats of the littoral zone compared with the deeper, less structured habitats at the study site. Diet composition of round goby differed between length classes. Whereas the smallest individuals (LC 1 ) mostly fed on zooplankton, including copepods, ostracods and cladocerans, medium-sized specimens (LC 2 ) increasingly consumed benthic crustaceans, such as amphipods and isopods, suggesting an ontogenetic diet shift regarding crustacean prey organisms. As commonly stated in the literature, the proportion of molluscs increased in the diet of larger round gobies (LC 3 ).
Furthermore, we offer indications for habitat-specific feeding of round goby within the smallest length class, which would conform to the generally suggested opportunistic feeding strategy for this species. Our findings shed light on the basic ecology of a widely spread invasive fish species in a quite recently, and therefore not yet extensively studied, colonized region, which could contribute to assessing its impact in non-native ecosystems and to the design of adequate management actions. Ethical approval All applicable international, national, and/or institutional guidelines for the care and use of animals were followed by the authors. Sampling and field studies All necessary permits for sampling and observational field studies have been obtained by the authors from the competent authorities and are mentioned in the acknowledgements, if applicable. Data availability The datasets generated during and/or analysed during the current study are available from the corresponding author on request. Authors' contributions Designed the field study: CH, PK, DO. Performed the field work and lab analysis: CH, PK, DO. Conducted the statistical analysis of the data: CH. Wrote the manuscript: CH, PK, EB, MCN, DO Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2023-01-12T14:10:40.085Z
2020-09-21T00:00:00.000
{ "year": 2020, "sha1": "e266581efd39a14c060d3a6bdf0c7c8692f7871d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s12526-020-01098-0.pdf", "oa_status": "HYBRID", "pdf_src": "SpringerNature", "pdf_hash": "e266581efd39a14c060d3a6bdf0c7c8692f7871d", "s2fieldsofstudy": [ "Environmental Science", "Biology" ], "extfieldsofstudy": [] }
245805285
pes2o/s2orc
v3-fos-license
Thunbergia laurifolia aqueous leaf extract ameliorates paraquat-induced kidney injury by regulating NADPH oxidase in rats

We aim to study the antioxidant ability of Thunbergia laurifolia (TL) aqueous leaf extract against PQ-induced kidney injury. Rats were divided into four groups (n = 4 per group): control group, the rats received a subcutaneous injection of 1 ml/kg body weight (BW) normal saline; PQ group, the rats received a subcutaneous injection of 18 mg/kg BW paraquat dichloride; PQ + TL-low dose (LD) group, the rats received a subcutaneous injection of 18 mg/kg BW paraquat dichloride and were orally gavaged with TL leaf extract (100 mg/kg BW); and PQ + TL-high dose (HD) group, the rats received a subcutaneous injection of 18 mg/kg BW paraquat dichloride and were orally gavaged with TL leaf extract (200 mg/kg BW). This study analyzed blood urea nitrogen (BUN) and creatinine levels, renal malondialdehyde (MDA) levels, kidney histopathology, mRNA expression of renal NADPH oxidase (NOX) and protein expression of renal NOX-1 and NOX-4 using immunohistochemistry. The PQ group showed a significant increase in BUN and creatinine levels and in the renal MDA level, and an upregulation of the mRNA expression of renal NOX compared with the control group. It also demonstrated mild hydropic degeneration of the tubules. Immunohistochemistry displayed a significant increase in the protein expression of renal NOX-1 and NOX-4 compared with the control group. TL aqueous leaf extract, especially in the high dose group, significantly reduced the BUN and creatinine levels and the renal MDA level, and downregulated the mRNA expression of renal NOX and the protein expression of renal NOX-1 and NOX-4 compared with the PQ group. Furthermore, it can improve PQ-induced kidney injury. TL aqueous leaf extract can ameliorate PQ-induced kidney injury by regulating oxidative stress through inhibiting NOX, especially NOX-1 and NOX-4 expression.

Introduction
Paraquat (PQ) is a highly toxic herbicide which is widely used in many countries of the world (Yu et al., 2014). Due to its severe toxicity and a fatality rate of 60-80%, many countries have banned the use of PQ (Kim et al., 2017; Weng et al., 2017). PQ toxicity causes severe acute and chronic health problems which may range from mild to fulminant and commonly cause mortality (Safaei Asl and Dadashzadeh, 2016; Oa et al., 2013). Even with proper use, PQ ingestion can cause the development of intracellular oxidative stress in multiple organs (Dinis-Oliveira et al., 2008). Previous studies have reported that PQ is a common cause of kidney injury in patients (Weng et al., 2017; Isha et al., 2018). On histological examination, PQ causes mild hydropic degeneration of the proximal convoluted tubules in the kidney (Lock and Ishmael, 1979). Oxidative stress is strongly implicated in the pathogenesis of kidney injury upon PQ exposure (Tan et al., 2015; Ranjbar et al., 2015). It results from an imbalance between reactive oxygen species (ROS) generation and antioxidant defenses (Liguori et al., 2018). ROS mainly comprise the superoxide radical (O2•−), hydroxyl radical (•OH), and hydrogen peroxide (H2O2), which are highly reactive towards macromolecules, leading to cellular and tissue damage (Schieber and Chandel, 2014). The nicotinamide adenine dinucleotide phosphate (NADPH) oxidase (NOX) family, a group of ROS-generating enzymes, plays a crucial role in PQ-induced oxidative stress (Cristóvão et al., 2009).
Due to the lack of a specific treatment, medicinal plants are being investigated to explore a specific and effective antidote against PQ poisoning (Suntres, 2018). Thunbergia laurifolia (TL), commonly known as blue trumpet vine or laurel clock vine, is an important herb in Thai traditional medicine (Chan et al., 2011; Junsi et al., 2020). In Thailand, TL leaves are commonly consumed as herbal tea for detoxification purposes. It has been reported that they possess various pharmacological properties, including antioxidant, anti-microbial, anti-proliferative, hepatoprotective, and anti-inflammatory activities, with non-toxic effects (Chan et al., 2011). Therefore, this study aims to investigate whether TL aqueous leaf extract possesses beneficial effects that help alleviate PQ-induced kidney injury by inhibiting NOX.

Herbal collection
The TL leaves were obtained from Nakhon Si Thammarat province, Thailand. Voucher specimen Thunbergia laurifolia AHS2008120101 was deposited at the Herbarium of the Plant Genetic Conservation Project under The Royal Initiative of Her Royal Highness Princess Maha Chakri Sirindhorn (RSPG), Walailak University, Nakhon Si Thammarat.

Extraction
The TL leaves were dried and ground into powder using a blender. The powder (10 g) was extracted with 100 ml of boiled distilled water. The filtrate was lyophilized in a freeze dryer (Eyela, Tokyo, Japan) (yield 13% w/w) and preserved at −80 °C for further investigation.

Animal treatments
All animal procedures were approved and performed in accordance with the Animal Ethics Committee, Walailak University (certification no. 002/2018). This study was performed in a manner that minimized animal suffering and the number of animals used. Male Wistar rats (Rattus norvegicus) (n = 16) aged six weeks were obtained from Nomura Siam International Co, Ltd., Bangkok, Thailand. All of them were kept under controlled conditions of 23 ± 2 °C and 50-60% relative humidity with a 12 h light/dark cycle. They were provided access to diet and water. They were randomly divided into 4 groups: control group, the rats received a subcutaneous injection of 1 ml/kg body weight (BW) normal saline once a week for 6 weeks; PQ group, the rats received a subcutaneous injection of 18 mg/kg BW paraquat dichloride once a week for 6 weeks; PQ + TL-LD group, the rats received a subcutaneous injection of 18 mg/kg BW paraquat dichloride once a week for 6 weeks and were orally gavaged with low dose TL leaf extract (100 mg/kg BW); and PQ + TL-HD group, the rats received a subcutaneous injection of 18 mg/kg BW paraquat dichloride once a week for 6 weeks and were orally gavaged with high dose TL leaf extract (200 mg/kg BW) once a day for 6 weeks. PQ and TL treatments were performed according to the methods of Orito et al. (2004) and Tangpong and Satarug (2010). The rats were euthanized with thiopental sodium overdose (100 mg/kg BW) anesthesia, and then the kidneys were collected.

Biochemical analysis
Blood samples were centrifuged at 3000 rpm for 5 min and sera were collected. Levels of BUN and creatinine were measured using a Cobas Mira Plus CC Chemistry Analyzer (Switzerland).

Determination of MDA level
The measurement was performed using the OxiSelect™ TBARS Assay Kit (CAT no. STA-330, Cell Biolabs, Inc., USA) according to the manufacturer's protocol. The kidney tissues were cut into small pieces, washed with phosphate-buffered saline (PBS) and homogenized to give a final concentration of 50 mg/mL in PBS containing 1X butylated hydroxytoluene (BHT).
The tissues were then homogenized on ice and centrifuged at 10,000 × g for 5 min to collect the supernatant. Then, 100 μL of sample or MDA standard was added to microcentrifuge tubes, and 100 μL of the SDS lysis solution was added and mixed thoroughly. All the samples were incubated for 5 min at room temperature. Then, 250 μL of thiobarbituric acid (TBA) reagent was added. Each tube was closed and incubated at 95 °C for 60 min. The tubes were then removed and cooled to room temperature in an ice bath for 5 min. All tubes were then centrifuged at 3000 rpm for 15 min. The supernatant was removed, and finally 200 μL of each sample and MDA standard was transferred to a 96-well microplate compatible with a spectrophotometric plate reader. The absorbance was read at 532 nm. Microscopic examination of histological alterations The kidney sections were fixed with 10% neutral buffered formalin solution, processed, and embedded in paraffin. The tissue was sliced into 5 μm sections using a microtome and then stained with hematoxylin and eosin (H&E). The samples were observed under a light microscope (Olympus BX53F2, Japan). Determination of mRNA expressions of renal NOX using reverse transcription-polymerase chain reaction (RT-PCR) Total RNA was prepared using a Tissue Total RNA Mini Kit (Geneaid, Korea). The purity and quantity of RNA were analyzed using a NanoDrop™ One/OneC Microvolume UV-Vis Spectrophotometer (Thermo Scientific™, USA). Reverse transcription and PCR were performed for amplification. The thermal cycling conditions were set up as follows: initial denaturation at 95 °C for 15 min; denaturation at 94 °C for 1 min, annealing at 65 °C for 1 min, and extension at 72 °C for 1 min; and final elongation at 72 °C for 10 min. The primers (sequences given 5′-3′) were as follows: NOX: forward primer, GGAAATAGAAAGTTGACTGGCCC and reverse primer, GTATGAGTGCCATCCAGAGCAG (Rashed et al., 2011); and β-actin: forward primer, TTCTTTGCAGCTCCTTCGTTGCCG and reverse primer, TGGATGGCTACGTACATGGCTGGG (Bessa et al., 2012). The samples were examined on a 2% agarose gel. Following ethidium bromide staining, the bands were visualized using a UV transilluminator. The density of the PCR product was analyzed using GeneTools image analysis software (Syngene, Frederick, MD, USA). Immunohistochemistry of the renal NOX-1 and NOX-4 After deparaffinization, the sections were rehydrated and heated in sodium citrate buffer solution at pH 6.0 (Merck, Germany). Then the slides were incubated with 3% H2O2 in distilled water to block endogenous peroxidase activity. Blocking buffer (normal goat serum) was used to block nonspecific binding sites. The slides were incubated with primary antibodies, rabbit anti-mouse NOX-1 (Santa Cruz Biotechnology Inc., USA) and NOX-4 (Santa Cruz Biotechnology Inc., USA), overnight, and then incubated with a secondary antibody (Vector Laboratories, CA, USA). Avidin-biotin complex (Vectastain ABC Kit, Vector Laboratories, USA) conjugated with horseradish peroxidase was added to the sections and the DAB Kit (Vector Laboratories, USA) was applied. The sections were then counterstained with Mayer's hematoxylin (Merck, Germany), and were dehydrated and mounted. The slides were scored in 50 random microscopic fields at high magnification: score 0 = none; score 1 = 1-25% immunopositive cells; score 2 = 26-50% immunopositive cells; score 3 = 51-75% immunopositive cells; and score 4 = >75% immunopositive cells. Statistical analysis Results were expressed as mean ± standard error of the mean (SEM).
Differences between the groups were determined using one-way analysis of variance followed by the least significant difference (LSD) test. P values of <0.05 were considered statistically significant. Ethical statement All animal procedures were carried out in accordance with the guidelines of the Animal Ethics Committee of Walailak University (certification no. 002/2018). Effects of TL aqueous leaf extract on BUN and creatinine in PQ-treated rats As illustrated in Figure 1, the PQ group showed a significant increase in the levels of BUN and creatinine (p < 0.001) compared with the control group. The PQ + TL-LD group demonstrated a significant decrease in the levels of BUN (p < 0.05) and creatinine (p < 0.001) compared with the PQ group. Moreover, the PQ + TL-HD group also demonstrated a significant decrease in the levels of BUN (p < 0.05) and creatinine (p < 0.001) compared with the PQ group. Effects of TL aqueous leaf extract on renal MDA levels in PQ-treated rats The PQ group showed a significant increase in the renal MDA level compared with the control group (p < 0.05). TL aqueous leaf extract in the low dose group produced no statistically significant difference in the renal MDA level compared with the PQ group. However, the high dose of TL aqueous leaf extract significantly decreased the renal MDA level compared with the PQ group (p < 0.05) (Figure 2). Effects of TL aqueous leaf extract on pathological alterations of the kidney in PQ-treated rats The PQ group showed mild hydropic degeneration of the tubules. TL aqueous leaf extract at both the low dose and the high dose improved the pathological alterations of the kidney (Figure 3). Effects of TL aqueous leaf extract on the renal NOX expression in PQ-treated rats The PQ group demonstrated a significant upregulation in the mRNA expression of renal NOX compared with the control group (p < 0.001). However, TL aqueous leaf extract at both the low and high dose significantly downregulated the mRNA expression of renal NOX compared with the PQ group (p < 0.001) (Figure 4, Supplementary data 1). Effects of TL aqueous leaf extract on immunohistochemistry of renal NOX-1 and NOX-4 in PQ-treated rats The PQ group showed a significant increase in the expressions of renal NOX-1 (p < 0.05) and NOX-4 (p < 0.001) compared with the control group. TL aqueous leaf extract at both the low and high dose significantly reduced the expressions of renal NOX-1 and NOX-4 (p < 0.05) compared with the PQ group (Figure 5). Discussion PQ is considered a highly toxic herbicide worldwide. PQ poisoning in humans primarily occurs due to accidental or intentional ingestion, which causes fatal multiple organ failure mediated by ROS (Kim et al., 2008). This study demonstrated that PQ caused elevated BUN and serum creatinine levels, similar to previous work (Kan et al., 2012). BUN and creatinine are nitrogenous end products of metabolism which are widely used as valuable screening tests to evaluate kidney damage (Nisha and Srinivasa Kannan, 2017). BUN is a non-protein nitrogenous end product. The concentration of BUN depends on protein intake, the body's capacity to catabolize protein, and the capacity of the renal system to excrete urea (Salazar, 2014). Creatinine is also a non-protein nitrogenous waste product, generated by the breakdown of creatine and phosphocreatine, which is accepted as an indicator to evaluate renal function (Price and Finney, 2000).
A previous study demonstrated that PQ poisoning caused a marked increase in creatinine levels, which is associated with loss of renal function and indicated by increased generation of creatine and creatinine following severe oxidative stress (Mohamed et al., 2015). This study indicated that elevated levels of BUN and serum creatinine are associated with PQ-induced kidney injury. Lipid peroxidation is involved in PQ poisoning, resulting from the induction of ROS and eventually causing pathological alterations of cells and tissues (Bus et al., 1976). MDA is one of the final products of polyunsaturated fatty acid peroxidation in cells and is used as a lipid peroxidation biomarker. Overproduction of MDA indicates oxidative damage (Gaweł et al., 2004). This study demonstrated that PQ caused an increase in the level of renal MDA, consistent with a previous study (Amanov et al., 1994). The level of plasma MDA has also been shown to increase in PQ intoxication (Yasaka et al., 1981). In this study, the increase in the level of renal MDA indicated oxidative damage and might reflect ROS generation in PQ-treated rats. The NOX family is an important source of ROS production in biological systems. It comprises seven isoforms: NOX-1, 2, 3, 4 and 5, and DUOX-1 and 2 (Burtenshaw et al., 2017). It is a membrane-bound enzyme complex that transfers electrons across biological membranes and is involved in the pathogenesis of oxidative stress (Bedard and Krause, 2007). This study showed that the mRNA expression of renal NOX was upregulated in the PQ-treated rats. Furthermore, immunohistochemistry illustrated increased protein expressions of renal NOX-1 and NOX-4. We suggest that NOX, especially NOX-1 and NOX-4, is implicated in the pathogenesis of oxidative stress following PQ exposure. Currently, there is no specific effective antidote for treating PQ poisoning. TL is one of the most important Thai herbs and is commonly used in Thai traditional medicine. In Thailand, herbal tea from TL leaves is widely known for detoxification. Moreover, the aqueous leaf extract of TL has been found to exhibit low toxicity in prolonged use and significant anti-mutagenic activity (Saenphet et al., 2005; Chivapat et al., 2010). It also possesses various beneficial properties such as antioxidant, anti-inflammatory and anti-microbial activities (Junsi et al., 2020; Wonkchalee et al., 2012; Chan et al., 2011). We hypothesized that TL aqueous leaf extract might possess antioxidant properties that help alleviate PQ-induced kidney injury by inhibiting NOX. This study found that both the low dose and the high dose of TL aqueous leaf extract could reduce the levels of BUN and creatinine, downregulate the mRNA expression of renal NOX, reduce the protein expressions of renal NOX-1 and NOX-4, and alleviate kidney injury following PQ treatment, possibly due to its antioxidant abilities. The results showed that only the high dose of TL aqueous leaf extract could reduce the level of MDA. The modulating properties might be attributable to the constituents of the aqueous extract. A previous study demonstrated that aqueous TL leaf extract contained primarily caffeic acid and apigenin, as determined by high-performance liquid chromatography (Oonsivilai et al., 2007). Taken together, we found that TL aqueous leaf extract possessed antioxidant properties, which could modulate PQ-induced kidney injury by regulating oxidative stress through inhibiting NOX expression, especially that of NOX-1 and NOX-4. Conclusions PQ caused an increase in the levels of BUN, serum creatinine and renal MDA.
Additionally, it upregulated the mRNA expression of renal NOX. Immunohistochemistry demonstrated an increase in renal NOX-1 and NOX-4 expressions. This study shows that a high dose of TL aqueous leaf extract is more effective than a low dose in that it possesses antioxidant properties, which help reduce the renal MDA level, downregulate the mRNA expression of renal NOX and reduce the protein expressions of renal NOX, specifically NOX-1 and NOX-4. As a result, it can alleviate kidney pathology induced by PQ. Author contribution statement Sarawoot Palipoch: Conceived and designed the experiments; Performed the experiments; Analyzed and interpreted the data; Contributed reagents, materials, analysis tools or data; Wrote the paper. Chuchard Punsawad: Performed the experiments; Contributed reagents, materials, analysis tools or data. Phanit Koomhin: Performed the experiments; Contributed reagents, materials, analysis tools or data. Funding statement This work was supported by a grant from the Institute of Research and Development (under the contract WU 61111) and by the new strategic research project (P2P), Walailak University, Thailand. Data availability statement Data included in article/supplementary material/referenced in article. Declaration of interests statement The authors declare no conflict of interest. Additional information Supplementary content related to this article has been published online at https://doi.org/10.1016/j.heliyon.2022.e09234.
2022-01-08T16:16:59.230Z
2022-04-01T00:00:00.000
{ "year": 2022, "sha1": "f97d2f52f3e20e105d884a9cc2787d16685d5243", "oa_license": "CCBYNCND", "oa_url": "http://www.cell.com/article/S2405844022005229/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6fa53645d43d743c13064a2784f0f24389760515", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [] }
235713666
pes2o/s2orc
v3-fos-license
Assessment of Mycotoxin Exposure in a Rural County of Chile by Urinary Biomarker Determination Aflatoxin B1 (AFB1), ochratoxin A (OTA), zearalenone (ZEN), and deoxynivalenol (DON) are frequent mycotoxins that may cause carcinogenic, mutagenic, estrogenic, or gastrointestinal effects. The aim of this study was to assess the exposure to and risk from AFB1, OTA, ZEN, and DON in 172 participants of the Maule Cohort (MAUCO) by a biomarker analysis in urine and to associate their exposure with food consumption and occupation. Mycotoxins in the first morning urine were analyzed by solid-phase extraction and quantified by ultra-high-performance liquid chromatography with tandem mass spectrometric detection (UPLC-MS/MS). Participants' information regarding food consumption, occupation, and other characteristics was obtained from a baseline and 2-year follow-up survey of the cohort. The prevalence and mean levels of mycotoxins in the urine were as follows: DON 63%, 60.7 (±78.7) ng/mL; AFB1 8%, 0.3 (±0.3) ng/mL; α-zearalenol (α-ZEL) 4.1%, 41.8 (±115) ng/mL; β-ZEL 3.5%, 17.4 (±16.1) ng/mL; AFM1 2%, 1.8 (±1.0) ng/mL; OTA 0.6% (1/172), 1.3 ng/mL; and ZEN 0.6%, 1.1 ng/mL. These results translated into exposures of DON, ZEN, and aflatoxins of public health concern. Participants who consumed coffee and pepper the day before had a significantly greater presence of DON (OR: 2.3, CI95 1.17–4.96) and total ZEL (OR: 14.7, CI95 3.1–81.0), respectively, in their urine. Additionally, we observed associations between the habitual consumption of beer and DON (OR: 2.89, CI95 1.39–6.42). Regarding the levels of mycotoxins and the amount of food consumed, we found correlations between DON and nuts (p = 0.003), total ZEL and cereals (p = 0.01), and aflatoxins with capsicum powder (p = 0.03) and walnuts (p = 0.03). Occupation did not show an association with the presence of mycotoxins in urine. Introduction Mycotoxins are toxic metabolites produced naturally by some species of filamentous fungi, such as Aspergillus, Fusarium, and Penicillium [1,2]. Fungal growth can occur before or after harvest, during storage, or in foods, especially in environments with high humidity and temperatures, and is followed by mycotoxin production. Most mycotoxins are chemically stable and persist after food processing [3,4]. The most investigated mycotoxins are aflatoxin B1 (AFB1), ochratoxin A (OTA), zearalenone (ZEN), and deoxynivalenol (DON), which cause carcinogenic, mutagenic, estrogenic, and gastrointestinal effects in humans and animals [5]. Human exposure to these mycotoxins occurs predominantly through the consumption of contaminated foods [6]. Daily exposure to mycotoxins has been measured in an indirect way by an estimated daily intake (EDI), which is based on consumption and the mycotoxins' concentrations in foods [7], as well as by a direct exposure assessment utilizing a probable daily intake (PDI); the latter is based on biomarker measurements in biological fluids, such as urine and blood, and the excretion rate of the mycotoxins [8]. The direct method is the most accurate and has been used to estimate individual mycotoxin intakes, including all sources of exposure [9]. Urine is usually preferred for population-based studies because its collection is noninvasive; its limitations are the daily variations in urine composition and the fact that mycotoxin excretion rates vary among individuals [6,10].
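As a concrete illustration of the indirect EDI approach described above, the following minimal Python sketch combines per-food consumption with mycotoxin concentrations measured in foods; the function name and all numeric values are hypothetical and are not taken from the studies cited here.

```python
# Minimal sketch (hypothetical values): indirect estimated daily intake (EDI),
# combining food consumption with mycotoxin concentrations measured in foods.

def edi_ng_per_kg_bw(intakes_g_per_day, conc_ng_per_g, body_weight_kg):
    """EDI = sum(consumption_i * concentration_i) / body weight, in ng/kg bw/day."""
    total_ng_per_day = sum(g * c for g, c in zip(intakes_g_per_day, conc_ng_per_g))
    return total_ng_per_day / body_weight_kg

# Example: 150 g/day of bread at 100 ng/g DON plus 30 g/day of nuts at 20 ng/g,
# for a 70 kg adult.
print(edi_ng_per_kg_bw([150, 30], [100, 20], 70))  # ~222.9 ng/kg bw/day
```

The biomarker-based PDI used later in this study replaces the food-side terms with a urinary concentration and an excretion rate.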
Urinary biomarkers are better indicators of short-term variations in exposure, as blood biomarkers may not reflect these because of the protein-binding properties of some mycotoxins (e.g., aflatoxin and OTA) [8,11]. After exposure estimation, a further health risk assessment is usually conducted in terms of the tolerable daily intake (TDI) or a Margin of Exposure (MoE) approach. For the risk characterization of non-carcinogenic toxins, a Hazard Quotient (HQ) is usually assessed, defined as the ratio between the calculated PDI and the reference TDI determined by toxicological studies in sensitive laboratory species [17]. The MoE is the ratio of the benchmark dose lower bound (BMDL) of a dose-response curve, usually at the 10% effect level (BMDL10), to the estimated PDI [18]. The MoE approach is usually used for carcinogenic toxins, such as aflatoxin. Recent studies regarding the uncertainty of OTA and kidney carcinogenicity estimated that the previous tolerable weekly intake (TWI) of 120 ng/kg body weight (bw) was no longer valid and that an MoE approach needs to be applied for risk characterization [19]. In Chile, previous research showed that OTA and aflatoxins were the most prevalent mycotoxins in food, especially in imported spices and capsicum (chili and paprika powder). An indirect exposure assessment based on these data estimated that aflatoxin contamination of cereals, dairy, and nuts should be considered a health concern [20]. Additionally, population-based studies showed OTA in plasma, urine, and breast milk [21][22][23], and aflatoxin in plasma [24]. The latter case-control study reported aflatoxin-albumin adducts in serum associated with the risk of gallbladder cancer (GBC) (Odds Ratio (OR): 13.2; 95% Confidence Interval (CI95) 4.3-47.9), and higher consumption of capsicum among GBC patients than among the controls (OR: 13.2; CI95 4.3-47.9). The higher GBC risk associated with capsicum consumption had been previously noted by Serra et al. [25]. Thus, it has been proposed that capsicum contaminated with aflatoxin could in part explain the high rates of GBC in the Chilean population, which are especially high in areas consuming large quantities of capsicum [26]. Even though aflatoxins and OTA are the most sampled and prevalent mycotoxins in food in Chile [20], there is no current information regarding the direct estimation of exposure based on measurements of different mycotoxins in the population in the form of biomonitoring studies. It is hypothesized that rural populations may experience higher mycotoxin exposure than urban populations due to occupational exposures [27]. Currently, exposures in Chile have not been associated with specific foods or occupations. Thus, the aim of this study was to assess exposure to AFB1, OTA, ZEN, and DON in residents of an agricultural county with high rurality by measuring mycotoxin biomarkers in urine, and to explore the mycotoxins' associations with specific food items and occupational exposures. For this aim, we nested our study in the agricultural county of Molina, in the population-based Maule Cohort, MAUCO [28,29]. Description of the Population The description of the participants is shown in Table 1. The 172 participants had an average age of 57 (±9.3) years, and no differences were observed between the sexes in age, education, ethnicity, and current habits regarding smoking, drinking alcohol, and physical activity. The only significant difference was the higher proportion of men in agricultural work (p = 0.001).
Consumption in grams per day (g/day), derived from the habitual consumption and the 24 h recall reported by the participants, is shown in Table 2. We did not find differences between the sexes regarding food consumption, with the exceptions of a higher beer consumption among men and a higher ginger consumption among women (Table 2). Occurrence and Concentration of Mycotoxins in Urine The prevalence and mean (standard deviation (SD)) concentration of mycotoxins in urine, along with the limits of detection (LOD) and quantification (LOQ) of the method, are shown in Table 3. The analysis of urine samples showed that DON was the most frequently occurring mycotoxin (73%), with 63% of the samples above the LOQ of the method. It was followed by AFB1 (8%), α-ZEL (8%), and β-ZEL (7.5%). The highest creatinine-adjusted concentrations were for DON (64.6, SD: 205.8 ng/mg creat), β-ZEL (21.9, SD: 57.8 ng/mg creat), and α-ZEL (19.1, SD: 25.2 ng/mg creat), and were much lower for AFB1 (0.3, SD: 0.2 ng/mg creat). The metabolites OTα and DOM-1 were not found in urine. We did not find differences between the sexes in relation to mycotoxin prevalence or levels. The age groups of 45-54 years and 65-75 years had significantly more DON than the 35-44 and 55-64 age groups (p = 0.03). Moreover, ten subjects had a co-occurrence of aflatoxins and DON in their urine; ten subjects had ZEN metabolites along with DON; and one participant showed the presence of DON, aflatoxins, and ZEL. Table 3. Prevalence and concentrations of detected mycotoxins found in the urine of the 172 participants of this study, along with the limit of detection (LOD) and quantification (LOQ) of the method. Association Between Food Consumption and Mycotoxins in Urine Considering food items of special interest, we found that participants who consumed coffee and pepper the day before had a significantly greater presence of DON (OR: 2.3, CI95 1.17-4.96) and total ZEL (OR: 14.7, CI95 3.1-81.0), respectively, in their urine. Furthermore, a protective association between dairy and DON (OR: 0.42, CI95 0.23-0.75) was observed. Regarding habitual consumption of the selected food items, we observed a protective association between nuts and DON (OR: 0.37, CI95 0.15-0.83), as well as between wine and total ZEL (OR: 0.29, CI95 0.11-0.69). Additionally, borderline associations between the habitual consumption of beer and DON (OR: 1.99, CI95 1.02-4.13), as well as between coffee and aflatoxins (OR: 2.52, CI95 1.04-6.12), were observed. Crude ORs for all the selected food items are shown in Supplementary Table S1. These associations remained after adjusting for sex and age. In the case of the association between beer and DON, the adjusted OR was significant (OR: 2.89, CI95 1.39-6.42), and the risk was associated with men and younger participants (<54 years old). Regarding the levels of DON, aflatoxins, and ZEN metabolites and the amount (g/day) of food consumed, regression models revealed significant correlations between DON and nuts (R2: 0.07, p = 0.003) and between total ZEL and cereals (R2: 0.04, p = 0.01), as well as between aflatoxins and capsicum powder (R2: 0.18, p = 0.03) and walnuts (R2: 0.44, p = 0.03). Regressions for the selected food items are shown in Supplementary Table S2. Association Between Occupation and Mycotoxins in Urine No significant associations between food handling or agricultural work and the presence of mycotoxins in urine were found.
Of the 19 food handlers, only two were mill workers or bakers, and both had DON in their urine, with creatinine-adjusted levels of 30 and 66 ng/mg. We found 11 participants with very high DON levels (>100 ng/mg creatinine-adjusted). Among them, nine (82%) were women, and they differed in neither food consumption nor occupation. Dietary Exposure and Risk Assessment Exposure was calculated for the participants with detectable mycotoxins using the probable daily intake (PDI). The mean (SD) PDI for DON was 2532 (6921) ng/kg bw creatinine-adjusted; a mean of 5997 (9556) ng/kg bw creatinine-adjusted was estimated for ZEN; and a mean of 1.1 (2.3) ng/kg bw creatinine-adjusted was calculated for AFB1. Compared with the tolerable daily intake (TDI) of DON (1 µg/kg bw/day), the exposure of 55% of the participants (66/125) represented a public health concern (Figure 1). A 62-year-old woman had a PDI as high as 68,860 ng/kg bw creatinine-adjusted. Regarding ZEN, when compared with the TDI (0.25 µg/kg bw/day), and aflatoxins, when compared with the reference dose used in the Margin of Exposure (MoE) approach for carcinogenic effects (0.4 µg/kg bw per day), the exposure of 100% of the participants who tested positive for zearalenone metabolites (18/18) and for aflatoxins (16/16) in their urine represented a potential health concern (for more details, see Supplementary Material Table S3). Two women had a PDI for ZEN above 40,000 ng/kg bw creatinine-adjusted. [Figure 1 caption: The red line shows an HQ of 1. If the HQ > 1, the exposure could be of health concern. HQ > 5 outliers (n = 6) are not shown in the figure.] Discussion High prevalence and concentrations of DON, low prevalence and high concentrations of ZEN metabolites, and low prevalence and concentrations of aflatoxin were observed. Compared with other similar studies, while the prevalence was lower than reported in Europe [30][31][32][33][34], Brazil [35], the USA [36], Haiti [32], and South Africa [37], the levels of DON and ZEN metabolites in Chile were higher than those reported elsewhere (Table 4). A study of pregnant women in Croatia found high levels of DON and its conjugates DON-3-GlcA and DON-15-GlcA [38], reporting total DON levels similar to our study. In our case, the samples were hydrolyzed, which means that the DON aglycone and DON conjugates (glucuronide or sulfate) were measured simultaneously. Table 4. Comparative prevalence (%) and mean levels (ng/L) of aflatoxins, ochratoxin A (OTA), deoxynivalenol (DON), and zearalenone (ZEN), as well as their metabolites α-zearalenol (α-ZEL) and β-zearalenol (β-ZEL), analyzed in adults' urine in different countries. The MAUCO participants were residents of Molina county, in Central Chile, an area characterized by a temperate climate with a high range of variation in temperature and rainfall regimes [39]; this region is especially prone to Fusarium development and the production of zearalenone, fumonisins, and trichothecenes, such as DON [40]. A food item that was not considered in the food surveys because of its universal consumption was bread. According to the National Food Consumption Survey (ENCA), 99.1% (CI95 98.8-99.4%) of the Chilean population eats bread [41], with a median consumption of 151.9 (85.2-233.7) g/day [41]. The bread consumed in Chile is mainly wheat-based, with 63% local production [42]. During the summer of 2017 (the year of the urine sampling), Chile experienced its second hottest summer in more than 50 years.
The Annual State of the Climate Report of the USA [43] highlighted the unusual weather conditions in the country, with extended periods of drought and extreme heatwaves, followed by episodes of increased rainfall and the heaviest snowfall in nearly 100 years. Curicó, a city near Molina, also broke its record for maximum temperature. These exceptional conditions may have produced local outbreaks of Fusarium and could explain the high levels of DON observed in this study, as the accumulation of trichothecene mycotoxins in kernels is strongly weather dependent [44]. The DON levels observed in this biomonitoring do not correlate with the levels reported by the Chilean Mycotoxin Surveillance Program, in which all samples analyzed for DON (mainly wheat flour) were below the regulatory limit [20]. This means that sampling programs must be encouraged to identify the dietary contribution of DON. In this regard, significant associations were observed between beer consumption and DON. Beer is often found to be contaminated with DON, ZEN, and other mycotoxins prevalent in cereals [45][46][47]. There has been an important and sustained increase in the consumption of beer in Chile in recent years, where consumption has gone from 25 to 44 L/year per capita [48]. These results suggest that beer must be incorporated into the Chilean Mycotoxin Surveillance Program, and further analysis is needed in this matrix. Coffee consumption was also associated with DON prevalence. This association can be explained by the fact that a large percentage of the coffee consumed in Chile is instant (95%) [49], of which an unknown percentage may be cereal coffee, such as barley coffee. Currently, coffee is analyzed only for OTA [20], so exploring other mycotoxins in this food item is necessary. Although Fusarium toxins have been found in nuts [50,51], more research is needed to determine whether the correlation between DON levels and nut consumption reflects possible cross-contamination with grains or consumption together with cereal-based foods. On the other hand, breakfast cereal consumption, which could also explain these high levels, is lower than 27% in adults according to ENCA [41], which is in accordance with the consumption reported in this study. Another possible explanation for the high DON levels seen in this study is imported processed products based on cereals that were not detected by the Surveillance Program due to the low number of analyses made for this mycotoxin (approx. 25 analyses per year) [20]. We explored both 24 h recall and habitual consumption using a validated Mediterranean diet survey for food items associated with mycotoxin exposure. The association between capsicum powder and aflatoxin was expected, as it is the most mycotoxin-contaminated food item in Chile [20]. Capsicum is prone to aflatoxin contamination [52][53][54], but is not usually associated with aflatoxin exposure [15]. Other food items found to be associated with mycotoxins in urine were walnuts with aflatoxins, while ZEN was associated with cereals; both associations have been found in previous studies [15,[55][56][57]. However, due to the low number of positive samples, the correlations made for aflatoxins and ZEL may not be predictive. Regarding the potential health effects of DON in this population, acute effects, such as gastroenteritis, and chronic health effects, such as altered nutritional efficiency, weight loss, and anorexia [15], must be studied given the exceptionally high exposures estimated.
Furthermore, ZEN exposures should be continually monitored in this population because of their estrogenic effects [5,58]. For assessing chronic effects, prospective studies regarding the possible association between mycotoxins and chronic digestive diseases in MAUCO must be designed [59]. The Maule region has one of the highest cancer mortality rates in Chile, especially when it comes to digestive cancers, such as gastric, gallbladder, and esophageal cancer [60]. We did not find an association between occupation and mycotoxins in urine. This could be explained by the fact that most agricultural participants reported working in open environments (fruit, cereal, and vegetable harvests), whereas occupational exposures have been mainly associated with airborne mycotoxins due to poor ventilation and inappropriate protective clothing [61][62][63]. In the case of the 19 food handlers in this study, the majority worked as cooks in kitchens or in delivery roles. Only two of them worked in a bakery or mill, and both presented DON in their urine. In this regard, future studies should focus on mill and bakery workers to assess occupational exposure. Although this study presents new information about mycotoxin exposure, the results must be interpreted with caution as they represent only a small part of the population. Compared with the MAUCO cohort [28], although there were no differences in the proportions of men and women between the two studies, the women in our sample were significantly older than in the MAUCO population. This could limit the representativeness of the results. Another limitation was that dietary intake and other descriptive information of the participants were self-reported, so misreporting could not be excluded. Additionally, habitual consumption was obtained from a Mediterranean diet survey, which did not include all of the most mycotoxin-prevalent foods. Due to the long half-lives of aflatoxins and OTA, urine biomarkers of these mycotoxins (especially for once-off urine samples, such as in this study) may not be the most effective method of choice [10,15]. Furthermore, AFM1, the main biomarker of AFB1, is excreted within the first 1-4 days, and unmetabolized aflatoxin B1 is excreted in the first day [61]. This fact could explain the higher prevalence of AFB1 in this sample. For further and more accurate analyses, investigating biomarkers such as aflatoxin-albumin adducts and OTA in serum should be considered [6,8]. Additionally, the LODs of AFM1 and OTA in this method were higher than those usually reported, so the results may be underestimated. Despite these limitations, this is the first study to report the simultaneous presence of aflatoxins, OTA, DON, and ZEN in the urine of members of the general Chilean population. As such, this study has presented new information regarding the associations between direct mycotoxin exposure and food. Conclusions This study presents new information about mycotoxin exposure in Chile. High prevalence and concentrations of DON, low prevalence and high concentrations of ZEN metabolites, and low prevalence and concentrations of aflatoxin were observed in the urine of 172 participants. The risk assessment estimations based on those levels translated into a high exposure risk for DON, AFB1, and ZEN in the participants. The significant associations observed between mycotoxins and food consumption were the following: DON with nuts, coffee, and beer; ZEN metabolites with pepper and cereals; and aflatoxins with capsicum powder and walnuts.
Further studies must address the following: (i) continuous biomonitoring of these mycotoxins to assess whether these levels were due to exceptional climate conditions or are habitual given local consumption patterns; (ii) prospective population-based studies for assessing the health effects of the high exposures observed in this study; (iii) sampling and analysis of food items not usually considered in the surveillance program, e.g., beer and bread; and (iv) assessments of occupational exposure, especially in relation to mill workers and bakers. Study Population The study subjects were a sample (n = 172) of the Maule Cohort (MAUCO), in Molina, Región del Maule, Chile (latitude 35°21'12.13" S, longitude 70°54'34.34" W). A priori power analyses indicated that 160 subjects of the 8000 MAUCO participants at the time would be sufficient for significant results, albeit under the assumption of 90% prevalence (as in the case of OTA in [21][22][23]), with 80% power and 95% confidence. Participants were selected by convenience sampling, which included around 50% agricultural workers and all participants working in food handling who had started their 2-year follow-up of MAUCO in 2017. MAUCO is the first prospective population-based cohort of cardiovascular disease and cancer in Chile [28,29]. All residents of Molina County aged 38 to 74 years who were able to autonomously consent to join the cohort and who did not have a late-stage disease were eligible to enter the cohort. Enrollment was initiated in January 2015, and participants will be followed for at least 10 years. The methods and baseline findings have been reported elsewhere [28,29]. According to Berdegué et al. [62], Molina belongs to Group 2 of the rural counties, a group that represents 44% of the rural population of Chile. Diet and Occupation Assessment In 2015, MAUCO's participants answered a habitual Mediterranean diet consumption survey as part of the health and lifestyle questionnaire. The question for each specific food was as follows: "On average, in the last 12 months, how many servings of ___ did you consume per week?". The possible answers were: no consumption, less than a portion, 1 portion, or 2 or more portions. A sample of the first morning urine of the participants was taken as part of their 2-year follow-up (from May to November 2017) and was frozen until analysis. A food questionnaire on dietary intake the previous day was administered on the day of the urine sample, along with a question on their current job. Food consumption was assessed from yes/no consumption and the quantity of the consumption in g per day. A serving was defined depending on the type of food, i.e., a cup of cereals (33 g); a cup of dairy (200 mL); a teaspoonful of coffee, tea, or spices (5 g); a cup of maize (100 g); a plate of legumes (190 g); a portion of meat (85 g); and a handful of peanuts or a mix of nuts (30 g), including walnuts, almonds, hazelnuts, cashews, pistachios, and peanuts. For occupation, the participants were asked the following: "Do you have a current job? What kind of activity does the company, business, industry, service, or office where you work do? Does your current job consist in the production, processing, or handling of food (for human and animal consumption)?". Urine Sample Extraction The first morning urine samples were thawed and centrifuged at room temperature for 3-5 min at 5600× g prior to extraction.
Then, the urine concentration was adjusted by the creatinine content, determined with a creatinine kit (Creatinine Respons KIT, Sigma-Aldrich) and measured by spectrophotometry. Three milliliters of urine were mixed with 250 µL of sodium acetate buffer (1.4 M) at pH 5.0 and 40 µL of β-glucuronidase/arylsulfatase enzyme (Sigma-Aldrich), and the mixture was incubated with agitation for 16 h in an oven at 37 °C. Oasis HLB Prime columns, 1 cc (Waters), were conditioned with 100% methanol and then distilled water (Merck). The hydrolysate was then passed through them. The columns were washed twice with distilled water, and the contents were subsequently eluted with 3 mL of 100% acetonitrile. The eluate was evaporated under a gentle nitrogen stream (Thermo Scientific, MA, USA) at 45 °C, then reconstituted with 450 µL of acetonitrile and subsequently filtered with a 0.22 µm Teflon syringe filter. The filtrate was collected in amber vials, and [13C]-labelled internal standards were added for quantification by ultra-high-performance liquid chromatography with tandem mass spectrometric detection (UPLC-MS/MS; Shimadzu, Kyoto, Japan). Chromatographic Conditions The LC-MS analysis was performed with a Shimadzu (Kyoto, Japan) Nexera X2 UHPLC system, which consisted of an LC-30AD pump, a DGU-20A5R degassing unit, an SIL-30AC autosampler, a CTO-20AC column oven, a CBM-20A communication module, an SPD-M20A diode array detector, and an ESI-LCMS-8030 triple quadrupole mass spectrometer. The system was controlled by LabSolution 5.8 software. The separation was achieved on a Phenomenex (Torrance, CA, USA) Kinetex XB-C18 column (100 mm × 4.6 mm, 2.6 µm), with an oven temperature of 30 °C and a flow rate of 0.4 mL/min. Mobile phase A was 0.1% acetic acid in Milli-Q water, and phase B was 0.1% acetic acid in acetonitrile. The injection volume was 20 µL. The gradient started with 10% B for 5 min and then increased to 50% B over 3 min. Eluent B was then raised to 95% until min 15.0, followed by a hold time of 2.0 min and a subsequent 3 min column re-equilibration at 10% B. The triple quadrupole mass spectrometer had an electrospray ionization (ESI) source. In general, the detection parameters were the following: collision gas, argon; nebulizer gas (N2), 3 L/min; desolvation gas (N2), 15 L/min; desolvation line temperature, 250 °C; and heat block temperature, 400 °C. Full-scan spectra were acquired from m/z 100 to 2000 in multiple reaction monitoring working mode. The UPLC-MS/MS parameters for the detection of the targeted mycotoxins are shown in Supplementary Table S4a. Quantification was done by interpolation of the data on the calibration curve for all mycotoxins. Matrix compensation was done via the internal standards (IS). The lowest detection levels for the investigated mycotoxins were set as the lowest level of the calibration curve (Table S4b). The recovery of the method is specified in Table S4c. Exposure Assessment The probable daily intake (PDI) of each mycotoxin was calculated according to Equation (1): PDI = (C × V × 100)/(BW × E), (1) where C is the urinary concentration of the mycotoxin biomarker (ng/mL), V is the mean volume of daily urine production in adults (1500 mL), BW is the individual weight of the participant (kg), and E is the mean urinary excretion rate per mycotoxin (%): 1.5% for aflatoxins [63], 2.5% for OTA [64], 9.4% for ZEN, and 72% for DON [65]. For samples <LOQ, concentrations were assumed to be the mean value between the LOD and LOQ. PDI was also calculated based on the creatinine-adjusted biomarker concentrations (ng/mg of creatinine). 
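As a sanity check on Equation (1), the following minimal Python sketch (not the authors' code; function and variable names are ours) computes a PDI from a urinary concentration using the mean daily urine volume and excretion rates cited above.

```python
# Minimal sketch of Equation (1): PDI = (C * V * 100) / (BW * E), in ng/kg bw/day.
# Excretion rates E (%) as cited in the text: aflatoxins 1.5, OTA 2.5, ZEN 9.4, DON 72.

EXCRETION_RATE_PERCENT = {"aflatoxin": 1.5, "OTA": 2.5, "ZEN": 9.4, "DON": 72.0}
DAILY_URINE_ML = 1500.0  # mean adult daily urine volume used in the paper

def pdi_ng_per_kg_bw(conc_ng_per_ml, body_weight_kg, mycotoxin):
    """Probable daily intake per Equation (1)."""
    e = EXCRETION_RATE_PERCENT[mycotoxin]
    return (conc_ng_per_ml * DAILY_URINE_ML * 100.0) / (body_weight_kg * e)

# Example: the mean urinary DON of 60.7 ng/mL in a (hypothetical) 70 kg adult.
print(round(pdi_ng_per_kg_bw(60.7, 70.0, "DON")))  # ~1807 ng/kg bw/day
```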
Samples that were too diluted (creatinine <0.3 mg/mL) or too concentrated (creatinine >3 mg/mL) were excluded from the estimations [66]. Risk Characterization For aflatoxins, the Margin of Exposure (MoE) was calculated as the ratio between the reference benchmark dose lower bound (BMDL) at the 10% effect level (BMDL10) and the estimated PDI. Following the CONTAM Panel of EFSA, we used a BMDL10 of 0.4 µg/kg bw per day for the incidence of HCC; a calculated MoE below 10,000 implies a high health concern [67]. For DON and ZEN risk characterization, a Hazard Quotient (HQ) was assessed, defined as the ratio between the calculated PDI and the reference tolerable daily intake (TDI) determined by toxicological studies in sensitive laboratory species [17]. A TDI of 1 µg/kg bw per day was used for DON [68] and a TDI of 0.25 µg/kg bw per day was used for ZEN [69]. An HQ > 1 was considered a health concern. Statistical Analysis A descriptive analysis of the available data was performed using R version 4.0.2 (https://www.r-project.org). Differences between the sexes in the sociodemographic characteristics were analyzed using Student's t-test (t.test) and the Kruskal-Wallis test (kruskal.test) for continuous variables, while categorical variables were analyzed with a Chi-squared test (chisq.test). To assess possible associations between mycotoxin levels and food consumption patterns, linear regression was used. On the other hand, a generalized linear model was used to fit the presence of mycotoxins against food consumption, assuming a binomial distribution of the response variable and a logit link function. The p-values for food consumption in the total population, stratified by sex, were calculated using the kruskal.test R function, since the data were not normally distributed. A p-value <0.05 was considered significant. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/toxins13070439/s1. Table S1. Odds ratios (OR) and Confidence Intervals (CI) between the selected food items and the presence of mycotoxins in the urine of the n = 172 participants; Table S2. Calculated regressions of the levels of mycotoxins found in urine and the grams per day consumed by the 172 participants of the study; Table S3. Levels in ng/kg bw creatinine-adjusted, probable daily intake (PDI), and risk estimation by the Hazard Quotient (HQ) or Margin of Exposure (MoE) of mycotoxins in the urine of the participants of the study; Table S4. LC-MS/MS parameters, equations, and recovery for the detection of the targeted mycotoxins.
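The risk characterization described above reduces to two ratios. The following minimal Python sketch (ours, not the authors' code) applies them with the reference values given in the text: HQ = PDI/TDI for DON and ZEN, flagged when HQ > 1, and MoE = BMDL10/PDI for aflatoxins, flagged when MoE < 10,000.

```python
# Minimal sketch: risk characterization from a PDI (all values in ng/kg bw/day).
TDI = {"DON": 1000.0, "ZEN": 250.0}   # tolerable daily intakes cited in the text
BMDL10_AFLATOXIN = 400.0              # 0.4 ug/kg bw per day

def hazard_quotient(pdi, mycotoxin):
    """HQ = PDI / TDI; HQ > 1 flags a potential health concern."""
    return pdi / TDI[mycotoxin]

def margin_of_exposure(pdi):
    """MoE = BMDL10 / PDI; MoE < 10,000 flags a high health concern."""
    return BMDL10_AFLATOXIN / pdi

print(hazard_quotient(2532.0, "DON") > 1)   # True: the mean DON PDI exceeds the TDI
print(margin_of_exposure(1.1) < 10_000)     # True: MoE ~ 364, a high health concern
```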
2021-07-03T06:16:57.039Z
2021-06-25T00:00:00.000
{ "year": 2021, "sha1": "409c8bf1d09d38846372a9caceffb1aacc30499b", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2072-6651/13/7/439/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5a31d66089f1fe9557737aeb6c19dd68428d28f6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
18651254
pes2o/s2orc
v3-fos-license
Coding theory on (h(x), g(y))-extension of Fibonacci p-numbers polynomials In this paper, we define the (h(x), g(y))-extension of the Fibonacci p-numbers. We also define golden (p, h(x), g(y))-proportions, where p (p = 0, 1, 2, 3, · · ·) is an integer and h(x) (> 0), g(y) (> 0) are polynomials with real coefficients. The relations among the code elements of the new Fibonacci matrix G_{p,h,g} (p = 0, 1, 2, 3, · · ·; h(x) > 0; g(y) > 0) coincide with the relations among the code matrix elements for all values of p when h(x) = m (> 0) and g(y) = t (> 0) [8]. Also, the relations among the code matrix elements for h(x) = 1 and g(y) = 1 coincide with the generalized relations among the code matrix elements for Fibonacci coding theory [6]. By suitable selection of the initial terms in the (h(x), g(y))-extension of the Fibonacci p-numbers, the new Fibonacci matrix G_{p,h,g} is applicable for Fibonacci coding/decoding. The error-correcting ability of this method increases as p increases, but it is independent of h(x) and g(y). However, h(x) and g(y) being polynomials improves the cryptographic protection, and the complexity of this method increases as the degrees of the polynomials h(x) and g(y) increase. We also find a relation among the golden (p, h(x), g(y))-proportion, the golden (p, h(x))-proportion and the golden p-proportion. Introduction In the 13th century, the Italian mathematician Leonardo of Pisa (Fibonacci) discovered the Fibonacci numbers. First of all, the Fibonacci numbers anticipated the method of recurrence relations, one of the most powerful methods of combinatorial analysis. Later the Fibonacci numbers were found in many natural objects and phenomena. Nowadays, Fibonacci numbers [2,10,11] are used in the sciences, the arts and, more recently, in combinatorial design theory, high energy physics, and information and coding theory [5,7]. The Fibonacci numbers F_n (n = 0, ±1, ±2, ±3, . . .) satisfy the recurrence relation F_n = F_{n-1} + F_{n-2} (1) with initial terms F_1 = F_2 = 1. If we take the ratio of two adjacent numbers and direct this ratio towards infinity, we derive the following unexpected result: lim_{n→∞} F_n/F_{n-1} = µ = (1 + √5)/2, (2) where µ is the golden mean. Stakhov [1] introduced the Fibonacci p-numbers given by the following recurrence relation: F_p(n) = F_p(n-1) + F_p(n-p-1) (3) with initial terms F_p(1) = F_p(2) = · · · = F_p(p+1) = 1, where p = 0, 1, 2, 3, · · ·. The Fibonacci p-numbers can be represented by binomial coefficients as follows: F_p(n+1) = Σ_{k≥0} (n−kp)C_k, (4) where the binomial coefficients (n−kp)C_k = 0 for the case k > n − kp. For p = 0, equation (4) reduces to the well-known formula of combinatorial analysis: 2^n = nC_0 + nC_1 + · · · + nC_n. (5) In fact, when p = 1 we obtain the classical Fibonacci numbers, F_1(n) = F_n. For calculation of the Fibonacci p-numbers for all values of n, we consider the recurrence relation F_p(n − p − 1) = F_p(n) − F_p(n − 1) (6) with initial terms F_p(1) = F_p(2) = · · · = F_p(p + 1) = 1. (7) Considering (7) as initial terms, then from (6) with n = p + 1 we have F_p(0) = F_p(p + 1) − F_p(p); since F_p(p + 1) = F_p(p) = 1, therefore F_p(0) = 0. Continuing this process by writing n = p, p − 1, · · · , 2 in (6), we get F_p(−1) = F_p(−2) = · · · = F_p(1 − p) = 0. So, we summarize the above in a table of F_p(n) values. From (6) and (7) we obtain a general formula (10) for F_p(n), valid for all integers n. For the case p = 0, formula (10) reduces to the well-known formula for the binary numbers: F_0(n) = 2^{n−1}. For the case p = 1, the classical Fibonacci numbers F_n satisfy the following formulae: F_n = F_{n−1} + F_{n−2} and F_{−n} = (−1)^{n+1} F_n. We have the characteristic equation x^{p+1} = x^p + 1. (12) The only positive root µ_p of (12) is called the golden p-proportion. The golden p-proportion possesses the following remarkable property: µ_p^n = µ_p^{n−1} + µ_p^{n−p−1} = µ_p · µ_p^{n−1}. (13) Stakhov [1] proved that the golden p-proportion represents a new class of irrational numbers which express some previously unknown mathematical properties of Pascal's triangle. Clearly, such mathematical results are of fundamental importance for the development of modern sciences.
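To make the recurrences concrete, here is a small Python sketch (ours, for illustration only, not from the paper) that generates Fibonacci p-numbers from (3) and approximates the golden p-proportion, the positive root of (12), by fixed-point iteration.

```python
# Fibonacci p-numbers: F_p(n) = F_p(n-1) + F_p(n-p-1), F_p(1) = ... = F_p(p+1) = 1.

def fibonacci_p(n: int, p: int) -> int:
    """n-th Fibonacci p-number (n >= 1)."""
    f = [1] * (p + 1)                  # F_p(1), ..., F_p(p+1)
    for i in range(p + 1, n):
        f.append(f[-1] + f[-(p + 1)])  # F_p(i+1) = F_p(i) + F_p(i-p)
    return f[n - 1]

def golden_p_proportion(p: int, iters: int = 200) -> float:
    """Positive root of x^(p+1) = x^p + 1 via the iteration x = (x^p + 1)^(1/(p+1))."""
    x = 2.0
    for _ in range(iters):
        x = (x ** p + 1.0) ** (1.0 / (p + 1))
    return x

print([fibonacci_p(n, p=1) for n in range(1, 9)])  # [1, 1, 2, 3, 5, 8, 13, 21]
print(golden_p_proportion(1))                      # ~1.618..., the golden mean
print(golden_p_proportion(2))                      # ~1.4656, the golden 2-proportion
```

For p = 1 the sequence is the classical Fibonacci sequence and the root is the golden mean, matching (1) and (2) above.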
The generalized Fibonacci numbers [12,13,14] are based on the relation F(n) = mF(n−1) + F(n−2) with initial terms F(0) = 0, F(1) = 1, where m (> 0) and n = 0, ±1, ±2, ±3, . . .. The m-extension of the Fibonacci p-numbers [15] is defined by the recurrence relation F_{p,m}(n) = mF_{p,m}(n−1) + F_{p,m}(n−p−1) with initial terms F_{p,m}(1) = a_1, F_{p,m}(2) = a_2, · · · , F_{p,m}(p+1) = a_{p+1}, where p (≥ 0) is an integer, m (> 0), n > p + 1, and a_1, a_2, a_3, · · · , a_{p+1} are arbitrary real or complex numbers. The Fibonacci polynomials [4] are defined by the recurrence relation F_n(x) = xF_{n−1}(x) + F_{n−2}(x) with initial terms F_1(x) = 1, F_2(x) = x. The h(x)-Fibonacci polynomials [3] (where h(x) is a polynomial with real coefficients) are defined by the recurrence relation F_{h,n+1}(x) = h(x)F_{h,n}(x) + F_{h,n−1}(x) with initial terms F_{h,0}(x) = 0, F_{h,1}(x) = 1. Connection among the Golden (p, h(x), g(y))-proportion, the Golden (p, h(x))-proportion and the Golden p-proportion The characteristic equation of the (h(x), g(y))-extension of the Fibonacci p-numbers is u^{p+1} = h(x)u^p + g(y). (26) Equation (26) has only one positive root u_1 = µ_{p,h(x),g(y)}, called the golden (p, h(x), g(y))-proportion; for g(y) = 1 it reduces to the golden (p, h(x))-proportion µ_{p,h(x)}. The characteristic equation of the Fibonacci p-numbers is u^{p+1} = u^p + 1. (27) Equation (27) has only one positive root u_2 = µ_p, called the golden p-proportion. Fibonacci G_{p,h,g} matrix In this paper, we define a new Fibonacci G_{p,h,g} matrix of order (p + 1) on the (h(x), g(y))-extension of the Fibonacci p-numbers, where p (≥ 0) is an integer and h(x) (> 0), g(y) (> 0). The initial terms c_1, c_2, c_3, · · · , c_{p+1} are chosen in such a manner that Det G_{p,h,g} = (−1)^p, (29) which is independent of h(x) and g(y), and, for the nth power of G_{p,h,g}, Det G^n_{p,h,g} = (−1)^{np}, (30) which is likewise independent of h(x) and g(y). We choose c_1, c_2, c_3, · · · , c_{p+1} in such a manner that the matrix G_{p,h,g} and the nth power of G_{p,h,g} satisfy (29) and (30), respectively. Then the matrix G_{p,h,g} is applicable for Fibonacci coding/decoding. When g(y) = 1 and c_1 = 1, c_2 = h(x), c_3 = h^2(x), · · · , c_{p+1} = h^p(x), then (29) and (30) are readily satisfied [9]. Conclusion In this paper, we define the (h(x), g(y))-extension of the Fibonacci p-numbers and the golden (p, h(x), g(y))-proportion. We also establish a relation among the golden (p, h(x), g(y))-proportion, the golden (p, h(x))-proportion and the golden p-proportion. The research can be developed further by finding suitable initial terms c_1, c_2, c_3, · · · , c_{p+1} such that the G_{p,h,g} matrix can be applied to the Fibonacci coding/decoding method. The error-correcting ability of this method increases as p increases, but it is independent of h(x) and g(y), and for large values of p it approaches 100%. For g(y) = 1, the properties of the G_{p,h,g} and G^n_{p,h,g} matrices coincide with the properties of the G_{p,h} and G^n_{p,h} matrices, respectively [9]. The relations among the code matrix elements for h(x) = 1 and g(y) = 1 coincide with the generalized relations among the code matrix elements for Fibonacci coding theory [6].
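The coding/decoding idea can be illustrated in the simplest case the conclusion mentions, p = 1 with h(x) = g(y) = 1, where the matrix reduces to the classical Fibonacci Q-matrix. The Python sketch below is ours, not from the paper; the general G_{p,h,g} construction is analogous but is not reproduced here. A 2×2 message matrix M is encoded as E = M·Q^n and decoded with Q^(−n), and the determinant relation det E = det M · (−1)^n serves as the check used for error detection.

```python
# Classical Fibonacci Q-matrix coding (p = 1, h = g = 1): Q = [[1, 1], [1, 0]],
# det Q^n = (-1)^n, and Q^n has Fibonacci-number entries.

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def mat_pow(m, n):
    r = [[1, 0], [0, 1]]
    for _ in range(n):
        r = mat_mul(r, m)
    return r

Q = [[1, 1], [1, 0]]
n = 5
Qn = mat_pow(Q, n)            # [[F6, F5], [F5, F4]] = [[8, 5], [5, 3]]
M = [[7, 2], [4, 3]]          # message matrix
E = mat_mul(M, Qn)            # encoding

det = lambda m: m[0][0] * m[1][1] - m[0][1] * m[1][0]
assert det(E) == det(M) * (-1) ** n   # check relation for error detection

# Decoding uses the integer inverse Q^(-1) = [[0, 1], [1, -1]] (valid since det Q = -1).
Q_inv_n = mat_pow([[0, 1], [1, -1]], n)
assert mat_mul(E, Q_inv_n) == M
```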
2019-04-20T13:14:43.540Z
2014-01-01T00:00:00.000
{ "year": 2014, "sha1": "ba8bcc8020fab8c2e375d34571b11d805f47a14d", "oa_license": "CCBY", "oa_url": "http://www.hrpub.org/download/20131215/UJCMJ2-12401197.pdf", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "c5f03cd7d2d6043750fd4c9fe6e709474c8f476a", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
45782396
pes2o/s2orc
v3-fos-license
Expression of liver cancer associated gene HCCA3 AIM: To study and clone a novel liver cancer related gene, and to explore the molecular basis of liver cancer genesis. METHODS: Using mRNA differential display polymerase chain reaction (DDPCR), we investigated the differences in mRNA between human hepatocellular carcinoma (HCC) and paired surrounding liver tissues, and obtained a gene probe. By screening a human placenta cDNA library and by genomic homologous extension, we obtained a full-length cDNA named HCCA3. We analyzed the expression of this novel gene in 42 pairs of HCC and surrounding liver tissues, and its distribution in normal human tissues, by means of Northern blot assay. RESULTS: The full-length cDNA of the liver cancer associated gene HCCA3 has been submitted to the GenBank nucleotide sequence database (Accession No. AF276707). The positive expression rate of this gene was 78.6% (33/42) in HCC tissues, and the clinicopathological data showed that HCCA3 was closely associated with invasion of the tumor capsule (P = 0.023) and adjacent small metastatic satellite nodule lesions (P = 0.041). HCCA3 was widely distributed in normal human tissues, being intensively expressed in lung, brain and colon tissues, while lowly expressed in liver tissues. CONCLUSION: A novel full-length cDNA that is differentially and highly expressed in liver cancer tissues was cloned. Its high expression was closely related to tumor invasiveness and metastasis, and may be a late genetic change in HCC genesis. INTRODUCTION Primary hepatocellular carcinoma (HCC) is one of the most common fatal malignant tumors in China. According to the statistics of our country, primary liver cancer claims 20.40 lives per 100 000 people annually, with 19.98 per 100 000 in cities and 23.59 per 100 000 in rural areas, ranking as the 2nd and the 1st leading cause of cancer death, respectively. Of all the newly enrolled cases in the world each year, 45% are found in the mainland of China. In the southeast areas of high incidence, the situation is even worse, with tumors tending to occur in a younger age group. The molecular events underlying HCC development are very complex, and HCC has proved to be a genetically heterogeneous neoplasm [27][28][29][30]. But to date, the identified genes have not yet fully disclosed the mechanisms of HCC [31][32][33][34][35][36][37][38]. In an attempt to identify HCC susceptibility genes, the differential display method was employed in this study. In the analysis of genes with altered expression between HCC tissues and their nontumor counterparts, we isolated a novel gene named HCCA3 with a full-length cDNA. Patients and specimens Primary HCC and their surrounding liver tissues were obtained from 42 patients who received surgical resection at the Eastern Hepatobiliary Surgical Hospital of the Second Military Medical University, Shanghai, China. These included 41 male and 1 female patients with a median age of 49 years (range 24-72 years, mean age 49.8 years). Thirty-five (83.3%) patients had serological evidence of hepatitis B virus infection. The serum AFP level was above 25 µg·L−1 in 23 cases (54.8%). The tumor size was smaller than 5 cm (small HCC) in 13 patients and larger than 5 cm in 29. Histologically, 40 patients (95.2%) were complicated with cirrhosis. There were 7 well differentiated (Edmondson's grades I and II) and 35 poorly differentiated (Edmondson's grades III and IV) HCCs.
Macroscopic portal vein tumor spread was found in 3 patients, and microscopic vascular cancer thrombi in the surrounding liver were found in 26. Gross and microscopic intrahepatic adjacent small satellite nodule lesions were found in 28, and tumor capsule invasion by liver cancer in 32. Adult normal tissues were obtained from a healthy young man who died in a traffic accident. Methods RNA extraction and differential display Total cytoplasmic RNA was extracted by the acid guanidinium thiocyanate-phenol-chloroform extraction method [39,40]. The differential display method was performed as described previously [39][40][41]42]. Amplification consisted of initial denaturation at 94 °C for 4 min, followed by 40 reaction cycles (60 s at 93 °C, 2 min at 40 °C, and 90 s at 70 °C) and a final cycle at 72 °C for 10 min. PCR fragments were then reamplified with the same primers, separated on a 16 g·L−1 agarose gel, purified by a Qiaex II gel extraction kit, and subcloned into the pGEM-T vector using standard molecular cloning techniques. Library screening and DNA sequencing The fragment contained in the PCR clone (350 base pairs in length) from DDPCR served as a probe to screen a placental cDNA library, using the standard filter hybridization techniques described [43,44]. At the end of the third screening, we obtained several plaques containing the target DNA sequence and sequenced them on a DNA automated sequencing system. To obtain the cDNA in full length, genomic homologous screening was used, comparing the cDNA sequence obtained by screening the library with the NCBI GenBank EST database. We used PCR assay and sequencing to confirm the correctness of the cDNA sequence. Northern blot assay For Northern blot analysis [37,40,44], 40 µg of total RNA was denatured, loaded on a 15 g·L−1 agarose gel and run at 5 V·cm−1 for about 3 h. The gels were then transferred to nitrocellulose. Hybridization of the filters was performed using a specific probe of the HCCA3 cDNA fragment (1125 bp in length) obtained from screening the library. The probe was labeled with 50 µCi α-32P-dATP using a Prime-a-Gene labeling kit according to the given protocol. After prehybridization at 42 °C for 3 h, the membranes were hybridized in the same solution containing the labeled probe for 6 h at the same temperature, and exposed to X-ray film for 10 d at −70 °C. In order to calibrate the relative quantities of loaded RNAs, the blot was rehybridized with a cDNA probe of the β-actin gene. Statistical analysis The χ2 test or Fisher's exact test was used to examine the differences and relationships among groups of patients classified by HCCA3 expression. Differences at P < 0.05 were judged to be statistically significant. Differential display analysis and library screening By DDPCR, we found a differentially expressed gene fragment that was exclusively present in the liver cancer lane. This fragment (350 bp in length) was then subcloned into the pGEM-T vector and served as a specific probe to screen a human placental cDNA library. By screening the library, we obtained a gene fragment of 1125 bp in length, which was nearly 600 bp shorter than the transcript indicated by the location of the Northern hybridization signal. We also obtained the full-length cDNA of 1706 bp, which was in good agreement with the size of the mRNA species observed by Northern blotting, through genomic homologous extension along with the EST sequence (GenBank Accession No. AP001077, 197663 bp in length) from the NCBI GenBank EST database. The sequence of 1706 bp in length was verified by PCR assay and sequencing.
It was named HCCA3 (HCC associated gene 3, also named STW-2) and submitted to the EMBL/GenBank/DDBJ nucleotide sequence databases (Accession No. AF276707). Sequence characteristics of the full-length HCCA3 HCCA3 contains a consensus initiation codon [45,46]. Furthermore, there was no nuclear targeting sequence. The schematic presentation of the HCCA3 cDNA is shown in Figure 1. Normal tissue distribution of HCCA3 mRNA Northern blot analysis showed that HCCA3 mRNA appears to be widely expressed in normal human tissues (Figure 2). The HCCA3 gene was particularly highly expressed in human lung, brain and colon, moderately expressed in muscle, stomach, spleen and heart tissues, and weakly expressed in small intestine, pancreas, and liver tissues. Among the tissues with positive signals, HCCA3 mRNA was observed as a transcript of approximately 1.7 kb, which corresponded well to the size of the cloned cDNA. Figure 2 Northern blot analysis of HCCA3 mRNA in human adult normal tissues. The upper panel shows that HCCA3 mRNA was highly expressed in human lung, brain and colon, moderately expressed in muscle, stomach, spleen and heart tissues, and weakly expressed in small intestine, pancreas, and liver tissues. Equal amounts of total RNA were loaded, as indicated by rehybridizing with a β-actin cDNA probe (lower panel). HCCA3 mRNA expression in HCC and its clinical significance HCCA3 mRNA expression was detected in 78.6% (33/42) of patients and was intensively expressed in HCC tissues (Figure 3). DISCUSSION The differential display polymerase chain reaction (DDPCR) method is a useful tool for detecting and characterizing altered gene expression in eukaryotic cells [39,41,47,48]. Using this technique, we have successfully isolated a gene named HCCA3 with a transcript of 1706 bp. A consensus initiation codon is at 681 bp, and the flanking sequence of the predicted initiating methionine coincides with the Kozak criterion [45,46,49], because the nucleotides at the −3 and +4 sites of the start codon are purines, and there is one stop codon in the upstream sequence. The length of the HCCA3 cDNA agrees with the size of the mRNA determined by Northern blot analysis. No significant homology with known genes at the nucleotide or amino acid level was found [47][48][49][50]. No signal peptide was found by searching with the GCG and PC/GENE software, suggesting that HCCA3 is not a secreted protein [51,52]. The protein also has no transmembrane domain or nuclear targeting sequence, indicating that it may not be located on the cell membrane or within the nucleus. The putative HCCA3 protein revealed several phosphorylation sites for protein kinase C and casein kinase II. It is generally accepted that protein phosphorylation-dephosphorylation plays a role in the regulation of essentially all cellular functions, and there is evidence that deregulation of protein phosphorylation is involved in several human cancers [22,53,54]. HCCA2, as a possible oncoprotein, may be functionally abnormal, and phosphorylative deregulation may be a mechanism. The function of the HCCA3 protein may also largely depend on its phosphorylation status [22,54]. However, this needs further study. Studies on the expression of HCCA3 can reveal its potential biological significance [22,36,37,55]. By Northern blot analysis, we noted that HCCA3 mRNA was expressed widely in normal human tissues, indicating that HCCA3 is a normal cellular gene which may be involved in the physiological processes of the tissues in which it is distributed.
Although HCCA3 mRNA was expressed at low levels in normal liver tissues, it was significantly expressed in HCC tissues, showing that the high expression of HCCA3 might participate in the process of liver oncogenesis [36,37]. Because HCCA3 is a normal cellular gene, increased expression of HCCA3 in HCC implies a genetic abnormality, and the gene may act as an oncogene in the development of HCC. In 42 patients with HCC, HCCA3 mRNA was detected in 33 (78.6%) and was highly expressed in HCC tissues, suggesting that HCCA3 overexpression is a very common molecular event involved in the pathogenesis of HCC. These findings indicate that HCCA3 mRNA is overexpressed preferentially in HCC and can serve as a tumor biomarker for HCC [7,18,23,36,37]. The three patients with macroscopic portal vein tumor thrombi all had significantly high expression of HCCA3 mRNA, implying that HCCA3 overexpression is a late event in HCC carcinogenesis. Compared with pathological features, HCCA3 mRNA expression was associated with invasion of the tumor capsule and with adjacent small satellite nodule lesions (P < 0.05), indicating that HCCA3 mRNA expression is a factor in HCC invasiveness and metastasis [37,38]. Although the function of HCCA3 is still unknown, our results suggest that up-regulation of HCCA3 mRNA expression may play an important role in the development and/or progression of hepatocellular carcinoma [36-38,42,45]. This finding demonstrates that it is possible to identify previously unknown, differentially expressed genes from a small amount of clinical samples [39-42,56]. Information about such alterations in HCCA3 gene expression could be useful in elucidating the genetic events in HCC pathogenesis, developing new diagnostic markers, or determining novel therapeutic targets.
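The group comparisons above (χ² or Fisher's exact test at P < 0.05) can be reproduced with standard tools; a minimal sketch in Python, using hypothetical 2x2 counts since the full contingency tables are not reported in the text:

import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# Hypothetical 2x2 table: rows = HCCA3 mRNA (positive, negative),
# columns = tumor capsule invasion (present, absent). Counts are illustrative only.
table = np.array([[28, 5],
                  [4, 5]])

chi2, p, dof, expected = chi2_contingency(table)
# Fall back to Fisher's exact test when any expected count is below 5,
# the usual rule of thumb for small samples.
if (expected < 5).any():
    _, p = fisher_exact(table)

print(f"P = {p:.4f} -> {'significant' if p < 0.05 else 'not significant'} at the 0.05 level")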
2018-04-03T06:08:15.718Z
2001-12-15T00:00:00.000
{ "year": 2001, "sha1": "026aa5949404ca6c53e8fa8d14c046d3b83d536c", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3748/wjg.v7.i6.821", "oa_status": "HYBRID", "pdf_src": "Adhoc", "pdf_hash": "378a3e6f331c400e27df61bf62d0c3dcf98107b4", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
55283613
pes2o/s2orc
v3-fos-license
Towards understanding the diversity of banana bunchy top virus in the Great Lakes region of Africa

The genetic variability of banana bunchy top virus (BBTV) isolates from the Great Lakes region of Africa (GLRA), spanning Burundi, the Democratic Republic of the Congo and Rwanda, was assessed to better understand BBTV diversity and its epidemiology for improved disease management. DNA-R and DNA-S fragments of the virus genome were amplified and sequenced in this study. These two BBTV fragments were previously used to classify isolates into the South Pacific and the Asian groups. Phylogenetic analyses based on nucleotide sequences involving GLRA isolates and those obtained from the GenBank database were carried out. Sequence similarity for both DNA-R and DNA-S fragments ranged from 99.1 to 100.0% among the GLRA isolates, and from 96.2 to 100.0% and 89.7 to 94.3% between the GLRA isolates and those previously clustering in the South Pacific and the Asian groups, respectively. These results showed that GLRA isolates belong to the South Pacific group and are phylogenetically close to the reference Indian isolate. The similar banana cultivars and BBTV isolates across the GLRA implied that the disease may have mainly spread through exchange of planting material (suckers) between farmers. Thus, farmers' awareness and quarantine measures should be implemented to reduce BBTV spread in the GLRA. INTRODUCTION Musa spp. (banana and plantain) is a staple food crop for approximately 400 million people worldwide and nourishes over 70 million people in sub-Saharan Africa (AATF, 2003). This crop is ranked first in terms of contribution to the total annual agricultural production in Burundi and Rwanda, while it is second after cassava in the Democratic Republic of the Congo (DR Congo) (FAOSTAT, 2009). The perennial nature of banana, compared with other staples, allows households to access food all year round, providing significant amounts of micronutrients (Kumar et al., 2011). Among banana cultivars grown in Africa, plantain types (AAB genome) are mainly found in the humid lowlands of West and Central Africa, while the highland cooking and beer bananas (AAA-EA), which contribute approximately 30% of world banana production, are common in the Eastern African highlands (Tenkouano et al., 2003). Eastern Africa, including the Great Lakes zone, is considered a secondary centre of diversity for the highland banana (Karamura et al., 1998; Tenkouano et al., 2003), where smallholder farmers grow a mixture of 5 to 10 different cultivars around their homesteads (AATF, 2003). Banana plantations are subject to various natural calamities, in particular viral diseases, limiting their production. Among the viral infections, banana bunchy top disease (BBTD) is reported as the most destructive disease (Dale, 1987; Islam et al., 2010; Stainton et al., 2012). BBTV spreads from one location to another by exchange of infected planting material and from plant to plant through the banana aphid, Pentalonia nigronervosa Coquerel (Hemiptera, Aphididae), but is not transmitted mechanically (Thomas and Dietzgen, 1991; Foottit et al., 2010). The banana aphid transmits the virus with high host specificity to Musa spp. 
in a circulative and persistent manner (Hafner et al., 1995; Hogenhout et al., 2008; Foottit et al., 2010). In the plant, replication of these circulative viruses is frequently restricted to the phloem, providing a route for uptake and inoculation of viruses between plants via stylet-feeding aphids (Hogenhout et al., 2008). The virus is also transmitted over long distances through the movement of BBTV-infected planting materials (Kumar and Hanna, 2008; Vishnoi et al., 2009; Kumar et al., 2011). BBTD is easily recognizable from other banana diseases by its characteristic symptoms consisting of dark green streaks on leaves and petioles, marginal leaf chlorosis, dwarfing of the plant, and leaves that stand more erect and bunched at the top of the pseudostem, forming a rosette with a 'bunchy top' appearance (Magee, 1927; Su et al., 2003). BBTV is a member of the family Nanoviridae, genus Babuvirus, belonging to a group of circular single-stranded DNA (cssDNA) viruses (Allen, 1987; Amin et al., 2008; Karan, 1995). It is an isometric virus with a genome consisting of at least 6 fragments (Harding et al., 1993; Horser et al., 2001; Hu et al., 2007), and two components were considered in this study. The DNA-R encodes the 'master' Rep (M-Rep), which directs self-replication in addition to replication of the other BBTV genome fragments (Harding et al., 1993; Karan et al., 1994; Theresa, 2008). On the other hand, the coat protein (CP) is encoded by DNA-S, an integral fragment of the BBTV genome (Horser et al., 2001). Based on sequence analysis of the DNA-R and DNA-S (CP) fragments, respectively, Karan et al. (1994), Wanitchakorn et al. (2000) and Kumar et al. (2011) demonstrated that BBTV isolates can be clustered into two distinct groups: the 'South Pacific' group comprising isolates from Australia, the South Pacific region, South Asia (that is, India, Pakistan) and Africa, and the 'Asian' group comprising isolates from China, Indonesia, Japan, the Philippines, Taiwan and Vietnam (Horser et al., 2001). Although BBTD has long been recognised (Magee, 1927), molecular characterisation of BBTV began in the early 1990s (Harding et al., 1993). In Africa, only a handful of BBTV isolates from sub-Saharan Africa (SSA) have been characterized (Wanitchakorn et al., 2000; Kumar et al., 2011), including only a single isolate originating from Burundi (accession AF148943). To date, significant molecular characterization using a substantial number of samples from the African Great Lakes region is lacking. To better understand BBTV diversity and its epidemiology for accurate BBTD management, knowledge of the molecular nature of BBTV in Africa is required. In this study, the DNA-R fragment and the coat protein (CP) gene (Wanitchakorn et al., 2000; Horser et al., 2001; Furuya et al., 2005; Kumar et al., 2011) were used to characterize BBTV isolates from the African Great Lakes region. BBTV isolates from the GLRA were compared with isolates already available in existing GenBank databases to assess their relationship with the Asian and South Pacific groups. In addition, the influence of sampling sites at different altitudes and of banana cultivars on sequence mutations within the GLRA was considered. 
Sampling Banana leaf samples were collected in regions affected by BBTD in three countries, namely Burundi, DR Congo and Rwanda, from April to May 2010. Duplicate pieces of banana leaves of approximately 4 cm² each were taken from the youngest leaf of a banana plant displaying advanced BBTD symptoms. Leaf pieces were placed in individual Petri dishes lined with silica gel for the duration of transport and transferred to the laboratory, where they were extracted and stored at -20°C pending use (Chase and Hills, 1991). In all, 37 samples were collected from five Provinces of Burundi (Bubanza, Bujumbura Rural, Bururi, Cibitoke and Makamba), 22 from three districts in Eastern South Kivu, DR Congo (Kabare, Nyangezi and Kamanyola), and 20 from the Rusizi district of the Western Province of Rwanda, giving a total of 79 samples. These samples were collected from diverse local banana genotypes, namely AAA-EA, ABB, AAB and AABB types, cultivated at different altitudes across the three countries. Diagnostic tests confirming the viral status of samples were performed using previously described PCR analysis (Harding et al., 1993; Thomson and Dietzgen, 1995). The PCR reactions were set up in a final volume of 50 µl comprising 5 µl of crude extract (diluted 1:100 in distilled water), 5 µl of 10x PCR buffer (Roche), 6 µl of MgCl2 (25 mM), 1.2 µl of dNTP mix (200 µM each), 1 µl of each primer (0.5 µM), 0.25 µl (1.25 u/50 µl) of Taq DNA polymerase obtained from Fermentas, France, and sterile distilled water (30.55 µl) added to make up the final volume (Amin et al., 2008; Burns et al., 1995). The PCR procedure was performed using a MyCycler from Bio-Rad, Belgium. The thermocycling scheme consisted of denaturation at 94°C for 4 min; 40 cycles of 30 s to 1 min at 94°C, 1 min at 52°C and 2 min at 72°C; followed by a final elongation step at 72°C for 10 min. The amplified products were visualized by electrophoresis in a 1% (w/v) agarose gel using ethidium bromide staining along with a 100 bp ladder from Fermentas, France. Gels were then photographed on a digital gel documentation system. PCR products were quantified in ng/µl using a NanoDrop ND-1000 spectrophotometer, requiring an absorbance ratio of at least 1.80 at A260/280. Specific amplified products for each of the DNA-R and CP fragments were then shipped to Macrogen (South Korea) for sequencing with the same primer pairs. Phylogenetic analyses (sequence alignment and phylogenetic tree) of BBTV nucleotide sequences The nucleotide sequences of the BBTV DNA-R and CP fragments of the GLRA isolates were compared in a pairwise matrix with existing BBTV and Abaca bunchy top virus (ABTV) sequences obtained from the GenBank database using the Basic Local Alignment Search Tool (BLAST) available at the National Centre for Biotechnology Information (NCBI) (Theresa, 2008; Vishnoi et al., 2009). Multiple alignments for sequence comparison were performed using CLUSTALO (Thomson and Dietzgen, 1995; Amin et al., 2008). The genetic diversity of BBTV isolates was determined between GLRA isolates and reference isolates from the previously described Asian and South Pacific groups, including a previously sequenced Burundian isolate (AF148943) and other isolates from sub-Saharan Africa (Kumar et al., 2011). The consensus trees were generated using the neighbour-joining algorithm with 100 bootstrap replications in SeaView version 4.2.9 (Gouy et al., 2010). 
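The tree-building step described above (neighbour joining with 100 bootstrap replicates) was done in SeaView; as an illustrative sketch only, the same workflow can be mirrored in Biopython, with a hypothetical alignment file name standing in for the actual data:

from Bio import AlignIO, Phylo
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor
from Bio.Phylo.Consensus import bootstrap_consensus, majority_consensus

# Hypothetical input: aligned BBTV CP nucleotide sequences (GLRA isolates plus
# GenBank reference isolates).
alignment = AlignIO.read("bbtv_cp_aligned.fasta", "fasta")

# Neighbour-joining on a simple identity-based distance matrix.
constructor = DistanceTreeConstructor(DistanceCalculator("identity"), method="nj")

# Majority-rule consensus of 100 bootstrap replicates, as in the paper.
tree = bootstrap_consensus(alignment, 100, constructor, majority_consensus)
Phylo.draw_ascii(tree)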
PCR detection and sequencing of BBTV BBTV was confirmed in all 79 samples collected from symptomatic banana plants using a primer pair targeting the putative replicase gene (349 bp) of the BBTV genome. Among these positive samples, the BBTV DNA-R and CP fragments of 27 representative samples covering the different localities and banana varieties were amplified using the corresponding primer pairs for each fragment. Among those samples, 14 isolates were successfully amplified for both DNA-R and CP, in addition to 8 further isolates for DNA-R and 5 for CP, giving totals of 22 and 19 isolates, respectively. The PCR products of the DNA-R (1111 bp) and CP (550 bp) fragments (Figure 1) were sequenced and used in comparisons. Sequence analysis of BBTV based on the coat protein fragment The phylogenetic analysis was carried out using the sequences of a 475 bp product representing the BBTV-CP fragment. The sequence comparisons showed a nucleotide sequence identity between the BBTV isolates from the GLRA (sequenced in this study) greater than 99%. The GLRA isolates showed nucleotide sequence identity ranging from 97.2 to 99.7% with the South Pacific group, which includes isolates from sub-Saharan Africa (SSA), whilst they shared only between 89.8 and 94.3% identity with the CP nucleotide sequences of Asian isolates. On the other hand, pairwise comparisons between CP nucleotide sequences from the Asian and South Pacific groups showed higher sequence variability among isolates of the Asian group (92.0 to 99.3%) than among isolates from the South Pacific (95.2 to 100.0%). Phylogenetic analysis based on the BBTV CP nucleotide sequences confirmed the clustering of BBTV isolates into two major groups, the Asian and the South Pacific groups. The South Pacific group consists of all GLRA isolates, including the Burundian isolate (AF148943) deposited earlier in GenBank, and the Indian isolate (EF584544), followed by isolates from sub-Saharan Africa and Fiji (Figure 2). Within the South Pacific group, three sister subgroups with high bootstrap support (99%) were distinguished. The first subgroup includes all GLRA isolates (sequenced in this study), the previously reported Burundian isolate (AF148943) and the Indian isolate (EF584544). The second subgroup is represented by all sub-Saharan isolates, while a single isolate from Fiji (AF148944) was classified in the third subgroup. The Asian group was divided into two main subgroups, one including isolates from China, Japan, the Philippines and Taiwan, and one comprising Vietnamese isolates, with bootstrap support of 92 and 56%, respectively (Figure 2). Sequence analysis of BBTV based on the DNA-R genome fragment Sequence analysis was carried out using a 238 bp DNA fragment for each of the 22 isolates, corresponding to the core region of the BBTV DNA-R. The core region was used for sequence comparisons because it matches the length of the majority of reported sub-Saharan African BBTV isolates available in the GenBank database. Nucleotide sequence comparisons showed greater than 99% identity among GLRA isolates. BBTV GLRA isolates showed high levels of nucleotide sequence similarity with the South Pacific group (96.2 to 100.0%) compared with the Asian group isolates (89.7 to 93.4%). In addition, the nucleotide sequence variability was rather high within the Asian group (89.3 to 99.3%) compared with the South Pacific group including the GLRA isolates (95.8 to 100.0%). 
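The pairwise identity percentages quoted above can be computed directly from an alignment; a minimal sketch, using one common column-counting convention (positions where both sequences are gaps are skipped) and hypothetical aligned fragments:

from itertools import combinations

def percent_identity(a: str, b: str) -> float:
    # Percent identity between two aligned, equal-length nucleotide sequences.
    cols = [(x, y) for x, y in zip(a.upper(), b.upper()) if not (x == "-" and y == "-")]
    return 100.0 * sum(x == y for x, y in cols) / len(cols)

# Hypothetical aligned fragments keyed by isolate name.
isolates = {"GLRA-1": "ATGCATGCAT", "AF148943": "ATGCATGCAT", "EF584544": "ATGCTTGCAT"}
for (n1, s1), (n2, s2) in combinations(isolates.items(), 2):
    print(f"{n1} vs {n2}: {percent_identity(s1, s2):.1f}%")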
Phylogenetic analysis based on Rep sequences using the neighbour-joining method also confirmed previous reports of the clustering of BBTV isolates into the Asian and the South Pacific groups, with high bootstrap support (100%). Four subgroups were distinguished among South Pacific isolates: the first includes all GLRA and SSA isolates followed by the Indian isolate (AF416470-In); the second subgroup includes isolates from Australia, Fiji, Tonga, Hawaii and Pakistan; while the Egyptian isolate forms its own subgroup. The GLRA and other SSA isolates show the closest relationship with a Maharashtra isolate from India. DISCUSSION This study contributed to better knowledge of the GLRA BBTV genome in comparison with other isolates from the South Pacific and Asian zones, based on two BBTV genome fragments (DNA-R and CP). The BBTV-CP GLRA sequences compared with those of the South Pacific and the Asian groups showed nucleotide differences ranging from 0.7 to 2.8% and 5.7 to 10.2%, respectively. This corroborates previous estimations of 3% variability among the South Pacific group isolates and around 6% across the Asian group isolates (Wanitchakorn et al., 2000). On the other hand, using the BBTV DNA-R (Rep) fragment of GLRA isolates, nucleotide differences of 0.9 to 3.8% and 6.6 to 10.3% were comparable to the previously reported averages of 3.8% among South Pacific group isolates and approximately 10% between the two groups (Karan et al., 1994). Additionally, the phylogenetic analysis using these two genome fragments strongly confirmed that all GLRA isolates belong to the South Pacific group (Figures 2 and 3). Among the South Pacific isolates, based on the CP nucleotide sequences, the Indian isolate (EF584544) showed a closer relationship with the GLRA isolates than other SSA isolates did. Using the core region of the DNA-R fragment, all BBTV isolates from sub-Saharan Africa grouped together, followed by the Indian isolate. Additionally, the interrelationships among the SSA isolates showed that the Cameroon isolate (JF755989) fell within the group of GLRA isolates. Karan (1995) had suggested that BBTV infections may have had two major sources, one in Asia and another in the South Pacific, while Stainton et al. (2012), based on evidence of reassortment and recombination events within and between the Asian and the South Pacific BBTV subgroups, supported the hypothesis of a common geographical origin for both subgroups. Irrespective of the means of the first BBTV introduction, the GLRA isolates fall within the South Pacific group and may have spread through either the traditional farmers' practices of intra- and inter-regional exchange of suckers for planting material, or the introduction of infected plants from research stations (for example, as observed in Rwanda during the survey, the banana fields of the ISAR research station at Bugarama, Rusizi valley, were reported to have contributed to the spread of BBTV in surrounding areas). Aphids may have extended the spread between plantations at a local level (Kumar et al., 2011). The two gene sequence-based phylogenies (Figures 2 and 3) suggest that the virus isolates from the GLRA could have originated from India rather than from other countries of the South Pacific, through exchange of banana plantlets that had not been indexed as virus-free before the development of molecular diagnostic tools. 
In the Great Lakes region, previous survey work (Sebasigari and Stover, 1988) suggested in 1987 that BBTD might have been present since the early 1970s in the Rusizi valley (encompassing parts of Burundi and Rwanda), while the DNA-R sequence variation was around 0.9% among GLRA isolates. In other countries within the South Pacific group, such as Pakistan, DNA-R sequence variations of 0.45% were reported 20 years after disease identification, compared with 2% reported over 80 years in Australia (Karan et al., 1994). The BBTV isolates across the three countries of the GLRA were grouped together. This suggests a common origin of the GLRA isolates, which were most likely distributed through exchange of planting material. Figure 2. Neighbour-joining tree showing relationships based on BBTV CP nucleotide sequences of 19 isolates collected in the African Great Lakes Region (GLRA) compared with representative BBTV and ABTV isolates, both obtained from the GenBank database. Figure 3. Neighbour-joining tree illustrating the BBTV DNA-R sequence relationships among 22 isolates from the Great Lakes Region of Africa (GLRA) compared with 26 representative isolates from the GenBank database.
2018-12-06T11:11:07.168Z
2015-02-12T00:00:00.000
{ "year": 2015, "sha1": "c6eb80bb0373695da99c7d0e92de789999c3f0d2", "oa_license": "CCBY", "oa_url": "https://academicjournals.org/journal/AJAR/article-full-text-pdf/F26793950392.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "c6eb80bb0373695da99c7d0e92de789999c3f0d2", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
261682029
pes2o/s2orc
v3-fos-license
A new class of multiple nonlocal problems with two parameters and variable-order fractional $p(\cdot)$-Laplacian

In the present manuscript, we focus on a novel tri-nonlocal Kirchhoff problem, which involves the $p(x)$-fractional Laplacian equations of variable order. The problem is stated as follows:
\begin{eqnarray*}
\left\{
\begin{array}{ll}
M\Big(\sigma_{p(x,y)}(u)\Big)(-\Delta)^{s(\cdot)}_{p(\cdot)}u(x) =\lambda |u|^{q(x)-2}u\left(\int_\O\frac{1}{q(x)} |u|^{q(x)}dx \right)^{k_1}+\beta|u|^{r(x)-2}u\left(\int_\O\frac{1}{r(x)} |u|^{r(x)}dx \right)^{k_2} \quad \mbox{in }\Omega, \\
u=0 \quad \mbox{on }\partial\Omega,
\end{array}
\right.
\end{eqnarray*}
where the nonlocal term is defined as
$$ \sigma_{p(x,y)}(u)=\int_{\Omega\times \Omega}\frac{1}{p(x,y)}\frac{|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}} \,dx\,dy. $$
Here, $\Omega\subset\mathbb{R}^{N}$ represents a bounded smooth domain with $N\geq2$. The function $M(s)$ is given by $M(s) = a - bs^\gamma$, where $a\geq 0$, $b>0$, and $\gamma>0$. The parameters $k_1$, $k_2$, $\lambda$ and $\beta$ are real, while the exponents $p(x)$, $s(\cdot)$, $q(x)$, and $r(x)$ are continuous and can change with respect to $x$. To tackle this problem, we employ some new methods and variational approaches along with two specific tools, namely the Fountain theorem and the symmetric Mountain Pass theorem. By utilizing these techniques, we establish the existence and multiplicity of solutions for this problem separately in two distinct cases: when $a>0$ and when $a=0$. To the best of our knowledge, these results are the first contributions to research on the variable-order $p(x)$-fractional Laplacian operator. Statement of the Problem and the Main Results Given $N \geq 2$ and a smooth bounded domain $\Omega \subset \mathbb{R}^{N}$, the goal of this paper is to investigate the existence and multiplicity of solutions for the variable-order $p(x)$-Kirchhoff tri-nonlocal fractional problem (1.1) stated above, where $N > s(x, y)p(x, y)$ for all $(x, y) \in \Omega \times \Omega$, $\lambda, \beta$ are two real parameters, $k_1, k_2 > 0$, $M(x) = a - bx^{\gamma}$ with $a \geq 0$ and $b, \gamma > 0$, and $q, r$ are continuous real functions on $\overline{\Omega}$. The operator $(-\Delta)^{s(\cdot)}_{p(\cdot)}$ is referred to as the $p(x)$-fractional Laplacian with variable order, and it is defined as
$$ (-\Delta)^{s(\cdot)}_{p(\cdot)}u(x)=\mathrm{P.V.}\int_{\mathbb{R}^N}\frac{|u(x)-u(y)|^{p(x,y)-2}\,(u(x)-u(y))}{|x-y|^{N+s(x,y)p(x,y)}}\,dy $$
for any $u \in C^{\infty}_0(\mathbb{R}^N)$, where the notation P.V. means the Cauchy principal value. As problem (1.1) involves integrals over the domain $\Omega$, it deviates from being a pointwise identity. Consequently, it is commonly referred to as a tri-nonlocal problem due to the presence of the three integrals $\sigma_{p(x,y)}(u)$, $\int_\Omega\frac{1}{q(x)}|u|^{q(x)}\,dx$ and $\int_\Omega\frac{1}{r(x)}|u|^{r(x)}\,dx$. 
In recent years, a wide class of problems involving nonlocal operators has received increasing attention and acquired renewed relevance due to their occurrence in both pure and applied mathematics, for instance in finance, the thin obstacle problem, biology (such as the interaction of bacteria), probability, optimization and other fields. In the current work, our attention is focused on a very interesting nonlocal operator known as the fractional $p(x)$-Laplacian with variable order. This type of operator represents an extension and a combination of many other operators. Indeed, the nonlocal fractional $p$-Laplacian, which has been extensively studied in the literature, is defined as
$$ (-\Delta)^{s}_{p}u(x)=\mathrm{P.V.}\int_{\mathbb{R}^N}\frac{|u(x)-u(y)|^{p-2}\,(u(x)-u(y))}{|x-y|^{N+sp}}\,dy. $$
During this time, problems involving variable exponents have attracted many researchers [9,10,11]. These types of problems primarily arise from the $p(x)$-Laplace operator $\operatorname{div}(|\nabla u|^{p(x)-2}\nabla u)$, which serves as a natural extension of the classical $p$-Laplace operator $\operatorname{div}(|\nabla u|^{p-2}\nabla u)$ when $p$ is a positive constant. However, these operators possess a more intricate structure due to their lack of homogeneity; hence, problems involving the $p(x)$-Laplacian become more delicate. Moreover, concerning nonlocal problems involving the $p(x)$-Laplacian, we can refer to [8,13,14,15,17,27,28,29] and the references therein. For instance, in [17], the authors focused their study on a specific fourth-order bi-nonlocal elliptic equation of Kirchhoff type with Navier boundary conditions. By using a variational method and critical point theory, the authors obtained a nontrivial weak solution. Consequently, the idea of replacing the fractional $p$-Laplacian by its variable-exponent version was initiated. For this purpose, Kaufmann et al. [18] introduced the fractional $p(x)$-Laplacian $(-\Delta)^{s}_{p(\cdot)}$, obtained from the constant-exponent operator above by letting the exponent depend on $(x,y)$. To address such problems, the authors considered fractional Sobolev spaces with variable exponents together with variational methods and obtained existence results. Simultaneously, many works involving the variable-order fractional Laplacian $(-\Delta)^{s(\cdot)}$ have emerged (see [22]). Furthermore, the combination of these operators leads to the emergence of the so-called fractional $p(x)$-Laplacian with variable order. This class of operators has captured the attention of numerous researchers [1,12,20,22,24,26], who have investigated various aspects, including the existence, multiplicity, and qualitative properties of solutions. Additionally, there are several works focusing on the nonlocal fractional $p(x)$-Laplacian with variable order [1,6,7,12,22,24] and the references therein. For instance, in [24], the authors studied the existence and multiplicity of solutions for a fractional $p(\cdot)$-Kirchhoff type problem with variable order $s(\cdot)$, posed for $(x, y) \in \mathbb{R}^N \times \mathbb{R}^N$ under the condition $N > p(x, y)s(x, y)$, with $s(\cdot) : \mathbb{R}^{2N} \to (0, 1)$, $p(\cdot) : \mathbb{R}^{2N} \to (1, \infty)$ and $p(x) = p(x, x)$ for $x \in \mathbb{R}^N$, where $M$ is a continuous Kirchhoff-type function, $g(x, v)$ is a Carathéodory function and $\mu > 0$ is a parameter. The authors obtained at least two distinct solutions for the above problem by applying a generalized abstract critical point theorem. In addition, under weaker conditions, they also proved the existence of one solution and of infinitely many solutions using the mountain pass lemma and the fountain theorem, respectively. Motivated by the aforementioned works, the present work aims to study problem (1.1) mentioned above. 
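For orientation, the operators just mentioned are all special cases of the general definition recalled above; the following display is our summary of those relationships, derived from that definition rather than quoted from the original sources:
$$ (-\Delta)^{s(\cdot)}_{p(\cdot)}u(x)=\mathrm{P.V.}\int_{\mathbb{R}^N}\frac{|u(x)-u(y)|^{p(x,y)-2}\,(u(x)-u(y))}{|x-y|^{N+s(x,y)p(x,y)}}\,dy, \qquad \begin{cases} s(x,y)\equiv s: & \text{fractional } p(x)\text{-Laplacian } (-\Delta)^{s}_{p(\cdot)} \text{ of [18]},\\ p(x,y)\equiv p: & \text{variable-order fractional } p\text{-Laplacian},\\ p(x,y)\equiv 2,\ s(x,y)\equiv s: & \text{classical fractional Laplacian } (-\Delta)^{s}. \end{cases} $$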
The main difficulties and innovations lie in the form of the new Kirchhoff function $M(s) = a - bs^{\gamma}$, derived from a negative Young's modulus, which occurs when the atoms are pulled apart rather than compressed together, resulting in negative deformation. In the case $a = 0$, to overcome this challenge, inspired by [3], our main approach is based on the notion of the first eigenvalue associated with our operator. The specificity of this tool is that in the literature we only find the recent paper [3], in which the authors introduce the $s(\cdot, \cdot)$-fractional Musielak-Sobolev spaces $W^{s(x,y)}L_{\varphi(x,y)}(\Omega)$. By employing Ekeland's variational principle, the authors establish the existence of a positive value $\lambda^{**} > 0$ such that any $\lambda$ within the interval $(0, \lambda^{**})$ serves as an eigenvalue for an associated nonlocal problem posed on a bounded open subset $\Omega$ of $\mathbb{R}^N$ with $C^{0,1}$-regularity and bounded boundary. It is noteworthy that the operator considered there represents a generalization of $(-\Delta)^{s(\cdot)}_{p(\cdot)}$ (obtained whenever we take $a(x,t) = t^{p(x,\cdot)-2}$). Thus, this characterization is applicable in our case. However, as far as our knowledge extends, there are no existing results regarding the existence and multiplicity of solutions for problem (1.1) involving the new tri-nonlocal Kirchhoff function and the $p(x)$-fractional Laplacian operator with variable order. The structure of this paper is as follows. In the second section, an abstract framework is presented, where we review some preliminary results that will be utilized in the subsequent sections. The third section is specifically focused on establishing the Palais-Smale condition, separately for the cases $a > 0$ and $a = 0$. The subsequent sections are dedicated to proving the main results of this study. Generalized Lebesgue and Sobolev spaces In this section, we provide a brief review of the definitions and key results concerning Lebesgue spaces with variable exponents and generalized Sobolev spaces. For a more comprehensive understanding, interested readers are referred to [9,10,19] and the references therein. For this purpose, let us define $C_+(\overline{\Omega})=\{p\in C(\overline{\Omega}):\ p(x)>1\ \text{for all}\ x\in\overline{\Omega}\}$. For $p(\cdot) \in C_+(\overline{\Omega})$, the variable exponent Lebesgue space $L^{p(\cdot)}(\Omega)$ is defined by
$$ L^{p(\cdot)}(\Omega)=\Big\{u:\Omega\to\mathbb{R}\ \text{measurable}:\ \int_\Omega |u(x)|^{p(x)}\,dx<\infty\Big\}. $$
This space is endowed with the so-called Luxemburg norm, given by
$$ |u|_{p(\cdot)}=\inf\Big\{\lambda>0:\ \int_\Omega\Big|\frac{u(x)}{\lambda}\Big|^{p(x)}dx\le 1\Big\}, $$
and $(L^{p(\cdot)}(\Omega), |u|_{p(\cdot)})$ becomes a Banach space, which we call the variable exponent Lebesgue space. Now, in order to establish the (PS) condition cited in Section 3, we state the following lemma for variable exponent Lebesgue spaces (see [11]): let $h_1 \geq 1$ a.e. in $\Omega$ and let $h_2 : \Omega \to \mathbb{R}$ be a measurable function such that $h_1 h_2 \geq 1$ a.e. in $\Omega$; then for any $u \in L^{h_1(\cdot)h_2(\cdot)}(\Omega)$, $|u|^{h_2(\cdot)} \in L^{h_1(\cdot)}(\Omega)$. The generalized Sobolev space, denoted by $W^{k,p(\cdot)}(\Omega)$, consists of the functions $u \in L^{p(\cdot)}(\Omega)$ whose weak derivatives $D^{\alpha}u$ up to order $k$ belong to $L^{p(\cdot)}(\Omega)$; equipped with its natural norm, it is a uniformly convex, separable, and reflexive Banach space. Fractional Sobolev spaces with variable exponents In the present part, we recall some properties of the fractional Sobolev spaces with variable exponents which will be useful in the rest of the paper. For more details, we refer to [4,5,6,7,18]. We now give the variational setting of problem (1.1) and state important results to be used later. We set $Q := \Omega \times \Omega$ and define the fractional Sobolev space with variable exponents $X$, equipped with the norm $\|u\|_X = |u|_{p(\cdot)} + [u]_X$, where $[u]_X$ is the Gagliardo-type seminorm associated with the modular
$$ \rho_{X_0}(u)=\int_{Q}\frac{|u(x)-u(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy. \tag{2.1} $$
Then $(X, \|\cdot\|_X)$ is a separable reflexive Banach space. Now, let us define the subspace $X_0$ of $X$ which incorporates the boundary condition $u = 0$ on $\partial\Omega$, and equip $X_0$ with the norm $\|u\|_{X_0} = [u]_X$. Thus, we have the following theorem. Theorem 2.1. 
Let $\Omega$ be a smooth bounded domain in $\mathbb{R}^N$, let $s(\cdot, \cdot) \in (0, 1)$ and let $p(\cdot, \cdot)$ satisfy $(H_1)$ and $(H_2)$ with $s^+ p^+ < N$. Then for any $r \in C_+(\overline{\Omega})$ such that $r(x) < p^*_s(x)$ for all $x \in \overline{\Omega}$, the space $X$ is continuously embedded in $L^{r(\cdot)}(\Omega)$. Moreover, this embedding is compact. The interplay between the norm in $X_0$ and the modular function $\rho_{X_0}$ is studied in the following lemma. Lemma 2.2. Let $u \in X_0$ and let $\rho_{X_0}$ be defined as in (2.1). Then we have the following results: (i) if $\|u\|_{X_0} \geq 1$, then $\|u\|_{X_0}^{p^-} \leq \rho_{X_0}(u) \leq \|u\|_{X_0}^{p^+}$; (ii) if $\|u\|_{X_0} \leq 1$, then $\|u\|_{X_0}^{p^+} \leq \rho_{X_0}(u) \leq \|u\|_{X_0}^{p^-}$. The next result can easily be obtained using the properties of the modular function $\rho_{X_0}$ from Lemma 2.2. Proposition 2.1 ([24,25]). Let $u, u_m \in X_0$, $m \in \mathbb{N}$. Then the following two statements are equivalent: (i) $\lim_{m\to\infty}\|u_m - u\|_{X_0} = 0$; (ii) $\lim_{m\to\infty}\rho_{X_0}(u_m - u) = 0$. In this part, we work in the space $X_0$, which for simplicity we denote by $X$ in the rest of this paper. Considering the variational structure of (1.1), we look for critical points of the corresponding Euler-Lagrange functional $I_{\lambda,\beta} : X \to \mathbb{R}$, defined as
$$ I_{\lambda,\beta}(u)=\widehat{M}\big(\sigma_{p(x,y)}(u)\big)-\frac{\lambda}{k_1+1}\left(\int_\Omega\frac{1}{q(x)}|u|^{q(x)}dx\right)^{k_1+1}-\frac{\beta}{k_2+1}\left(\int_\Omega\frac{1}{r(x)}|u|^{r(x)}dx\right)^{k_2+1} $$
for all $u \in X$, where $\widehat{M}(t)=\int_0^t M(\tau)\,d\tau$. It is important to note that $I_{\lambda,\beta}$ is a $C^1(X, \mathbb{R})$ functional, and its derivative $\langle I'_{\lambda,\beta}(u), v\rangle$ can be computed for any $v \in X$. Consequently, critical points of $I_{\lambda,\beta}$ correspond to weak solutions of (1.1). Let $\{u_n\}$ be a $(PS)_c$ sequence of $I_{\lambda,\beta}$; this implies that the following conditions hold: $I_{\lambda,\beta}(u_n) \to c$ and $I'_{\lambda,\beta}(u_n) \to 0$ in $X^*$, where $X^*$ denotes the dual space of $X$. Step 1. Firstly, we aim to prove that the sequence $\{u_n\}$ is bounded in $X$. Assuming the contrary, i.e., supposing that $\{u_n\}$ is unbounded in $X$, up to a subsequence we may assume that $\|u_n\|_X \to \infty$ as $n \to \infty$, and we obtain (3.4). From (1.3) and the fact that $\gamma > 0$ and $k_i > 0$ for $i = 1, 2$, it follows that (3.5) holds. We deduce from (3.4) and (3.5), after passing to a further subsequence if necessary so that $\|u_n\|_X > 1$, that the previous inequalities lead to a contradiction, since $(\gamma + 1)p^- > 1$. Thus, $\{u_n\}$ must be bounded in $X$, and the first assertion is proven. Step 2. Now, we aim to demonstrate that the sequence $\{u_n\}$ has a convergent subsequence in $X$. According to Theorem 2.1, the embedding $X \hookrightarrow L^{q(x)}(\Omega)$ is compact. Since $X$ is a reflexive Banach space, passing, if necessary, to a subsequence, there exists $u \in X$ satisfying (3.6): $u_n \rightharpoonup u$ in $X$, $u_n \to u$ in $L^{q(x)}(\Omega)$, and $u_n(x) \to u(x)$ a.e. in $\Omega$. From (3.2), and then utilizing Hölder's inequality together with (3.6), we can establish the estimate (3.8). Therefore, thanks to the convergence result (3.6), we can deduce that $|u_n - u|_{q(x)} \to 0$ as $n \to \infty$ (3.9). By combining the boundedness of $\{u_n\}$ in $X$ with the estimates (3.8) and (3.9), we conclude (3.10). As $\{u_n\}$ is bounded in $X$, there exist positive constants $c_1$ and $c_2$ for which (3.11) holds; similarly, we obtain (3.12). By (3.3), we have $\langle I'_{\lambda,\beta}(u_n), u_n - u\rangle \to 0$, which means, based on equations (3.11) and (3.12), that (3.13) holds. Since $\{u_n\}$ is bounded in $X$, passing to a subsequence if necessary, we may assume that, as $n \to \infty$,
$$ \Big(\int_{\Omega\times\Omega}\frac{|u_n(x)-u_n(y)|^{p(x,y)}}{|x-y|^{N+s(x,y)p(x,y)}}\,dx\,dy\Big)^{\gamma}\ \to\ t_0 \geq 0. $$
We consider two cases: $t_0 = 0$ and $t_0 > 0$. First, if $t_0 = 0$, then the sequence $\{u_n\}$ converges strongly to $u = 0$ in $X$, and the proof is concluded. However, if $t_0 > 0$, we further examine the two subcases below, according to whether the Kirchhoff coefficient $M\big(\sigma_{p(x,y)}(u_n)\big)$ converges to zero. If it does not, we can find a positive value $\delta > 0$ bounding it away from zero for sufficiently large $n$; as a result, we conclude the boundedness stated in (3.14). Consider the functional $\varphi$ defined for all $u \in X$ by (3.14). To complete our proof, we require the following lemma (Lemma 3.2). Proof. 
By (3.6), we have $u_n \to u$ in $L^{p(x)}(\Omega)$, which, together with (3.10) and Hölder's inequality, implies assertion (i). By making a minor adjustment to the aforementioned proof, we can also establish assertion (ii), but we omit the specific details. As a result, by combining parts (i) and (ii), we can conclude assertion (iii). Consequently, $\|\varphi'(u_n) - \varphi'(u)\|_{X^*} \to 0$ and $\varphi'(u_n) \to \varphi'(u)$. We are now able to conclude the proof of Subcase 2. Utilizing Lemma 3.2 and invoking the fundamental lemma of the variational method (see [21]), we can conclude that $u = 0$; hence we deduce a contradiction. Arguing similarly to Subcase 1 and combining the two cases discussed above, we conclude that
$$ \int_{\Omega\times\Omega}\frac{|u_n(x)-u_n(y)|^{p(x,y)-2}\,(u_n(x)-u_n(y))\,\big((u_n(x)-u(x))-(u_n(y)-u(y))\big)}{|x-y|^{N+p(x,y)s(x,y)}}\,dx\,dy\ \to\ 0. $$
Therefore, by invoking the $(S_+)$ condition and Proposition 2.1, we conclude that $\|u_n\|_X \to \|u\|_X$ as $n \to \infty$, which implies that $I_{\lambda,\beta}$ satisfies the $(PS)_c$ condition. Hence, the proof is now complete. Proof. Let $\{u_n\}$ be a $(PS)_c$ sequence of $I_{\lambda,\beta}$, that is, $I_{\lambda,\beta}(u_n) \to c$ and $I'_{\lambda,\beta}(u_n) \to 0$ in $X^*$, where $X^*$ is the dual space of $X$. Step 1. We will prove that $\{u_n\}$ is bounded in $X$. Let us assume by contradiction that $\{u_n\}$ is unbounded in $X$. Without loss of generality, we can assume that $\|u_n\|_X > 1$ for all $n$; then we have (3.18). For simplicity, with the notation introduced in (3.19), using (3.18) and (3.19) we can write (3.20) in the case $\lambda > 0$, $\beta < 0$. It follows from (1.4) and (3.20) that $\{u_n\}$ is bounded in $X$. Step 2. We will now demonstrate that the sequence $\{u_n\}$ possesses a convergent subsequence in the space $X$. According to Theorem 2.1, the embedding $X \hookrightarrow L^{\tau(x)}(\Omega)$ is compact, where $1 \leq \tau(x) < p^*_s(x)$. Since $X$ is a reflexive Banach space, passing, if necessary, to a subsequence, there exists $u \in X$ such that $u_n \rightharpoonup u$ in $X$, $u_n \to u$ in $L^{\tau(x)}(\Omega)$, and $u_n(x) \to u(x)$ a.e. in $\Omega$ (3.21). From (3.2), we find (3.23); similarly, we obtain (3.24). By (3.17), we have $\langle I'_{\lambda,\beta}(u_n), u_n - u\rangle \to 0$. So, based on the expressions (3.23) and (3.24), we conclude that (3.22) leads to
$$ \int_{\Omega\times\Omega}\frac{|u_n(x)-u_n(y)|^{p(x,y)-2}\,(u_n(x)-u_n(y))\,\big((u_n(x)-u(x))-(u_n(y)-u(y))\big)}{|x-y|^{N+p(x,y)s(x,y)}}\,dx\,dy\ \to\ 0. $$
Therefore, by utilizing the $(S_+)$ condition and Proposition 2.1, we can deduce that $\|u_n\|_X \to \|u\|_X$ as $n \to \infty$, indicating that $I_{\lambda,\beta}$ satisfies the $(PS)_c$ condition. This concludes the proof. Proof of Theorem 1.1 In this part, we prove Theorem 1.1 by applying the mountain pass theorem, see [21]. Proof. Let $u \in X$ with $\|u\|_X < 1$. From (3.3), Lemma 2.2 and the Sobolev embeddings, we get the required lower bound; hence, based on the fact that $\|u\|_X < 1$ and that $p$ satisfies condition (1.3), we infer the result. Proof. Let $\varphi_0 \in C^{\infty}_0(\Omega)$. According to condition (1.3), for $t > 1$ large enough, we obtain the required inequality. Proof of Theorem 1.2 Since $X$ is a reflexive and separable Banach space, there exist $e_i \in X$ and $e^*_i \in X^*$ such that $\langle e_i, e^*_j\rangle = \delta_{ij}$, where $\delta_{ij}$ denotes the Kronecker symbol. We recall the following abstract result ([21]). Let $X_0$ be a Banach space with the norm $\|\cdot\|_{X_0}$ and let $X_i$ be a sequence of subspaces of $X_0$ with $\dim X_i < \infty$ for each $i \in \mathbb{N}$. In addition, set $Y_k = \bigoplus_{i=1}^{k} X_i$ and $Z_k = \overline{\bigoplus_{i=k}^{\infty} X_i}$. For each even functional $J \in C^1(X_0, \mathbb{R})$ and for each $k \in \mathbb{N}$, we suppose that there exist $\rho_k > \gamma_k > 0$ satisfying the usual geometric conditions of the Fountain theorem. So, we obtain Lemma 5.4. Assume that (1.4) holds. Then there exist $\rho > 0$ and $\alpha > 0$ such that $I_{\lambda,\beta}(u) \geq \alpha > 0$ for any $u \in X$ with $\|u\|_X = \rho$. We have completed the proof of Lemma 5.4. 
Therefore, as a consequence, all norms on the finite-dimensional space $W$ are equivalent, implying the existence of a positive constant $C_W$ controlling both $\int_\Omega |u|^{q(x)}\,dx$ and $\int_\Omega |u|^{r(x)}\,dx$ in terms of the norm of $u$. Therefore, we obtain the required estimate. Then, it is deduced from (1.4) that $I_{\lambda,\beta}(u) < 0$. Hence, the proof of Lemma 5.5 is complete.
2023-09-12T06:43:07.999Z
2023-09-09T00:00:00.000
{ "year": 2023, "sha1": "b7e0665ae6f7317f3016f50201b3a340be3f94e1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "b7e0665ae6f7317f3016f50201b3a340be3f94e1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
239978423
pes2o/s2orc
v3-fos-license
CST6 protein and peptides inhibit breast cancer bone metastasis by suppressing CTSB activity and osteoclastogenesis

Background: Bone metastasis is a frequent symptom of breast cancer, and current targeted therapy has limited efficacy. Osteoclasts play critical roles in driving osteolysis and the metastatic outgrowth of tumor cells in bone. Previously we identified CST6 as a secretory protein significantly downregulated in bone-metastatic breast cancer cells. Functional analysis showed that CST6 suppresses breast-to-bone metastasis in animal models. However, the functional mechanism and therapeutic potential of CST6 in bone metastasis are unknown. Methods: Using in vitro osteoclastogenesis and in vivo metastasis assays, we studied the effect and mechanism of extracellular CST6 protein in suppressing osteoclastic niches and bone metastasis of breast cancer. A number of peptides containing the functional domain of CST6 were screened for inhibition of bone metastasis. The efficacy, stability and toxicity of CST6 recombinant protein and peptides were evaluated in preclinical metastasis models. Results: We show here that CST6 inhibits osteolytic bone metastasis by inhibiting osteoclastogenesis. Cancer cell-derived CST6 enters osteoclasts by endocytosis and suppresses the cysteine protease CTSB, leading to up-regulation of the CTSB hydrolytic substrate SPHK1. SPHK1 suppresses osteoclast maturation by inhibiting RANKL-induced p38 activation. Importantly, recombinant CST6 protein effectively suppresses bone metastasis in vitro and in vivo. We further identified several peptides mimicking the function of CST6 to suppress cancer cell-induced osteoclastogenesis and bone metastasis. Pre-clinical analyses of CST6 recombinant protein and peptides demonstrated their potential in the treatment of breast cancer bone metastasis. Conclusion: These findings reveal the CST6-CTSB-SPHK1 signaling axis in osteoclast differentiation and provide a promising approach to treat bone diseases with CST6-based peptides. Introduction Distant metastasis is the major cause of mortality in patients with breast cancer [1]. The skeleton is the most common site to which breast cancer cells metastasize [2]. Normally, the rigid bone matrix limits the growth of tumor cells; however, bone is also a rich reservoir of nutrients and growth factors that supports tumor growth through the vicious cycle of osteolytic metastasis. Disseminated tumor cells produce soluble factors, such as receptor activator of nuclear factor kappa-B ligand (RANKL), parathyroid hormone-related peptide (PTHrP), interleukin-6, Jagged1, and matrix metalloproteases, to stimulate osteoclast maturation. In turn, enhanced osteoclast activity promotes tumor cell survival and proliferation by releasing nutrients and growth factors embedded in the bone matrix [3-5]. Thus, targeting osteoclasts represents a major approach to interrupt this osteolytic vicious cycle and stop bone metastasis. Indeed, many of the current therapeutic drugs for bone metastasis, including bisphosphonates and the anti-RANKL antibody Denosumab, are osteoclast-targeting agents. However, the clinical benefit of these agents is limited by side effects, high cost, or minimal long-term benefit [6,7]. Development of additional therapeutic strategies based on new understanding of the communication between tumor cells and bone stroma is of paramount clinical significance. 
CST6 belongs to the type 2 cystatin family, whose members are mainly extracellular polypeptide inhibitors of cysteine proteases that prevent excessive proteolysis [8]. Three proteases, Cathepsin B (CTSB), Cathepsin L (CTSL) and Legumain (LGMN), are known to be inhibited by CST6 in human cells. The CST6 protein consists of only 149 amino acids, including a 28-residue signal peptide. Epigenetic silencing of CST6 has been widely observed in multiple cancer types [9-14]. Previously, by secretomic profiling, we found that CST6 is downregulated in bone-tropic breast cancer cells and tumor samples. Functional analysis validated the suppressing role of CST6 in tumor invasion and breast-to-bone metastasis [15]. However, the molecular mechanism of CST6 in metastasis regulation, as well as the clinical potential of CST6 for prognosis and treatment of metastasis, remains unknown. CST6 suppresses osteoclastogenesis and bone colonization of breast cancer cells Previously we showed that CST6 suppresses bone metastasis of breast cancer and observed a reduction of osteoclasts in the metastases caused by CST6-expressing cancer cells [15]. To further assess the effect of tumor-derived CST6 on osteoclastogenesis, CST6 was overexpressed in SCP2, a bone-tropic breast cancer cell line [16] with weak expression of CST6 [15]. CST6 overexpression enhanced the extracellular level of the protein in the conditioned medium (CM) of SCP2 (Fig. S1A). Osteoclastogenesis analysis by culturing primary mouse bone marrow with SCP2 CM showed that CST6 overexpression greatly diminished CM-induced osteoclast maturation from bone marrow cells, as shown by tartrate-resistant acid phosphatase (TRAP) staining (Fig. 1A). In contrast, CST6 knockdown with two different short-hairpin RNAs in the weakly bone-metastatic breast cancer cells SCP4 [16] reduced the secreted level of CST6 (Fig. S1B) and enhanced the capacity of SCP4 CM to induce osteoclastogenesis of bone marrow cells (Fig. 1B). Importantly, xenograft metastasis analyses by intracardiac injection of these cancer cells into mice showed that CST6 significantly reduced the number of TRAP+ osteoclasts along the tumor-bone interface and inhibited bone matrix destruction, leading to suppression of bone metastasis (Fig. 1C-G). We further found that CST6 was expressed and secreted only by tumor cells, but not by osteoclasts derived from primary bone marrow or RAW264.7 cells (Fig. S1C). These data validated a role of tumor-derived CST6 in regulating osteoclasts in the bone microenvironment for bone metastasis. To elucidate the direct cellular target of extracellular CST6 in osteoclast regulation, we expressed and purified the human CST6 recombinant protein, and then treated murine bone marrow cells cultured in RANKL-containing medium with the CST6 protein for osteoclastogenesis assays. The results showed that human recombinant CST6 significantly suppressed RANKL-induced differentiation of bone marrow cells into osteoclasts in a dosage-dependent manner (Fig. 1H and Fig. S1D), but did not affect the viability of osteoclasts (Fig. S1E). The murine recombinant CST6 protein showed a similar inhibitory effect on osteoclastogenesis (Fig. 1H). We further used the pre-osteoclast RAW264.7 cell line for osteoclastogenesis analysis and also observed inhibition of osteoclastic differentiation by recombinant CST6 protein (Fig. 1I). Several marker genes of osteoclastic differentiation, including Calcr, Nfatc1, Acp5, Ctsk and Fos, were also downregulated by CST6 (Fig. 1J). 
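The marker-gene panel above is a relative qPCR readout; the quantification method is not stated in the text, but a minimal sketch of the conventional 2^-ΔΔCt calculation (all Ct values hypothetical) would look like:

def fold_change(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    # Relative expression by the conventional 2^-ΔΔCt method:
    # ΔCt = Ct(target) - Ct(reference); ΔΔCt = ΔCt(treated) - ΔCt(control).
    ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
    return 2 ** (-ddct)

# Hypothetical Ct values for one marker (e.g., Acp5) against a housekeeping gene,
# comparing CST6-treated with untreated RANKL-induced cultures.
print(fold_change(ct_target=26.1, ct_ref=18.0,
                  ct_target_ctrl=23.9, ct_ref_ctrl=18.1))  # < 1, i.e., downregulated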
Furthermore, the suppression of osteoclastogenesis by recombinant CST6 could be rescued by a CST6-neutralizing antibody (Fig. 1I, J). Notably, CST6 protein did not directly affect the growth of tumor cells (Fig. S1F). Collectively, these data indicated that CST6 directly acts on cells of the osteoclast lineage, instead of tumor cells, to regulate osteoclastogenesis. CST6 inhibits CTSB enzymatic activity for osteoclast suppression CST6 binds to and inhibits cathepsins (CTSB and CTSL) and LGMN through distinct binding sites of the protein [17]. To interrogate the protease target of CST6 in osteoclastogenesis regulation, we cloned the N64A (CST6ΔN) and W135A (CST6ΔW) point mutants of the CST6 protein (Fig. S2A), which were known to selectively diminish the inhibitory effect of CST6 on LGMN and cathepsins, respectively [17]. Enzymatic activity assays confirmed that CST6ΔN and CST6ΔW could only inhibit cathepsins and LGMN, respectively, while the double mutant CST6ΔNW failed to inhibit either protease (Fig. 2A and Fig. S2B). Notably, CST6ΔW and CST6ΔNW completely lost the capacity to suppress osteoclastogenesis, while CST6ΔN remained an inhibitor of osteoclast maturation (Fig. 2B, C). In vivo metastasis studies also showed that the W135A mutation, but not the N64A mutation, diminished the inhibitory effect of CST6 on osteoclasts and bone metastasis (Fig. 2D, E). These results indicated that CTSB or CTSL was the downstream target of CST6 for suppressing osteoclastogenesis. Further, we found that the selective CTSB inhibitor CA-074Me, but not the CTSL inhibitor Z-FY(t-Bu)-DMK, was able to inhibit osteoclast differentiation of primary bone marrow cells and RAW264.7 cells (Fig. 2F, G, and Fig. S2C-E). In addition, Ctsb knockdown in murine primary bone marrow cells with siRNAs also significantly suppressed osteoclastogenesis (Fig. 2H). Taken together, these results suggested that CTSB suppression mediates the role of CST6 in osteoclastogenesis. Cathepsins are lysosomal proteases and function only intracellularly in an acidic environment [18]. To verify that extracellularly derived CST6 could inhibit the intracellular CTSB activity of osteoclasts, RAW264.7 cells were cultured in medium containing recombinant CST6 protein, and the cell lysates were collected for CTSB activity analysis after phosphate-buffered saline (PBS) washing of the cells. The results showed that CST6 treatment indeed inhibited the intracellular CTSB activity of RAW264.7 cells (Fig. 2I). More importantly, gradual accumulation of the CST6 protein inside RAW264.7 cells was observed following extracellular CST6 treatment (Fig. 2J and Fig. S3A). Since osteoclasts are active in endocytosis, it is likely that exogenous CST6 is transported into osteoclasts via endocytosis. Indeed, we observed co-localization of internalized CST6 with the lysosome marker LAMP1 (Fig. S3B). More importantly, treating RAW264.7 cells with the endocytosis inhibitor Dynasore reduced the intracellular accumulation of CST6 in a dosage-dependent manner (Fig. 2K and Fig. S3C). Endocytosis inhibition also rescued the CST6-suppressed osteoclastogenesis of RAW264.7 cells (Fig. 2L). These findings suggested that extracellular CST6 protein can be internalized by pre-osteoclasts through endocytosis to inhibit CTSB and osteoclast differentiation. 
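The CTSB activity values above derive from fluorometric kinetics (detailed in the Methods section); a minimal sketch of how such traces are commonly reduced to an initial-rate activity and a percent inhibition, with all readings hypothetical:

import numpy as np

def initial_rate(minutes, rfu):
    # Enzyme activity estimated as the slope (RFU/min) of the fluorescence trace.
    slope, _intercept = np.polyfit(minutes, rfu, 1)
    return slope

t = np.arange(0, 35, 5)  # fluorescence read every 5 min, as in the assay
control = np.array([0, 210, 430, 640, 850, 1070, 1280])  # hypothetical RFU, untreated lysate
treated = np.array([0, 60, 120, 185, 240, 300, 360])     # hypothetical RFU, CST6-treated

inhibition = 100 * (1 - initial_rate(t, treated) / initial_rate(t, control))
print(f"CTSB inhibition: {inhibition:.0f}%")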
CST6 stabilizes SPHK1 and inhibits p38 in osteoclasts To further elucidate the downstream mechanism of CTSB in osteoclast regulation, we searched the peptidase database MEROPS [19] and found that sphingosine kinase 1 (SPHK1) is one of the CTSB substrates. SPHK1 is cleaved by CTSB [20], and, more importantly, SPHK1 can negatively regulate osteoclast differentiation by phosphorylating sphingosine into sphingosine-1-phosphate (S1P) [21]. Concordantly, we observed that Ctsb knockdown increased SPHK1 expression in primary bone marrow cells (Fig. 3A). Treating RAW264.7 cells with the CST6 protein or the CTSB inhibitor CA-074Me stabilized the intracellular SPHK1 protein in a dose-dependent manner (Fig. 3B). In addition, Sphk1 overexpression in RAW264.7 cells significantly inhibited differentiation of the cells into osteoclasts, while Sphk1 knockdown displayed the opposite effect (Fig. 3C). Notably, simultaneous Sphk1 and Ctsb knockdown in primary bone marrow cells (Fig. 3D) reversed the suppression of osteoclastogenesis caused by Ctsb inhibition (Fig. 3E, F). Similarly, Sphk1 knockdown could also recover the osteoclastogenesis of RAW264.7 cells that was suppressed by treatment with recombinant CST6 (Fig. 3G, H). These results indicated that CST6 attenuates osteoclastogenesis by upregulating SPHK1. Previous studies showed that RANKL-induced p38 signaling activation is critical for osteoclast maturation [22] and that SPHK1 inactivates p38 [21]. We also observed that RANKL treatment led to p38 phosphorylation in pre-osteoclasts, while Ctsb knockdown and Sphk1 overexpression in pre-osteoclasts blocked this effect of RANKL and inhibited p38 phosphorylation (Fig. 3I). Furthermore, treating pre-osteoclasts with CM from CST6-overexpressing cancer cells also diminished RANKL-induced p38 phosphorylation in pre-osteoclasts (Fig. 3J). CST6 treatment also inhibited p38 activation (Fig. 3B). To further validate the involvement of p38 signaling in CST6-suppressed osteoclastogenesis, the p38 inhibitor SB203580 was used to treat RAW264.7 cells cultured in cancer cell CM. It was found that CST6 knockdown in cancer cells enhanced CM-induced p38 phosphorylation and osteoclast maturation of RAW264.7 cells, while SB203580 treatment showed the opposite effect (Fig. 3K, L). Taken together, our data showed that CST6 inhibits CTSB activity and stabilizes SPHK1, leading to suppression of p38 activation and osteoclast maturation. Recombinant CST6 protein suppresses breast cancer bone metastasis Next, we tested whether the recombinant CST6 protein could be used to treat bone metastasis of breast cancer. Nude mice were pre-inoculated with SCP2 cancer cells and then treated with 1 mg/kg CST6 recombinant protein by daily intravenous delivery. The recombinant mutant protein CST6ΔNW was used as a negative control. In accordance with the in vitro function of CST6 on osteoclasts, CST6 treatment efficiently alleviated bone metastasis (Fig. 4A, B), reduced osteoclastogenesis and bone destruction (Fig. 4A, C), and extended survival of the mice (Fig. 4D). These data argue for the potential of CST6 as a protein drug to treat bone metastatic disease. CST6 peptides suppress osteoclastogenesis and breast cancer bone metastasis Peptides have promising potential in clinical applications. Thus, we set out to screen for CST6-mimicking peptides with smaller molecular sizes as candidate drugs for bone metastasis. 
The protein structure of CST6 has been well studied, and it has been reported that a hairpin loop containing the highly conserved Gln-Leu-Val-Ala-Gly (QLVAG) fragment (Fig. S4A, B) is important for CST6 binding to cathepsins for enzymatic inhibition [23-27]. Thus, we designed a series of peptides of various lengths that include the QLVAG fragment or other CST6 domains (Fig. S4A; Table S1). These candidate peptides were produced by recombinant expression (Fig. S4C) or chemical synthesis. We tested their capabilities to inhibit CTSB enzymatic activity and osteoclastogenesis in comparison with wild-type and mutant CST6 proteins. It was found that two QLVAG-containing peptides, GQ86 and DQ51, 86 and 51 amino acids in length, respectively, significantly suppressed CTSB (Fig. 5A) and inhibited osteoclastogenesis of bone marrow cells with dosage-dependent effects similar to those of the full-length CST6 protein (Fig. 5B). In addition, the osteoclast-suppressing performance of these recombinant proteins and peptides was similar to or better than that of Zoledronic Acid (Fig. 5B), a bisphosphonate in clinical use for treating bone metastasis and osteoporosis. Other, shorter QLVAG-containing peptides such as GM30 and AY11 also displayed moderate CTSB- and osteoclast-suppressing activity. In contrast, peptides of various lengths lacking the QLVAG fragment failed to inhibit CTSB or osteoclastogenesis (Fig. 5A, B). We further assessed the in vivo efficacy of the peptides to inhibit bone metastasis of SCP2 cells in mice. Both GQ86 and DQ51 significantly inhibited bone metastasis and osteoclast maturation at a treatment dose of 1 mg/kg, with efficacy similar to that of full-length CST6 (Fig. 5C-E). The peptide treatment also restored the body weight and extended the survival of the animals (Fig. 5F, G). We further compared the peptides with two clinical drugs for osteolytic bone lesions, Zoledronic Acid and Bortezomib. The data showed that the efficacy of the peptides was similar to that of both drugs in inhibiting bone metastasis and extending animal survival (Fig. 5H, I, and Fig. S5A). In addition, the peptides, but not Bortezomib, recovered the body weight of mice that had been reduced due to metastasis (Fig. S5B), indicating less toxicity of the peptides than of Bortezomib. 
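The QLVAG-anchored design above can be mimicked computationally; a minimal sketch that extracts motif-containing windows of the tested lengths. The 149-aa CST6 sequence is not reproduced here, so a placeholder stand-in is used; only the QLVAG motif and the lengths of AY11, GM30, DQ51 and GQ86 come from the text:

CST6_SEQ = "M" + "A" * 60 + "QLVAG" + "K" * 83  # hypothetical 149-aa stand-in

def motif_windows(seq, motif="QLVAG", lengths=(11, 30, 51, 86)):
    # Return subsequences of the requested lengths centered on the motif,
    # clipped to the sequence boundaries.
    i = seq.find(motif)
    if i < 0:
        raise ValueError("motif not found")
    windows = {}
    for n in lengths:
        start = max(0, min(i + len(motif) // 2 - n // 2, len(seq) - n))
        windows[n] = seq[start:start + n]
    return windows

for n, pep in motif_windows(CST6_SEQ).items():
    assert "QLVAG" in pep  # every candidate keeps the conserved fragment
    print(n, pep)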
Hematological analysis also showed that the treated mice appeared largely normal (Fig. 7). These data indicate the drug safety of CST6 and the related peptides, especially DQ51, for treating osteolytic bone metastasis. Discussion Epigenetic silencing of CST6 has been widely observed in cancers [15,28,29], and it has long been implicated as a tumor suppressor. Indeed, previous studies have solidly demonstrated that CST6 suppresses the proliferation, survival [12,30,31] and metastasis of cancer cells [15]. The functional role of CST6, as well as its relatively small size and secretory nature, argues for the clinical potential of this protein in cancer treatment. However, the molecular mechanism of CST6 remained elusive, hindering the rational design of CST6-based therapeutics. Here, our study elucidated a CST6-CTSB-SPHK1-p38 signaling axis that regulates osteoclastogenesis and bone metastasis. We showed that CST6 targets CTSB to inhibit its enzymatic activity and stabilizes SPHK1 in osteoclasts. SPHK1 suppresses p38 activation and osteoclastogenesis, leading to disruption of the osteolytic vicious cycle of bone metastasis. Interestingly, our data showed that although the W135A mutation, substituting the tryptophan within the C-terminal hairpin loop of CST6 [23-26], diminished its inhibitory effect on CTSB, several CST6 peptides without the loop fragment remained CTSB inhibitors. These results suggest that the C-terminal loop may not be required for CST6 binding to CTSB. Instead, the W135A mutation might disrupt the protein structure and interfere with CST6-CTSB binding. The exact role of this C-terminal loop, as well as that of the QLVAG domain, in CTSB inhibition is worthy of further investigation. In addition, it is not yet known how CST6-stabilized SPHK1 regulates p38 phosphorylation. Nevertheless, our study revealed a new role of CST6 in regulating the tumor stroma instead of the cancer cells. Importantly, extracellular CST6 protein can be internalized by osteoclasts through endocytosis to inhibit CTSB activity, and the QLVAG fragment of the protein appears crucial for this inhibitory effect. These findings validate the potential of CST6-based approaches to treat bone metastasis and provide a rationale for the screening of CST6 peptides as drug candidates. Peptide drugs have demonstrated great potential in clinical application due to their advantages in potency, selectivity and low toxicity. Numerous peptide drugs, such as insulin, oxytocin, gonadotropin-releasing hormone, vasopressin and PTHrP, have been successful in the pharmaceutical market [32]. Our study also demonstrated the potential of several QLVAG-containing peptides to inhibit osteoclastogenesis. More importantly, the CST6 recombinant protein and peptides showed promising effects in treating in vivo bone metastasis of breast cancer. Although we could not directly compare the peptides with the clinical drug Denosumab in the animal model, because the humanized antibody does not target bone resorption in mice [33,34], the efficacy of the peptides is at least comparable to that of two other clinical drugs, Zoledronic Acid and Bortezomib. In addition, our preliminary pharmacological data demonstrate that the MTD of these peptides in mice is significantly higher than the effective dose to inhibit bone metastasis and improve overall survival, indicating a broad therapeutic window for these candidate drugs. Long-term treatment with the peptides caused no apparent abnormality in the mice. Pharmacological evaluation of CST6 protein and peptides We then performed a pharmacological evaluation of the CST6 protein and peptides. Immunohistochemistry analyses demonstrated that the administered protein was mainly distributed in the liver, kidney, intestine, spleen and bone metastasis foci of the mice (Fig. S6A). DQ51 displayed a longer plasma half-life than GQ86 and CST6 in the mice (Fig. 6A and Table S2). Acute toxicity analysis also showed that the maximum tolerated dose (MTD) of DQ51 (250 mg/kg) was slightly higher than that of GQ86 and CST6 (200 mg/kg). Importantly, the MTD and median lethal dose (LD50, 322.00-360.10 mg/kg) of these candidate drugs (Table S3) were all significantly higher than the dose (1 mg/kg) that effectively inhibited bone metastasis. Chronic toxicity assessment by daily intravenous administration of these peptides at a dose of 1 mg/kg revealed no apparent abnormality in the animals after 4 weeks. No differences were observed in the weights of the whole body (Fig. 6B) or of various organs including heart, liver, spleen, lung, kidney, adrenal gland, thymus, ovary and brain (Table S4) after the treatment. Histological examination of these organs revealed no abnormality either (Fig. 6C). 
In contrast, the chemical inhibitor CA-074, although also targeting CTSB and demonstrating metastasis-inhibiting efficacy [35], caused apparent body weight loss in the mice (Fig. S6B), indicating a superior potential of peptide inhibitors of CTSB for metastasis treatment. However, plasma stability and oral bioavailability are still the main obstacles for the development of peptide drugs. Our data indicated that shorter peptides appeared more stable without compromising efficacy, thus providing the possibility of further optimization of the peptides by sequence screening and peptide modification. In addition, drug delivery vehicles could also be considered for controlled release of the peptides. Although bisphosphonates and Denosumab show a positive impact on delaying metastasis progression, the effects of these osteoclast-targeting drugs on improving long-term patient survival have been minimal [36]. In addition, resistance to these drugs also occurs. As the CST6 protein and peptides regulate osteoclastogenesis through a mechanism different from these clinical drugs, they represent promising candidates to supplement the current therapeutics of bone diseases.

Plasmids and reagents
The full length human CST6 was cloned into the pLVX-puro vector (Clontech) for overexpression. Murine Sphk1 was amplified from RAW264.7 cDNA and cloned into the pMSCV (Clontech) vector for overexpression. For recombinant protein or peptide production in E. coli, full length or truncated CST6 sequences were subcloned into pET-28a (Novagen) with a C-terminal 6× His tag. The short hairpin RNAs (shRNAs) were cloned into the pLKO.1-puro vector (Addgene) for CST6 or the pSuper-puro vector (Oligoengine) for Sphk1. The sequences of the shRNAs for CST6 and Sphk1, the siRNAs for Ctsb and Sphk1, and the qPCR primers for Calcr, Nfatc1, Acp5, Ctsk and Fos are provided in Table S5. The CST6 mutants N64A (CST6ΔN) and W135A (CST6ΔW) were constructed as previously described [17].

Fluorometric enzymatic assays
Enzymatic activities of cathepsins were analyzed with the fluorometric kits for CTSB (K147, Biovision) and CTSL (K161, Biovision) according to the manufacturer's protocol. The LGMN activity was analyzed as previously described [37] with minor modifications of the protocol. Briefly, the cells were harvested and kept in lysis buffer (100 mM sodium citrate, 1 mM disodium EDTA, 1% n-octyl-beta-D-glucopyranoside, pH 5.8). After three freeze-thaw cycles at -80 °C, the cell lysate was centrifuged at 10,000 g for 15 min and the supernatant was transferred into a new tube, followed by protein quantification. 10 μg cell lysate in assay buffer (39.5 mM citric acid, 121 mM Na2HPO4, 1 mM DTT, 1 mM EDTA, and 0.1% CHAPS, pH 5.8) in a total volume of 100 µL was added to the wells of a flat-bottom 96-well microplate (3631, Corning). After adding 1 µL of the substrate Z-Ala-Ala-Asn-AMC (I1865, Bachem) into each well at a final concentration of 10 μM, fluorescence measurements were performed every 5 min at 37 °C using a BioTek Synergy™ Mx Microplate Reader with a 360 nm excitation filter and a 460 nm emission filter. All measurements were performed in triplicate.

Production of recombinant proteins and peptides
CST6 wild type and mutant proteins, as well as the peptides longer than 30 amino acids, were expressed in BL21 (DE3) competent E. coli cells. The cells were transformed with the pET-28a plasmids expressing the protein or peptides, and then a single colony was transferred into 1 L of LB liquid medium with 100 μg/ml kanamycin.
The cells were shaken at 37 °C and 200 rpm until the absorbance at 600 nm reached 0.6, followed by addition of 1 mM IPTG and incubation at 16 °C for another 12 hours. The cells were harvested by centrifugation at 5000 rpm for 15 min and the pellet was resuspended in lysis buffer (50 mM NaH2PO4, 0.3 M NaCl, pH 8.0). After treatment with 1 mg/mL lysozyme, the cells were sonicated for 60 s and the lysate was centrifuged at 12,000 g for 30 min. The clear supernatant was collected and filtered with a 0.45-μm syringe filter. The purification was performed with the HisSep Ni-NTA 6FF Chromatography Column (20504, Yeasen Biotech) according to the manufacturer's instruction. The target proteins were visualized by Western blot analysis and Coomassie staining. Peptides of 30 amino acids or shorter were chemically synthesized and purchased from Ontores Biotech (GM30) and GL Biochem (AY11 and DR9). All peptides were obtained with a purity > 95%.

Osteoclastogenesis assays
Primary bone marrow cells were harvested by flushing the cavities of the tibia and femur of 5-6 week old Balb/c mice. Erythrocytes were removed by treating the cell suspension with Red Blood Cell Lysing Buffer (R7757, Sigma) for 5 min. The remaining cells were cultured in α-MEM (A1049001, Gibco) supplemented with 10% FBS and 5 ng/ml M-CSF for 12 h. Non-adherent cells were transferred into a 24-well plate and cultured in α-MEM supplemented with 10% FBS, 25 ng/ml M-CSF and 25 ng/ml RANKL for another 5 to 6 days. Cancer cell CM was mixed with α-MEM at a ratio of 1:3. CST6 protein, peptides or other drugs at the indicated concentrations were administered into the culture medium. For assessment of osteoclast differentiation, cells were stained using the Acid Phosphatase TRAP staining kit (387A, Sigma) as previously described [38]. For the RAW264.7 osteoclastogenesis assay, 700 cells per well were added into a 96-well plate and cultured in the same medium as above, except that M-CSF was not essential. 7 µg/ml CST6 antibody (MAB1286, R&D) was used to neutralize recombinant human CST6 in the RAW264.7 osteoclastogenesis assay.

Mouse experiments of bone metastasis
The bone metastasis study was performed as previously described [38]. Briefly, athymic Balb/c mice were anesthetized and 10⁵ cells in 100 µL phosphate-buffered saline (PBS) were injected into the left cardiac ventricle. Peptides were administered intravenously into the mice (n = 10 per group) at a dose of 1 mg/kg/day. The control group was administered the same volume of peptide solvent. The metastatic burden was measured by bioluminescent imaging (BLI) with a NightOWL II LB983 Imaging System (Berthold). The osteolytic area was analyzed by the SkyScan 1076 In vivo X-ray Microtomograph (Bruker). Sections of the hind legs of the mice were used for H&E, TRAP and immunohistochemistry staining.

Pharmacokinetic and toxicity assays
For pharmacokinetic assays, CST6 protein or His-tagged peptides were administered intravenously into Balb/c mice (n = 4 per group) at a dose of 1 mg/kg. Blood samples (100 µL) were collected using capillaries at 0.25 h, 0.5 h, 1 h, 2 h, 3 h and 4 h, respectively. After 1 h at room temperature, serum samples were obtained from the supernatant by centrifugation at 3000 rpm for 15 min. The CST6 content in serum was analyzed with a CST6 ELISA kit (Sino Biological, 10438). The peptide content was analyzed with a His-tag ELISA kit (GenScript, L00436) according to the manufacturer's instruction.
Blank serum samples from mice without protein/peptide administration were used as controls. Standard curves for each protein or peptide were generated with standard samples of known concentrations to calculate the protein/peptide concentration in blood samples. The data for all peptides were fitted with the one-phase decay model using Prism software (GraphPad Software) to calculate the half-lives. For acute toxicity assays, Balb/c mice (n = 5 per group) were intravenously injected with the indicated protein or peptides at doses of 150, 200, 250, 300 or 350 mg/kg. Mouse deaths were counted after 24 h. The median lethal dose (LD50) was calculated using the Bliss method. Acute toxicity assays were repeated three times, and all analyses showed similar results. For chronic toxicity assays, Balb/c mice (n = 4 per group) were treated with daily intravenous injection of the indicated protein or peptides at a dose of 1 mg/kg/day for 4 weeks, or with 10 mg/kg/day CA-074 for 8 days. The body weights of the mice were measured. After 4 weeks of CST6 peptide treatment, blood samples (50 µL) were collected, immediately diluted with PBS containing 5 mM EDTA to 100 µL, and analyzed by an Auto Hematology Analyzer (Mindray, BC-2800 Vet). Mice were sacrificed and organs were harvested for weight measurements and H&E staining to observe pathological changes.

Statistical analysis
BLI curves of in vivo bone metastasis were compared by the two-tailed nonparametric Mann-Whitney test without assumption of Gaussian distribution. The log-rank test was performed for survival analyses of the mice. A two-tailed independent Student's t-test without assumption of equal variance was performed to analyze the in vitro and in vivo osteoclastogenesis, as well as other assays. P values less than 0.05 were considered significant.
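As an aside, the one-phase decay fit used above to derive plasma half-lives is straightforward to reproduce. The following is a minimal Python sketch of the same model, C(t) = C0·e^(−kt) with t1/2 = ln(2)/k; the time points match the sampling schedule in the Methods, but the concentration values are invented placeholders (the study itself used GraphPad Prism).

```python
# Minimal sketch of a one-phase decay fit for plasma half-life.
# Concentration values below are illustrative placeholders, not study data.
import numpy as np
from scipy.optimize import curve_fit

def one_phase_decay(t, c0, k):
    """Mono-exponential elimination: C(t) = C0 * exp(-k * t)."""
    return c0 * np.exp(-k * t)

t = np.array([0.25, 0.5, 1.0, 2.0, 3.0, 4.0])               # sampling times (h)
conc = np.array([820.0, 610.0, 360.0, 130.0, 48.0, 18.0])   # ng/mL (hypothetical)

(c0, k), _ = curve_fit(one_phase_decay, t, conc, p0=(1000.0, 1.0))
half_life = np.log(2) / k  # first-order kinetics: t1/2 = ln(2)/k

print(f"C0 = {c0:.0f} ng/mL, k = {k:.2f} 1/h, t1/2 = {half_life:.2f} h")
```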
Exposure to welding fumes is associated with hypomethylation of the F2RL3 gene: a cardiovascular disease marker

Background Welders are at risk for cardiovascular disease. Recent studies linked tobacco smoke exposure to hypomethylation of the F2RL3 (coagulation factor II (thrombin) receptor-like 3) gene, a marker for cardiovascular disease prognosis and mortality. However, whether welding fumes cause hypomethylation of F2RL3 remains unknown.
Methods We investigated 101 welders (median span of working as a welder: 7 years) and 127 unexposed controls (non-welders with no obvious exposure to respirable dust at work), age range 23-60 years, all currently non-smoking, in Sweden. The participants were interviewed about their work history, lifestyle factors and diseases. Personal sampling of respirable dust was performed for the welders. DNA methylation of F2RL3 in blood was assessed by pyrosequencing of four CpG sites, CpG_2 (corresponds to cg03636183) to CpG_5, in F2RL3. Multivariable linear regression analysis was used to assess the association between exposure to welding fumes and F2RL3 methylation.
Results Welders had 2.6% lower methylation of CpG_5 than controls (p<0.001). Higher concentrations of measured respirable dust among the welders were associated with hypomethylation of CpG_2, CpG_4 and CpG_5 (β=−0.49 to −1.4, p<0.012); p<0.029 adjusted for age, previous smoking, passive smoking, education, current residence and respirator use. Increasing the number of years working as a welder was associated with hypomethylation of CpG_4 (linear regression analysis, β=−0.11, p=0.039, adjusted for previous smoking). Previous tobacco smokers had 1.5-4.7% (p<0.014) lower methylation of 3 of the 4 CpG sites in F2RL3 (CpG_2, CpG_4 and CpG_5) compared to never-smokers. A non-significant lower risk of cardiovascular disease with more methylation was observed for all CpG sites.
Conclusions Welding fume exposure and previous smoking were associated with F2RL3 hypomethylation. This finding links low-to-moderate exposure to welding fumes to adverse effects on the cardiovascular system, and suggests a potential mechanistic pathway for this link, via epigenetic effects on F2RL3 expression.

INTRODUCTION
Exposure to welding fumes is associated with an increased risk of cardiovascular disease (CVD); the standardised incidence ratio for acute myocardial infarction was 1.12 (95% CI 1.01 to 1.24) in a Danish prospective study of welders followed until 2006, 1 and the standardised mortality ratio for ischaemic heart disease was 1.35 (95% CI 1.1 to 1.6) in a Swedish study of welders followed until 1995. 2 Despite these observational data, the mechanisms coupling exposure to welding fumes with harmful cardiovascular events remain unclear. Low DNA methylation, so-called hypomethylation, of the F2RL3 gene in blood is a predictor of mortality from CVDs and cancers. 3 F2RL3 hypomethylation can also serve as a marker of CVD prognosis. 4 Recent studies have found that tobacco smoking, a strong risk factor for CVD, is associated with hypomethylation of F2RL3. [5][6][7][8] Breitling et al 5 first reported in a population-based epidemiological study that cg03636183 of F2RL3 (CpG_2) was significantly hypomethylated in smokers. Later studies have shown that, along with CpG_2, CpG_4 and CpG_5 can serve as biomarkers for current and lifetime smoking, 9 and as strong predictors of CVD-associated mortality.
3 F2RL3 is expressed in different cell types, including circulating leucocytes, 10 and encodes thrombin protease-activated receptor 4 (PAR-4), a cell surface protein. 11 PAR-4 functions in blood coagulation 12 and activation of PAR-4 is important for multiple aspects of immune function, including recruitment of leucocytes, modulation of rolling and adherence of neutrophils and eosinophils, as well as regulation of vascular endothelial cell activity. 10 12-14 These physiological events also occur as early steps in inflammatory reactions in the vascular system, 10 12 15 and moreover, these events have been observed to occur more frequently in smokers and partly in relation to welding particle exposures. [16][17][18][19] We hypothesised that welding fume exposure is associated with hypomethylation of F2RL3, and we analysed the methylation status of F2RL3 in a group of welders and non-exposed controls.

What this paper adds
▸ Exposure to welding fumes has been reported to be associated with increased risk of cardiovascular disease. Tobacco smoke exposure has recently been linked to hypomethylation of the F2RL3 gene, a marker for mortality and cardiovascular disease prognosis. Both welding fumes and tobacco smoke consist mainly of ultrafine condensation particles.
▸ We investigated F2RL3 methylation by pyrosequencing in welders and controls.
▸ We found that exposure to welding fumes was associated with F2RL3 hypomethylation, suggesting evidence that low-to-moderate exposure to welding fumes may have adverse effects on the cardiovascular system via epigenetic modifications.

MATERIALS AND METHODS
Study population
We enrolled 101 welders from 10 different companies in southern Sweden. The companies were medium sized and produced heavy vehicles, lifting tables, stoves, heating boilers and pumps, and equipment for the mining industry. Detailed characteristics of 8 of the 10 companies, which employed 83 welders, were recently described. 20 Furthermore, we enrolled 127 controls from seven different companies: participants from six companies were 'blue-collar' workers with the routine task of organising grocery goods, and participants from the seventh company were working as gardeners; these unexposed control workers had no obvious exposure to respirable dust at the workplace. All study participants were male and currently non-smokers. Among the previous smokers, one reported smoking in the past 12 months; the remaining previous smokers had stopped smoking at least 1 year before study enrolment.
Structured questionnaire-based interviews were carried out by a trained nurse to obtain information about age, height and weight (all coded as continuous variables), ethnicity (participants' and their parents' nationality), education (five categories: primary school, high school, professional school, university <3 years, university >3 years of education), medical history, personal and family disease history (cancer and CVDs), diet (frequency of intake of fruit, vegetables and fish), physical activity (four levels, from sedentary to regular exercise), previous smoking history (yes or no; if yes, year of start and end), passive smoking (at home and/or at work), alcohol consumption (wine or other alcohol consumption, with six different levels each), current residence, having a wood burning stove or boiler at home, exposure to wood smoke from the neighbourhood, exposure to traffic (traffic intensity around the residence and time spent in traffic every day), working environment, occupational history, and hobbies with exposure to smoke (eg, working with car engines). The participants were asked whether they had had myocardial infarction, angina pectoris, hypertension, stroke, thrombosis or other CVDs diagnosed by a physician. All study participants answered the same questionnaire, apart from questions regarding work tasks that differed between welders and controls. Venous blood samples were collected from the participants. This study was approved by the Regional Ethical Committee of Lund University, Sweden, and all study participants gave their informed written consent to take part in the study.

Occupational exposure assessment
Monitoring of exposure
Respirable dust was measured once in each of the welding companies and samples were collected in the breathing zones of welders. For welders who were wearing powered air purifying respirators (PAPRs), air outside the PAPRs was sampled. Exposure to respirable dust was measured by air sampling on pre-weighed 37 mm mixed cellulose ester filters (0.8 µm pore size) fitted in leak-free cassettes (Sure-Seal). Respirable dust cyclone air samplers of nickel-plated aluminium (BGIL4, BGI Inc) were attached to the filter cassettes. Battery powered sampling pumps (MSA Escort Elf) were operated at a flow rate of 2.2 L/min. The airflow was checked before, during and after the sampling with a primary calibrator (TSI Model 4199, TSI Inc). Most of the air sampling was performed during full-shift work; the average sampling time was 6.8 h (range 2.4-8.6 h). The filter samples were analysed gravimetrically for respirable dust according to a certified method. The limit of detection (LOD) was 0.05 mg/sample. Parallel measurements of respirable dust were performed to assess the workplace protection factor for the PAPRs. A setup consisting of two parallel sampling systems for respirable dust was used: one for sampling inside and one for sampling outside the PAPR, on the shoulder in the breathing zone. Parallel samplings were performed on three workers at different companies, and the respirable dust concentrations were at least three times lower inside the PAPRs compared with the concentrations outside in the breathing zones. 20 The exposure to respirable dust was also measured by personal sampling in the breathing zone for 19 workers from two control companies. The average sampling time for the controls was 7.2 h. Stationary measurements of respirable dust were conducted in four other control companies with a direct-reading instrument (Sidepak Model AM510, TSI Inc).
In these six companies, the particle number concentrations (size range 20-1000 nm) were also measured with direct-reading, stationary instruments (P-Trak, TSI Inc).

Exposure assessment of welding fumes
Respirable dust was measured for 53 of the 101 welders. For the remaining 48 welders without measurements of respirable dust, their exposure to welding fumes was assessed from the exposure data of the 53 welders mentioned above and of 17 welders working with similar tasks at the same companies but not included in the study, as well as exposure data from a previous study. 19 We excluded two participants from the exposure assessment: one welder reported that he worked only with soldering/brazing, which seemed unreasonable, and one welder's information about the local exhaust ventilation was missing, making it difficult to correctly estimate his exposure. After excluding these two participants, the total number of welders with data on estimated respirable dust exposure was n=99. In order to calculate the exposure to respirable dust for welders with PAPRs, the respirable dust concentrations were reduced by a correction factor of three to get a better estimate of the exposure inside the PAPRs. The correction factor was based on the results from our parallel respirable dust measurements described above, and on literature data on the workplace protection factor of PAPRs. [20][21][22][23]

Analysis of DNA methylation
DNA was isolated from venous blood with the QIAamp DNA Blood Midi kit (Qiagen, catalogue nr 51183). The DNA quality was evaluated on a NanoDrop spectrophotometer (Thermo Scientific, NanoDrop 1000) and the DNA showed good quality (260/280 nm >1.80). DNA was bisulfite treated with the EZ DNA Methylation kit (Zymo Research, catalogue nr D5008). Pyrosequencing assays were designed to quantify the percentage of methylation of F2RL3 at CpG sites that have previously been linked to tobacco smoke. 5 F2RL3 is on chromosome 19 and has two exons. The cg03636183 site (number based on Illumina 27 K and 450 K beadchips) is located on a CpG island (http://www.ncbi.nlm.nih.gov/epigenomics/genome/GCF_000001405.13/gene:9002/) in exon two. The cg03636183 site (labelled CpG_2), based on previous studies, 3 9 24 and three other CpG sites downstream (labelled CpG_3 to CpG_5) within a total length of 70 bp were analysed. Two pyrosequencing assays covered these four sites: the first assay (amplicon length of 235 nucleotides) encompassed one site and the second assay (amplicon length of 341 nucleotides) encompassed three sites (see online supplementary table S1). The assays were designed with PyroMark Assay Design 2.0 software (Qiagen). The forward primers were biotinylated. PCR was performed using PyroMark PCR reagents (Qiagen, catalogue nr 972807). The PCR product was purified using Streptavidin Sepharose High Performance beads (Amersham Biosciences, catalogue nr 17-5113-01). Pyrosequencing was carried out using the PSQ HS96 Pyrosequencing System (Qiagen). We repeated 21% (N=48) of the samples and found coefficients of variation (VC) of 5.4%, 3.5%, 2.5% and 3.3% for CpG_2 to CpG_5, respectively. Negative controls were included in each run.

C reactive protein and serum amyloid A measurements
C reactive protein (CRP) was measured in plasma by immunoturbidimetry, and serum amyloid A (SAA) was measured in serum by immunonephelometry at the Department of Clinical Chemistry in Lund University Hospital using standard protocols. 25
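To make the gravimetric exposure assessment concrete, the sketch below converts a filter's mass gain into a respirable dust concentration from the stated pump flow rate (2.2 L/min) and sampling time, and applies the factor-of-three PAPR correction described above. The function name and all input values are hypothetical illustrations, not study data.

```python
# Sketch of the gravimetric exposure calculation: concentration equals the
# dust mass collected on the filter divided by the sampled air volume.
FLOW_L_PER_MIN = 2.2     # pump flow rate stated in the Methods
PAPR_FACTOR = 3.0        # workplace protection factor used for PAPR wearers

def respirable_dust_mg_m3(filter_mass_mg, minutes, wore_papr=False):
    """Respirable dust concentration (mg/m3) from filter mass gain."""
    volume_m3 = FLOW_L_PER_MIN * minutes / 1000.0  # 1 m3 = 1000 L
    concentration = filter_mass_mg / volume_m3
    # Sampling was done outside the PAPR, so the inside-mask exposure
    # is estimated by dividing by the protection factor.
    return concentration / PAPR_FACTOR if wore_papr else concentration

# Example: 1.1 mg collected over a 6.8 h (408 min) shift.
print(round(respirable_dust_mg_m3(1.1, 408), 2))                  # ~1.23 mg/m3
print(round(respirable_dust_mg_m3(1.1, 408, wore_papr=True), 2))  # ~0.41 mg/m3
```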
Statistical analyses
Differences in characteristics and F2RL3 methylation percentages between welders and controls were compared by the Mann-Whitney U test for continuous variables and Fisher's exact test for categorical variables. General linear models were employed to explore the main research hypothesis, that is, that working as a welder is associated with F2RL3 methylation, as well as that measured/calculated respirable dust exposure and years working as a welder are associated with F2RL3 methylation. In addition, the general linear model was employed to evaluate associations between previous smoking and F2RL3 methylation. Associations between F2RL3 methylation and CVD were investigated by logistic regression. The methylation levels observed at CpG_4 were not normally distributed when combining welders and controls; however, they were normally distributed when analysing welders only. We performed unadjusted, adjusted and full model analyses when investigating the associations between welding and F2RL3 methylation. Adjusted analyses included age (continuous) and previous smoking (yes or no), as these two variables have been reported to be associated with F2RL3 methylation. For identification of additional factors to be included in the full model, we considered variables from both the published literature and our own data, including individual characteristics, diet, lifestyle, passive smoking and disease history. These variables were tested individually against our main hypothesis, that is, the difference in F2RL3 methylation between welders and controls. Any variable altering the effect estimate by more than 10% for at least two of the four CpG sites was included in the full model. These additional variables were education (low or high), passive smoking (yes or no) and current residence (big city or others). To be consistent, we included the same variables in the full model investigating the associations between welding and methylation, and when investigating the association between previous smoking and methylation. When investigating the association between F2RL3 methylation and CVD, adjusted analyses included age and body mass index (BMI; continuous). Other possible confounders were considered, but none altered the effect estimate by more than 10% in the associations between F2RL3 methylation and CVD. For the analysis of measured respirable dust versus methylation, the use of a respirator was also considered. When the number of years working as a welder was evaluated, age was not included in the adjusted analysis and full model, as age and years worked were highly correlated (r=0.75), and including both of them in the same model increased the SE by more than 50% (eg, from 0.032 to 0.049). All statistical analyses were performed using SPSS V. 22.0 (SPSS Inc) and statistical significance refers to p<0.05 (two-tailed).

RESULTS
Welders were exposed to average levels of 1.2 mg/m³ (range 0.1-19.3 mg/m³) of respirable dust, whereas all the control subjects measured had exposure to respirable dust lower than 0.2 mg/m³. Characteristics of the study subjects, including methylation of F2RL3, are presented in table 1. Welders and control subjects did not differ in age, BMI, ethnicity, reported CVD, family history of CVD, number of individuals who reported previous smoking, or hobbies with exposure to dust, gases and fumes. Furthermore, welders and control subjects did not differ in intake of vegetables, fruit or fish, use of snus, consumption of wine, or time spent in traffic (not in table 1).
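For orientation, the "full model" regression specified in the Statistical analyses section (methylation = intercept + occupation + age + previous smoking + passive smoking + education + current residence) can be sketched in a few lines outside SPSS. The data frame below is synthetic, seeded with the welder effect reported for CpG_5 in the Results; all column names are hypothetical.

```python
# Sketch of the full general linear model for F2RL3 methylation.
# Synthetic data only: the simulated welder effect (-2.6%) mirrors the
# CpG_5 difference reported in the Results; everything else is invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 228  # 101 welders + 127 controls
df = pd.DataFrame({
    "welder": rng.integers(0, 2, n),
    "age": rng.integers(23, 61, n),
    "previous_smoking": rng.integers(0, 2, n),
    "passive_smoking": rng.integers(0, 2, n),
    "low_education": rng.integers(0, 2, n),
    "big_city": rng.integers(0, 2, n),
})
df["cpg5_methylation"] = (
    80.0 - 2.6 * df["welder"] - 1.5 * df["previous_smoking"]
    - 0.05 * df["age"] + rng.normal(0.0, 2.0, n)
)

full_model = smf.ols(
    "cpg5_methylation ~ welder + age + previous_smoking"
    " + passive_smoking + low_education + big_city",
    data=df,
).fit()
print(full_model.params["welder"])  # beta for welder vs control, ~ -2.6
```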
The welders were more likely to have shorter education, live in smaller cities and have higher exposure to passive smoking and to respirable dust compared to controls. CpG_2, CpG_4 and CpG_5 were significantly correlated with each other (Pearson, r=0.24-0.64). Welders had a significantly lower methylation of CpG_5 and higher methylation of CpG_3 than controls (table 1). In linear regression analysis, welders showed a significantly lower methylation of CpG_5 compared to controls, also after adjustment for age and previous smoking (table 2). However, after additional adjustments for passive smoking, education and current residence, the association became non-significant (p=0.061). Compared to controls, welders had higher methylation of CpG_2 (only in the full model) and CpG_3 (table 2). Measured (N=53 welders) and calculated (N=99) concentrations of respirable dust were associated with DNA methylation (table 3A): higher concentrations of respirable dust (both measured and calculated) were significantly associated with hypomethylation of CpG_2 and CpG_4 (figure 1) and (only measured) CpG_5. The effect estimates were in general higher for the measured values than for the calculated values. Adjustments for age, previous smoking, passive smoking, education, current residence and respirator use marginally changed the estimates, but not the statistical significance, except for calculated respirable dust versus CpG_2 (table 3A). Furthermore, we analysed the association between years working as a welder and methylation of F2RL3. Methylation of CpG_4 was inversely associated with working years (β=−0.11, p=0.039, adjusted for previous smoking; table 3B), but this association became non-significant in the full model (β=−0.089, p=0.096). When age was included in the full model, the associations became statistically non-significant, which is most likely due to overadjustment, as age and working years as a welder were highly correlated (r=0.75). Both age and working years as a welder were associated with CpG_4 methylation. However, when age and working years were included in the same model, both of them became non-significant, which indicates that the model was suffering from collinearity. In order to distinguish the effect of working years as a welder from the general effect of age on F2RL3 methylation, we investigated the association between age and CpG_4 in the controls as well. The effect estimate was 23% lower than in the welders, and the p value was rather close to the significance threshold (data not shown). The difference in the effect of age between welders and controls suggests that the welders may be influenced by other factors strongly correlated with age, such as working years as a welder. Moreover, in another sensitivity analysis, we also adjusted the models presented in tables 2 and 3A, B for CRP and SAA as markers of acute inflammatory response, but these adjustments did not affect the associations substantially (data not shown). We also investigated the association between previous smoking and F2RL3 methylation, and between F2RL3 methylation and CVD (table 4A, B). Previous smokers had significantly lower methylation of CpG_2 (1.5%), CpG_4 (4.7%) and CpG_5 (2.1%) compared to never-smokers. These associations remained significant after adjustments. We found that previous smokers had lower methylation compared with never-smokers both among welders and among non-exposed controls; for example, previous smokers showed a 1.6% lower methylation of CpG_2 compared to non-smokers in both groups.
For CpG_4, previous smokers showed a 4.0% and 5.3% lower methylation in welders and controls, respectively (data not shown). We also performed logistic regression analysis between CpG methylation and self-reported CVD. None of the CpG sites were significantly associated with a history of CVD, although a non-significantly lower risk with more methylation was observed for all CpG sites.

DISCUSSION
This study shows that exposure to welding fumes, similar to tobacco smoke, is associated with hypomethylation of F2RL3, a risk marker for CVD. The effects on F2RL3 methylation were found at low-to-moderate levels of exposure to welding fumes and suggest that welders might be at risk for CVD, despite precautions taken at lower exposure levels. The welders we studied had a median exposure level of 1.2 mg/m³ welding fumes over a shift, and the occupational exposure limit (8 h time weighted average) set by the Swedish Work Environment Authority is 5 mg/m³. 26 The fact that welding fumes and tobacco smoke were both linked to hypomethylation of F2RL3 suggests that a common factor may cause the observed associations. One common factor is ultrafine particles, and indeed we found strong associations between individual exposure to respirable dust, which in a welding setting predominantly consists of ultrafine particles, and hypomethylation. However, it should be mentioned that other common factors, such as different metals present in both tobacco smoke and welding fumes, may also cause the association with F2RL3 methylation. We previously reported that welders are exposed to high levels of respirable manganese from welding fumes. 20 Apart from manganese, iron is also a major component of black steel, and both iron and manganese may adversely affect cardiovascular health. 27 28 F2RL3 is a key gene for the recruitment and behaviour of immune cells and for blood coagulation. 10 12-14 29 Hypomethylation is often linked to increased gene expression, and if this is the case for F2RL3, 7 increased expression of this gene could result in increased inflammation and possibly coagulation. Zhang et al 9 reported that hypomethylation of the CpG_4 site was most strongly associated with smoking behaviour, with an average difference of 5% between previous smokers and never-smokers, which is in accordance with what we found in our study. We also found that hypomethylation of CpG_4 had the strongest association with exposure to respirable dust, indicating that this particular CpG site is a main target for the effect of different types of exposure to small particles. The other CpGs showed less consistent associations with occupational exposure to welding fumes.
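The logistic regression relating self-reported CVD to CpG methylation, adjusted for age and BMI as specified in the Statistical analyses section, can be sketched the same way. Again the data are synthetic: the simulated effect merely points in the direction of the non-significant trend reported above (higher methylation, lower CVD odds), and all names are hypothetical.

```python
# Sketch of the CVD ~ methylation logistic regression (synthetic data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 228
df = pd.DataFrame({
    "methylation": rng.normal(78.0, 3.0, n),  # % methylation at one CpG site
    "age": rng.integers(23, 61, n),
    "bmi": rng.normal(26.0, 3.5, n),
})
# Simulate a weak protective effect of methylation on CVD risk.
logit_p = -2.5 - 0.10 * (df["methylation"] - 78.0) + 0.05 * (df["age"] - 40)
df["cvd"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

fit = smf.logit("cvd ~ methylation + age + bmi", data=df).fit(disp=0)
print(np.exp(fit.params["methylation"]))  # odds ratio per 1% methylation
```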
More studies on other types of occupational and environmental particle exposure and methylation of F2RL3 are warranted to identify whether aberrant methylation of this gene is a common change in response to exposure to ultrafine particles and, further, the effects of this altered methylation on F2RL3 expression and the cardiovascular system. Moreover, this study observed inverse (although non-significant) associations between reported CVD and methylation of sites in F2RL3, despite the fact that the study was rather small and cross-sectional. These findings further support the link between this gene and CVD. Previous publications did not report results for CpG_3, since the methylation of this CpG could not be well characterised by the matrix-assisted laser desorption/ionisation-time-of-flight mass spectrometry (MALDI-TOF) assay used, which showed high technical variability. 3 5 24 However, the pyrosequencing assay we developed identified CpG_3 methylation reliably (VC 3.5% for repeated tests). Surprisingly, we found that welders showed hypermethylation of CpG_3 compared to the non-exposed controls, but we did not find any association of CpG_3 methylation with previous smoking or CVD prevalence. This could indicate that this site is specific for exposure to welding fumes. However, we did not find any association of CpG_3 methylation with welding fume exposure or length of time working as a welder, suggesting that this is unlikely. The significance of the hypermethylation of CpG_3 in welders needs further investigation.

Figure 1 Association between F2RL3 methylation and respirable dust exposure. Scatterplot of F2RL3 CpG_4 methylation (%) versus measured respirable dust in welders (n=53). The DNA methylation was measured using pyrosequencing in DNA isolated from peripheral blood and exposure to respirable dust was measured by air sampling close to the breathing zone on pre-weighed mixed cellulose ester filters. The fit (R²) of the line is 0.31.

A major advantage of this study was the individual measurement of exposure to welding fumes in a large group of workers, and the estimated exposure for the rest of the workers. The higher effect estimates in the analyses restricted to workers whose exposure was individually monitored indicate that this substantially reduced exposure misclassification. A further strength of this study was that the participants were currently non-smokers, as this avoids any masking effect of current smoking on F2RL3 methylation observed in previous studies. 6 9 Indeed, previous smoking was here associated with hypomethylation of CpG_2, CpG_4 and CpG_5, and this was adjusted for in the regression models.
Moreover, we gathered detailed data about other potentially influential factors. However, this study also had some limitations. CpG methylation was measured in DNA isolated from peripheral blood and not from individual cell populations. DNA methylation patterns may vary among different blood cell types, 30 and we cannot rule out the possibility that the observed effect on DNA methylation was due to differences in cell populations. However, we did adjust our data for acute phase response proteins in blood (CRP and SAA), as a proxy for alterations of blood cell composition due to infection or inflammation, but these sensitivity analyses did not change the effect estimates or p values substantially. Metal fume fever, an acute response to exposure to freshly generated and relatively high concentrations (320-580 mg zinc/m³) of metal-rich particles, 31 might potentially influence our results by altering the cell composition in blood. Although the welders in our study, who were exposed to low-to-moderate levels of welding fumes, did not report any fever, and the cytokine levels in welders were mostly similar to the levels in the controls, 25 we cannot rule out the possibility that the epigenetic effects observed were due to subtle differences in the subpopulations of immune cells caused by particle exposure. The study design is cross-sectional, which limits the conclusions with regard to causality of the observed associations. Some workers used protective devices to protect against inhalation of smoke and we could not measure the respirable dust inside the respirators. Therefore, the measured respirable dust could overestimate the true exposure to inhalable particles. Use of protective masks was, however, adjusted for in the models and this did not change the results significantly. Self-reporting of the personal and family history of CVD was also a limitation of our study and might have blurred the association between F2RL3 methylation and history of CVD. There is an issue of multiple comparisons, as there were four different CpG sites in the linear regression analysis and both measured and calculated data on respirable dust. However, the CpG sites were partly correlated (ie, they were not independent), and therefore we did not make corrections for multiple testing.

Patient consent Obtained.
Ethics approval Regional Ethical Committee of Lund University, Sweden.
Provenance and peer review Not commissioned; externally peer reviewed.
A molecular epidemiological investigation of contagious caprine pleuropneumonia in goats and captive Arabian sand gazelle (Gazella marica) in Oman

Background Contagious caprine pleuropneumonia (CCPP) is a fatal, WOAH-listed respiratory disease in small ruminants, with goats as primary hosts, that is caused by Mycoplasma capricolum subspecies capripneumoniae (Mccp). Twelve CCPP outbreaks were investigated in 11 goat herds and a herd of captive Arabian sand gazelle (Gazella marica) in four Omani governorates by clinical, pathological and molecular analysis to compare disease manifestation and Mccp genetic profiles in goats and wild ungulates.
Results The CCPP forms in diseased and necropsied goats varied from peracute (5.8%) and acute (79.2%) to chronic (4.5%), while all of the five necropsied gazelles showed the acute form based on the clinical picture and gross and histopathological evaluation. Colonies of Mccp were recovered from cultured pleural fluid, but not from lung tissue samples, of one gazelle and nine goats, and all the isolates were confirmed by Mccp-specific real time PCR. Whole genome-single nucleotide polymorphism (SNP) analysis was performed on the ten isolates sequenced in this study and twenty sequences retrieved from the Genbank database. The Mccp strains from Oman all clustered in phylogroup A, together with strains from East Africa and one strain from Qatar. A low variability of around 125 SNPs was seen in the investigated Omani isolates from both goats and gazelles, indicating mutual transmission of the pathogen between wildlife and goats.
Conclusion Recent outbreaks of CCPP in Northern Oman are caused by Mccp strains of the East African phylogroup A, which can infect goats and captive gazelles alike. Therefore, wild and captive ungulates should be considered as reservoirs and included in CCPP surveillance measures.
Supplementary Information The online version contains supplementary material available at 10.1186/s12917-024-03969-1.

Background
Contagious caprine pleuropneumonia (CCPP) is a WOAH-listed respiratory disease of small ruminants with goats as primary hosts. It is caused by Mycoplasma capricolum subspecies capripneumoniae (Mccp), a member of the Mycoplasma mycoides cluster [1]. CCPP causes high morbidity and mortality rates that may reach 100% and 80%, respectively [2,3]. The disease results in severe economic losses in many countries of Africa, Asia and the Middle East, including Oman [4][5][6][7]. The characteristic clinical signs in acute and subacute CCPP are fever, respiratory distress and coughing, with postmortem changes confined to the thoracic cavity in the form of unilateral severe pleural effusion and fibrinous pleuropneumonia [3,8].
Mycoplasmal pneumonia has frequently been misdiagnosed as CCPP due to the closely related phenotypic and genomic properties of Mccp and other pathogenic mycoplasmas in ruminants, such as M. mycoides ssp. capri or M. ovipneumoniae [15,16]. In addition to examination of typical clinical and necropsy signs, isolation and molecular identification of Mccp strains are required for confirmation of CCPP. Because Mccp is very fastidious and difficult to grow, PCR is the method of choice, and several PCR assays have been proposed for molecular detection [17][18][19][20]. Molecular typing tools include multi locus sequence typing (MLST) [21], core genome (cg) MLST [22] and single nucleotide polymorphism (SNP)-based whole genome analysis [23].
The current study aimed to conduct a whole genome SNP analysis of the Mccp isolated from several CCPP outbreaks that occurred in goats and captive sand gazelles in Oman between 2019 and 2020. This should allow the reconstruction of epidemiological relationships between isolates from different locations and from wild and domestic hosts. Diagnosis of CCPP was performed and confirmed through gross and histopathological examination, immunohistochemistry (IHC) antigen localization, bacterial culture, and real time PCR.

Case history
A herd of Arabian sand gazelle (Gazella marica) on a private farm in Muscat, Oman, suffered from high mortalities of both adult and young animals in April 2020. The responsible veterinarian reported shortness of breath in some animals before they were found dead; no other clinical signs were evident. Animals were treated with oral antibiotics and vitamins (Keproceryl®, Afrimash, Holland; Tylosin 20%, MUSCATPHARMA, Oman). The animals did not respond to the treatment and 70 out of 120 died within one week from the onset of clinical signs. Five deceased gazelles were submitted to the Central Veterinary Laboratory (CVL), Ministry of Agriculture, Fisheries and Water Resources, Oman, for postmortem examination and laboratory diagnosis. The gazelles had been vaccinated against foot and mouth disease (FMD), peste des petits ruminants (PPR), pasteurellosis, theileriosis, enterotoxaemia, and bluetongue disease.
Eleven further CCPP outbreaks were investigated in goat herds from 2019 to 2020 in the four Northern Omani governorates Muscat, A'Sharqiyah, A'Dakhiliyah, and Al-Batinah South (Fig. 1). The outbreaks appeared in the peracute form, where only sudden death of a few animals without obvious symptoms was reported, and in the acute form, where animals showed signs of dyspnea and nasal discharge, followed by either death or recovery. Thirty-four deceased animals were submitted to the Central Veterinary Laboratory, Ministry of Agriculture, Fisheries and Water Resources, Oman, for postmortem examination and laboratory diagnosis. No data were available about the vaccination routine of the goat farms. All necropsied animals were obtained from private farms and submitted to the CVL by local veterinarians after obtaining oral consent from the owners. An overview of the farms and cases included in this study is given in Additional Table 1.

Clinical signs, pathology, histopathology and immunohistochemistry
The recorded clinical signs and post-mortem and histopathological lesions of the different forms of CCPP in goats and Arabian sand gazelles are summarized in Additional Table 2. In goats (Fig. 2A-D), pulmonary edema was the only recorded lesion in the peracute form of CCPP, evidenced by frothy fluid in the trachea and on the cut sections of the lungs (Fig. 2A). Microscopically, the pulmonary alveoli were filled with a homogenous eosinophilic substance admixed with polymorphonuclear cells. In the acute form, post-mortem examination of the lungs showed unilateral fibrinous pleuropneumonia, a marbled appearance and hepatization of the lungs, hydrothorax, and fibrinous pleuritis (Fig. 2B). Histopathology revealed marked deposition of proteinaceous fibrin material in the pulmonary alveoli, alveolar hemorrhage and edema, interstitial edema, and pulmonary capillary congestion. The majority of the bronchi and bronchioles were filled with fibrin deposits and acute inflammatory cells, mainly neutrophils (Fig. 2D). In the chronic form, unilateral pleural adhesion, lung hepatization and severe hydrothorax were observed upon necropsy (Fig. 2C).
Histologically, multifocal necrotic areas in the lungs and fibrin deposits admixed with aggregations of macrophages in the alveolar spaces and bronchi were observed. Necropsied gazelles (Fig. 3A-C) exhibited an acute form of the disease, evidenced by unilateral pleural adhesion, lung hepatization, and severe hydrothorax. Microscopically, the pulmonary tissues showed bronchioles completely obliterated by neutrophils, macrophages and fibrinous deposits.
Detection of Mccp using immunohistochemistry in goats and gazelles revealed a diffuse positive reaction in the fibrin deposits and inflammatory cells that were obliterating the bronchioles and alveoli. Intense positive staining was detected in the peribronchial lymphoid aggregates as well (Figs. 2E, 3D).

Detection of Mccp DNA in clinical samples
In total, 29 animals (one gazelle and 28 goats) from eleven different herds, all with acute symptoms of CCPP, were examined by means of real time PCR detection of Mccp, with a predetermined cut-off for positivity at a Cq value of 38, in 24 lung, 13 pleural fluid and 5 nasal swab samples (see Additional Table 3). All lung and pleural fluid samples were shown to be Mccp-positive with high bacterial loads, represented by low Cq values with means of 18.9 and 19.8, respectively. On the other hand, in nasal swabs from symptomatic animals, high Cq values (33-45) were detected and only 3 of 5 samples were assigned Mccp-positive.

Isolation of Mccp
Isolation of Mccp was successful only from pleural fluid, but not from lung tissue, of the investigated gazelle carcass, although the bacterial load of the two sample types was similar as determined by quantitative real time PCR. The obtained isolate 20DL0191 presented with typical Mycoplasma colonies on agar plates, with a diameter of 10-50 µm, but without the fried-egg morphology often observed in Mccp isolates (Fig. 4). Nine further Mccp strains were isolated from goats originating from four farms in A'Seeb, Muscat governorate (20DL0058, 20DL0072, 20DL190 and 20DL192), the area of the gazelle farm, and from two farms in two other governorates (20DL0060, 20DL0066, 20DL0070, 20DL0194 and 20DL0195). These isolates were also recovered exclusively from pleural fluid, but not from lung tissue.

Species verification of isolates
The identity of the Mycoplasma isolates from the gazelle and from goats was confirmed by Mccp-specific real time PCR. All DNA extracts from broth culture tested Mccp-positive with Cq values in the range of 18-20.

Molecular genotyping of Mccp isolates
To elucidate the phylogenetic relationship between the isolate from a devastating CCPP outbreak in Arabian sand gazelles and isolates from local outbreaks in nearby goat flocks, as well as to compare these Omani Mccp isolates with recent and historic strains world-wide, whole genome-SNP analysis was performed on the ten isolates sequenced in this study and twenty sequences retrieved from the Genbank database (see Additional Table 4). In total, 3399 SNPs were identified, of which 2991 were assigned as core SNPs that occur in all strains. The inferred tree distributes the sequences into eight clades showing some correlation with geographic origin. The gazelle isolate 20DL0191 clusters together with all nine recent goat isolates from Oman in phylogroup A. The closest relatives are isolates 20DL0058, 20DL0070 and 20DL0072 with less than 20 SNPs distance (Figs. 5 and 6).
They originate from two goat farms in the same region (A'Seeb, Muscat) and from one farm in the neighbouring Bidbid-A'Dakhiliyah governorate. Goat isolates 20DL060, 20DL066, 20DL0194 and 20DL0195 were obtained from animals on one farm in Al-Batinah South governorate and seem to be clonal, with less than 10 SNPs. They group in another sub-clade with isolate 20DL0192 from a farm in A'Seeb, Muscat. Isolate 20DL0190 from another goat farm in A'Seeb, Muscat, is most distantly related to all recent isolates from Oman and forms its own sub-clade, with 93-106 SNPs to the other recent isolates from Oman. The closest known relatives to all isolates from this study are two goat strains from East Africa, ILRI181 and Bagamoyo, whereas another strain from wildlife on the Arabian peninsula (Qatar) also clusters in phylogroup A. Interestingly, two historic goat isolates from Oman (8991, 1986 and C5, 1994) belong to different phylogroups (E and F, respectively) and show 769-807 and 867-904 SNPs with the recent isolates.

Discussion
With an estimated population of 2.1 million, goats are the most abundant livestock and represent an integral part of the farming system in Oman [24]. Contagious caprine pleuropneumonia constitutes a major threat to caprine livestock and thus to the agricultural economy of the country. Recently, an extensive seroepidemiological investigation with risk factor analysis was conducted in 510 small ruminant flocks in Oman, revealing seroprevalences of 28.0% in goat flocks and 13.1% in sheep flocks [5]. The current study was initiated following a devastating outbreak of CCPP in a herd of captive sand gazelles in Muscat governorate. It was designed to compare the infection in local wildlife and goats in terms of disease manifestation, pathology and Mccp genotypes in order to clarify a suspected link between the cases.
The rates of morbidity and mortality in the investigated goat herds varied between 2 and 100% and 1 and 50%, respectively, whereas these rates were 100% and 58% in the sand gazelle herd. The relatively higher impact of CCPP on herd health and survival of animals on the wildlife farm could be attributed to different levels of past exposure to Mccp. It is believed that mortality progressively decreases in endemic herds because CCPP usually develops only in naive animals [25]. The gazelles on Farm A might have been exposed for the first time, explaining the higher mortality. Lignereux et al. also noted a high mortality of about 70% in an affected sand gazelle herd in the UAE [12].
In this study, in the 34 necropsied goats, the CCPP forms varied from peracute (5.8%) and acute (79.2%) to chronic (4.5%) based on the clinical picture and gross and histopathological evaluation. However, all necropsied gazelles, which were collected from the same outbreak, showed a picture of the acute CCPP form. The predominant acute form in all necropsied goats and gazelles was characterized by the pathognomonic unilateral fibrinous pleuropneumonia with pleural effusion, in agreement with the previously described lesions in the literature [12,25]. An intense immunostaining of Mccp was observed in the fibrin deposits and inflammatory cells obliterating both the bronchioles and alveoli, in addition to the peribronchial lymphoid aggregates, in caprine tissues. In comparison, gazelle lung tissues exhibited a less intense
immunostaining of the Mccp antigen in the aggregates of neutrophils and macrophages, in agreement with [9], who reported similar findings in infected wild ungulates. This could be attributed to the rapid onset of the disease in gazelles compared to goats, probably due to a compromised immune response in the former.
In many countries, the presence of CCPP was suspected for a long time, but confirmation has been difficult due to the fastidious nature of the pathogen, which impairs its isolation in pure culture [25]. With the advent of molecular detection by PCR, the diagnostic situation has improved. Using a Taqman-PCR protocol [20], we could detect high Mccp loads in all examined lung and pleural fluid samples from deceased animals. The confirmation of the disease from nasal swabs of clinically affected animals was much less reliable, with high Cq values or negative results in the PCR. Despite the high DNA load in lung and pleural fluid samples, isolation was successful in only 10 of 37 samples. The exclusive isolation of the agent from pleural fluid, but not from lung tissue, was probably due to a stronger contaminating microflora in lung samples, which interfered with mycoplasma cultivation. The long transport of the samples from Oman to Germany with an interrupted cold chain may also have had a negative effect on the isolation frequency.
Our straightforward whole genome-based genotyping approach involved SNP calling as suggested by Loire et al. [23], but without the use of reference genomes. We applied it to the ten Mccp isolates from this study together with a set of strains representing the global distribution of CCPP whose sequences were available at Genbank (in March 2022). The high number of discovered core SNPs (2991) enabled high-resolution typing and robust reconstruction of phylogenetic relationships between closely related strains with a local and/or temporal context, but also between isolates from different countries and continents. The inferred tree with phylogroups A to H reflected the overall topology known from classical MLST-based [21], cgMLST-based [22] or core genome-based phylograms [23]. The recent Mccp strains from Oman all cluster in phylogroup A, together with strains from East Africa and one strain from Qatar. However, considering historic isolates from the region, the Arabian Peninsula exhibits a mosaic of Mccp genotypes from phylogroups A, C, E and F. This is not surprising, since Oman, Saudi Arabia and the UAE regularly import live goats and sheep from other CCPP endemic regions of Southern Asia, Africa and Turkey [5]. It would be interesting to see if this historic diversity of genotypes is still present or whether individual, particularly successful genotypes have prevailed, as has been observed, for example, with Mccp strains from phylogroup A in Tanzania [23].
The variability among the investigated Omani isolates is low, with no more than 125 SNPs. Four isolates from Farm 1 were clonal with less than 10 SNPs, whereas isolates from different farms showed slightly higher variability with 10 to 100 SNPs. The impact of spatial distribution within Oman on the genetic variability could not be analyzed in more detail because all isolates came from a limited area in the north of the country.
The observation of Loire et al. [23] that four Mccp genome sequences obtained from wildlife on the Arabian Peninsula clustered together in a subclade of phylogroup A led them to suspect some sort of adaptation to the wild host. Considering the results of our study, this hypothesis is not tenable, because we could prove the close phylogenetic relatedness between the gazelle strain 20DL0191 and strains isolated from goats in neighbouring flocks (e.g. strains 20DL070 and 20DL058), indicating mutual transmission of the pathogen between wildlife and goats. Lignereux et al. [12] also considered the long-distance transmission of infectious droplets from an external goat farm, without direct animal contact, the most plausible explanation for the contamination of a gazelle flock in the UAE. This implies that wild ungulates or animals kept in zoos or wildlife parks could develop into reservoirs and represent a risk of CCPP (re-)introduction into goat flocks.

Conclusion
This study demonstrates that whole genome-based SNP analysis, with its high resolution, is a valuable tool to dissect the dynamics of local Mccp transmission as well as to trace its global epidemiology. Recent outbreaks of CCPP in Northern Oman are caused by Mccp strains of the East African phylogroup A, which can infect goats and captive gazelles alike. Therefore, wild and captive ungulates should be considered as reservoirs and included in Mccp surveillance measures.

Pathology
Thirty-four goats and five adult Arabian sand gazelles (2 males and 3 females) were necropsied and samples from the lungs, pleura, trachea and pleural fluid were collected. Lung samples and pleural fluid were kept at -80 °C and shipped on dry ice to the Friedrich-Loeffler-Institut (FLI), Germany, for microbial culture and molecular investigations. The other tissue parts were fixed in 10% neutral buffered formalin for 24 h and the specimens were routinely processed for hematoxylin and eosin (H&E) staining [26]. Consecutive sections were immunohistochemically stained by indirect immunoperoxidase staining using the ImmPRESS Reagent kit, MP-7800 (Vector Laboratories, Ltd.). Tissue sections were deparaffinized in Shandon Xylene Substitute (Thermo Scientific™), dehydrated in 100% ethanol, and slides were incubated in 10 mM Tris buffer (pH 9.0) (H-3301, Vector Laboratories, Ltd.)
at 95 °C for 30 min for antigen retrieval. Endogenous peroxidase activity was eliminated by treatment with Peroxide Blocking Reagent (BioLegend®). After blocking of non-specific immunoreactivity using the ImmPRESS reagent with 2.5% normal horse serum (Vector Laboratories, Ltd.), slides were incubated with polyclonal anti-M. capricolum subsp. capripneumoniae antibodies from rabbit (BioGenes GmbH, Berlin, Germany) at 1:3000 dilution for 2 h at room temperature in a dark chamber. Sections were washed with buffer and then incubated with ImmPRESS polymer anti-rabbit IgG reagent for 30 min at room temperature. Immunoreactivity was visualized using 3,3′-diaminobenzidine for 5 min and sections were counterstained with hematoxylin for 5 min at room temperature. The slides were examined under the microscope (Olympus B51X microscope; Olympus DP70 camera, Olympus Corporation, Japan) and positive signals appeared as brown color. In negative control slides, the primary antibody was omitted, and slides from an Mccp PCR-positive goat were included as a positive control.

Bacterial culture

Tissue samples from pleuro-pneumonic lung lesions (25 mg) and pleural fluids (0.1 mL) were cultured in mycoplasma-specific liquid medium containing a phenol-red pH indicator (Mycoplasma Experience Ltd, UK) at 37 °C and 8% CO2 under static conditions for 4 to 7 days (until a color change of the liquid broth to orange or yellow was observed). Penicillin G (WDT, Garbsen, Germany) was added (1000 IU/mL) to suppress other bacteria. In addition, agar plates with MS Solid Media (Mycoplasma Experience Ltd, UK) were seeded. From these plates, pieces with colonies were transferred to liquid medium when direct cultivation in liquid medium was not successful.

Nucleic acid extraction

DNA extraction from broth culture, lung tissue, pleural fluid and swab samples for subsequent PCR testing was done using the High Pure PCR Template Preparation Kit (Roche Deutschland Holding GmbH, Mannheim, Germany). To this end, two to four mL of broth culture were centrifuged at 13,000 × g at 6 °C for 30 min; the pellet of bacteria was washed with PBS and then incubated with 200 µL PBS and proteinase K. Lung tissue (50 mg) or swabs were incubated with lysis buffer and proteinase K provided in the kit, and 200 µL pleural fluid was incubated with binding buffer and proteinase K. All subsequent steps were identical for the four different matrices and conducted according to the instruction manual of the DNA preparation kit.

Total DNA extraction for whole genome sequencing was done using the Qiagen Genomic DNA Preparation Kit (Qiagen, Hilden, Germany). 100 mL broth culture with approx. 10⁹ CFU in total were centrifuged at 16,000 × g at 6 °C for 30 min. The pellet of bacteria was washed with PBS and resuspended in 1 mL of the recommended buffer of the preparation kit. RNase (Qiagen, Hilden, Germany) was added to a final concentration of 200 µg/mL and proteinase K (Qiagen, Hilden, Germany) with a total amount of 900 µg, followed by an incubation step for 30 min at 35 °C with shaking. The next steps were conducted according to the instruction manual. The final DNA extracts were recovered in 50 µL buffer and checked qualitatively and quantitatively by means of a NanoDrop spectrophotometer (Thermo Fisher Scientific, Madison, USA) and a Qubit 2.0 fluorometer (Life Technologies Holdings PTE Ltd, Singapore) as well as agarose gel electrophoresis.

Mccp TaqMan PCR

The real-time PCR for the specific detection of Mccp was adapted from Settipally et al.
[20] and validated in the FLI Mycoplasma Laboratory, Germany. QuantiTect Multiplex PCR Master Mix (Qiagen) was used in a total volume of 15 µL with 800 nM of each primer (Mccp-fwd: TTT TTC AAG TGC AAA CGA CTATG, Mccp-rev: TGA CTT GGG TGT TAG GAC CA), 400 nM of probe with LNA (+) nucleotides (Mccp-pr: FAM-CGG ATA G+AAC AAT A+GCT TTT ACAGA-BHQ1) and 2 µL template DNA. To test for PCR inhibition in DNA preparations, an internal amplification control was integrated in duplex PCR runs: 400 nM of each primer (EGFP-1F: GAC CAC TAC CAG CAG AAC AC, EGFP-10R: CTT GTA CAG CTC GTC CAT GC) and 200 nM of probe (EGFP-HEX: HEX-AGC ACC CAG TCC GCC CTG AGCA-BHQ1) were used together with 500 copies of a plasmid template per reaction (Intype IC-DNA, Indical Bioscience, Leipzig, Germany) to generate and detect a 177 bp amplicon [27]. The samples were tested in duplicate on a Bio-Rad CFX96 instrument (Bio-Rad, Feldkirchen, Germany) with the following cycling conditions: 95 °C for 10 min and 45 cycles of 95 °C for 30 s and 60 °C for 1 min. A mean quantification cycle (Cq) value of < 38 was considered positive. Mccp type strain F38 DNA at 100 GE/µL was used as positive control, and water instead of template DNA as negative control.

Whole genome sequencing and SNP analysis

2-10 µg of genomic DNA of each isolate was sent to GATC/Eurofins Genomics (Konstanz, Germany) for genomic library preparation and Illumina MiSeq 2 × 150 bp paired-end sequencing with 5 M read pairs, resulting in an average coverage of around 750×. Raw sequencing data were quality-controlled using FastQC (v0.11.9) and then de novo assembled by applying the Shovill pipeline (v1.0.4) using the SPAdes assembler (v3.15.2). Resulting contig fasta files were annotated using Prokka (v1.14.6). All generated sequencing data have been deposited in the National Center for Biotechnology Information (NCBI) under the accession number PRJNA939501. Additionally, genome assembly data from all available Mccp strains on the NCBI GenBank database (21) were downloaded in fasta format. The phylogenetic relationship of the 30 strains was reconstructed based on SNP calling using the software kSNP v3.0 [28], and a maximum parsimony tree was estimated by using FastTree. Tree visualization was done using iTOL [29].

Fig. 1 Locations of CCPP outbreaks occurring in 2019/2020 in four Northern governorates of the Sultanate of Oman. (Created with https://www.scribblemaps.com/)

from goats was confirmed by Mccp-specific real-time PCR. All DNA extracts from broth culture tested Mccp-positive with Cq values in the range of 18-20.

Fig. 4 Colonies of Mycoplasma capricolum subsp. capripneumoniae isolated from the pleural fluid of an Arabian sand gazelle appear on agar plates with MS medium (Mycoplasma Experience) in an unusual centerless form (bar = 50 µm)

Fig. 6 Heatmap of pairwise SNP distances of ten Mccp isolates (in bold) recently isolated from a gazelle and goats in Oman and twenty historical isolates of worldwide origin
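As a concrete illustration of the SNP-distance comparison underlying the Fig. 6 heatmap, the following minimal Python sketch computes pairwise SNP distances from a core-SNP alignment such as the matrix FASTA written by kSNP3. The input file name is an assumption and should be replaced with the actual kSNP3 output; the simple distance also ignores ambiguous or missing base calls.

def read_fasta(path):
    # Parse a FASTA file into {strain_name: sequence}.
    seqs, name = {}, None
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                name = line[1:].split()[0]
                seqs[name] = []
            elif name is not None:
                seqs[name].append(line)
    return {n: "".join(parts) for n, parts in seqs.items()}

def snp_distance(seq_a, seq_b):
    # Count aligned positions with two unambiguous, differing base calls.
    return sum(1 for a, b in zip(seq_a, seq_b)
               if a != b and a in "ACGT" and b in "ACGT")

alignment = read_fasta("core_SNPs_matrix.fasta")  # assumed kSNP3 output name
strains = sorted(alignment)
for i, s1 in enumerate(strains):
    for s2 in strains[i + 1:]:
        print(s1, s2, snp_distance(alignment[s1], alignment[s2]), sep="\t")

The printed long-format distance list can then be pivoted into a matrix and rendered as a heatmap in any plotting tool, analogous to Fig. 6.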
2024-04-25T13:10:19.384Z
2024-04-25T00:00:00.000
{ "year": 2024, "sha1": "f343ba3de8a18483ad34d50624f802029254ab18", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "8e3310e38dbd27c3c7ffd9e0501832e7c639f6ab", "s2fieldsofstudy": [ "Environmental Science", "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
210869820
pes2o/s2orc
v3-fos-license
A Ten-Stage Protocol for Assessing the Welfare of Individual Non-Captive Wild Animals: Free-Roaming Horses (Equus Ferus Caballus) as an Example

Simple Summary

Vital for informing debates about the ways we interact with wild animals and their associated habitats is knowledge of their welfare status. To date, scientific assessments of the welfare of free-roaming wild animals during their normal day-to-day lives are not available, in part because the required methodology has not been developed. Accordingly, we have devised, and here describe, a ten-stage protocol for systematically and scientifically assessing the welfare of individual non-captive wild animals, using free-roaming horses as an example. Applying this ten-stage protocol will enable biologists to scientifically assess the welfare of wild animals and should lead to significant advances in the field of wild animal welfare.

Abstract

Knowledge of the welfare status of wild animals is vital for informing debates about the ways in which we interact with wild animals and their habitats. Currently, there is no published information about how to scientifically assess the welfare of free-roaming wild animals during their normal day-to-day lives. Using free-roaming horses as an example, we describe a ten-stage protocol for systematically and scientifically assessing the welfare of individual non-captive wild animals. The protocol starts by emphasising the importance of readers having an understanding of animal welfare in a conservation context and also of the Five Domains Model for assessing welfare. It goes on to detail what species-specific information is required to assess welfare, how to identify measurable and observable indicators of animals' physical states and how to identify which individuals are being assessed. Further, it addresses how to select appropriate methods for measuring/observing physical indicators of welfare, the scientific validation of these indicators and then the grading of animals' welfare states, along with assigning a confidence score. Finally, grading future welfare risks and how these can guide management decisions is discussed. Applying this ten-stage protocol will enable biologists to scientifically assess the welfare of wild animals and should lead to significant advances in the field of wild animal welfare.

Introduction

There is a growing awareness of how human activities, including wildlife population management and rehabilitation, land management and other conservation activities, may influence the welfare of free-roaming animals in the wild [1-8]. Conservation and wildlife management practices have traditionally focused on assessing animal populations, using metrics like abundance, density and

1. Acquire an understanding of the principles of Conservation Welfare
2. Acquire an understanding of how the Five Domains Model is used to assess welfare status
3. Acquire species-specific knowledge relevant to each Domain of the Model
4. Develop a comprehensive list of potential measurable/observable indicators in each physical domain, distinguishing between welfare status and welfare alerting indices
5. Select a method or methods to reliably identify individual animals
6. Select methods for measuring/observing the potential welfare indices and evaluate which indices can be practically measured/observed in the specific context of the study
7. Apply the process of scientific validation for those indices that are able to be measured/observed, and insert validated welfare status indices into the Five Domains Model
8. Using the adjusted version of the Model that includes only the validated and practically measurable/observable welfare status indices, apply the Five Domains grading system for grading welfare compromise and enhancement within each Domain
9. Assign a confidence score to reflect the degree of certainty about the data on which welfare status has been graded
10. Including only the practically measurable/observable welfare alerting indices, apply the suggested system for grading future welfare risk within each Domain.

Stage 1: Acquire an Understanding of the Principles of Conservation Welfare

A new discipline of Conservation Welfare has recently been proposed to align traditional conservation approaches, which historically focused on measures of 'fitness' (physical states), with more contemporary animal welfare science concepts which emphasise 'feelings' (mental experiences or affective states) that result from physical states. This enables a more holistic understanding of animals' welfare states [52]. A common language and understanding relating to wild animal welfare are important starting points, since the way in which welfare is conceived influences the way it is evaluated and the emphases put on its different features [52]. The reader is referred to Beausoleil et al. 2018 [52] for a more detailed consideration of the value of seeking a shared welfare-related understanding between conservation scientists and animal welfare scientists under the heading of Conservation Welfare.

Animal welfare is characterised mainly in terms of an animal's mental experiences, in other words, how the animal may be experiencing its own life [52-56]. In animal welfare science, welfare is conceptualised as a property of individuals belonging to species considered to have the capacity for both pleasant (positive) and unpleasant (negative) mental experiences, a capacity known as sentience [52,57-62]. Contemporary animal welfare science aims to interpret indicators of biological function and behaviour in terms of the mental experiences that those indicators are likely to reflect [52]. Mental experiences, or affective states, are subjective and cannot be measured directly, but indirect indices can be used to cautiously infer affective experiences [48-52,63].

Negative Affective States

There is a growing body of neurophysiological and behavioural evidence in non-human animals regarding the basis of negative affective states such as breathlessness, thirst, hunger, pain, fear, nausea/sickness, dizziness and weakness, and there are also validated links between measurable indicators of physical/functional states and some of these mental experiences [36,58,62-71]. For example, body condition is a measurable physical state that can be used as an indicator of hunger in some situations [72-75]. Likewise, certain behaviours can be used as indices of pain. For example, in horses, the combination of rolling, gazing at and/or kicking at the abdomen, along with inappetence, may be interpreted as reflecting abdominal pain [76]. Some affective experiences are generated by the animal's brain processing sensory inputs that register specific features of their internal physical/functional state.
For example, water deprivation causes dehydration, which leads to osmoreceptor-stimulated neural impulses passing to the brain, generating the affective experience of thirst [67]. Thirst elicits the behaviours of seeking water and drinking in order to correct dehydration, after which the mental experience of thirst ceases. Other affective experiences may arise from externally stimulated sensory inputs that contribute to the animal's perception of its external circumstances. For example, threatening situations such as the presence of predators or humans, separation from conspecifics, or environmental hazards such as fire, are registered via cognitive processing of sensory inputs from visual, auditory and/or olfactory receptors, giving rise to anxiety and fear [52,64,66,67,69,71]. Whilst some negative experiences such as thirst and hunger motivate the animal to be behaviourally active in order to achieve resolution of the experience, others motivate the animal to reduce its activity. For example, weakness, sickness and pain often induce inactivity and seeking to be isolated from other animals [50]. These and other types of behaviour are referred to as 'sickness' behaviours and may facilitate recovery from disease and injury, thereby enhancing survival [49,77]. Experiencing negative emotions to some degree is therefore essential in order to motivate life-sustaining behaviours, but it is the incidence, intensity and duration of these experiences that are important in determining the overall impacts on an animal's welfare state. It is when negative experiences become extreme, prolonged or unavoidable that an animal experiences the most severe compromises to its welfare [3,49,50].

Positive Affective States

Animals can also experience a range of positive affective states, and when experienced, these may enhance the animal's welfare state [50,66,78-81]. Some positive mental experiences may occur as a result of behaviours that are directed at minimising negative affects [50]. For example, the smell, taste, textural and masticatory pleasures of eating a range of foods, and the comfort of post-prandial satiety, may occur with eating that is directed at relieving hunger [50,77,82,83]. Alternatively, other positive experiences may replace negative experiences when an animal is able to express more of its behavioural repertoire [50,51,55,78-80]. For example, foraging, affiliative social interactions, adolescent play behaviour, maternal behaviour and sexual activity are behaviours from which positive mental experiences may be inferred [50,55,64,69,83,84]. Although wild free-roaming animals live in stimulus-rich environments, their expression of rewarding behaviours can be hindered. For example, malnourished horses spend more time and energy searching for food. Hunger is also likely to dominate awareness and this, in turn, may reduce motivation to undertake rewarding behaviours [50,51]. Conversely, when food is plentiful, relief from the negative experience of intense hunger may re-motivate animals to utilise existing opportunities to engage in a range of rewarding behaviours [51]. Therefore, it is important to consider indicators of positive, as well as negative, welfare states in wild free-roaming animals and to understand how particular features of their 'natural' circumstances may compromise or enhance their welfare [85].
Stage 2: Acquire an Understanding of How the Five Domains Model Is Used to Assess Welfare Status

The Five Domains Model [48-51] is consistent with, and structurally represents, the understanding that physical and mental states are linked (Figure 1). It is a device that facilitates systematic and structured welfare assessment of individual sentient animals, based on current understanding of the functional bases of negative and positive subjective experiences that animals may have [48-51]. Originally developed to assess welfare compromise in animals used in research, teaching and testing [48], it has since been broadened for use in companion animals, livestock, captive wild animals and animals designated as 'pests' [27,36,49-51,55,86-90]. The Five Domains Model comprises four interacting physical/functional domains of welfare: 'nutrition', 'environment', 'health' and 'behaviour', and a fifth domain of mental state (affective/mental experience) (Figure 1). The physical/functional domains focus on internal physiological and pathophysiological states (Domains 1-3) and external physical, biotic and social conditions that may alter the animals' behavioural expressions (Domain 4) [49-51]. Following measurement of animal-based indices within each physical domain, the anticipated negative or positive affective consequences are cautiously assigned to Domain 5. It is these experiences that contribute to descriptions of the animal's welfare state [49-51]. It is imperative that a sound understanding of the principles of Conservation Welfare (Stage 1) and the Five Domains Model (Stage 2) is gained prior to progressing to the next stages of the protocol.

Stage 3: Acquire Species-Specific Knowledge Relevant to Each Domain of the Model

In order to appropriately apply the Five Domains Model to assess animal welfare, detailed species-specific knowledge is required. Table 1 illustrates the species-specific information, within each of the four physical/functional domains, that is required to enable assessment of the welfare of free-roaming horses. Without a thorough understanding of what is normal for a species under optimal conditions, it is not possible to identify or interpret abnormalities. Acquiring species-specific knowledge will likely require extensive reading and advice from others having species-relevant practical experience, in addition to species-relevant nutritional, environmental, health and behavioural expertise. Accordingly, such holistic welfare assessments require multidisciplinary input [49-51]. All of the information required to make an informed assessment of the animal's welfare status may not be available for the wild species of interest. However, systematically undertaking Stage 3 will help to identify knowledge gaps and related limitations in welfare assessments, thus guiding further research.

Table 1. Illustration of the species-specific information required to assess welfare of free-roaming horses.
Domain | Species-Specific Information Required
1: Nutrition | Water requirements: volume, frequency, preferred water sources, factors influencing water requirements, adaptations to and impacts of water restriction. Nutritional requirements and preferences. Common nutritional deficiencies and excesses and their causes; plant toxicities. Assessing body condition, body condition scoring systems, optimal body condition score, factors affecting body condition.
2: Environment | Habitat preferences, and factors affecting habitat selection and use. Preferred underfoot substrate and terrain. Thermoneutral zone, impacts of extreme climate events, signs of thermal stress.

Stage 4: Develop a Comprehensive List of Potential Measurable/Observable Indicators in Each Physical Domain, Distinguishing between Welfare Status and Welfare Alerting Indices

Based on knowledge of the theory of animal welfare and its importance in a conservation context (Stages 1 and 2), and on species-specific knowledge (Stage 3), the next stage is to develop a list of potential indicators of the various physical, and thus affective, states (both positive and negative) that the animals might experience. Measurable or observable indicators can be animal-based, such as body condition score and behaviour, or resource-based, such as forage quality and weather conditions (Table 2). Some indices (specifically animal-based indices) will be direct indicators of physical states, and therefore reflect aspects of welfare status. Others will be indicators of the risk of particular states occurring, or welfare alerting indices (all resource-based indicators and some animal-based indicators). Welfare alerting indices do not directly reflect the animal's current welfare state, but they can direct attention in future assessment towards specific animal-based indices (e.g., Figure 2). All assessments are made on individuals, but some resource-based indicators may apply to a number of individuals and therefore have group applications.

Table 2. Examples of animal-based and resource-based indices that may be measured or observed in free-roaming horses, and which measures directly reflect mental experiences (i.e., welfare status), compared to welfare alerting indices that reflect welfare risk.

3: Health | Welfare status: general demeanour, mobility, gait, posture; sickness behaviours; faecal quality. Welfare alerting: environmental conditions that may predispose to certain health conditions (e.g., heavy rain, moist substrates); hazards that may predispose to injury (e.g., fencing, roads, terrain); presence and abundance of toxic plants; dentition of any skulls found (e.g., dental pathology and age at death); faecal egg counts, Strongylus vulgaris molecular diagnostics (PCR).
4: Behaviour | Welfare status: quantitative (e.g., time-budget behaviours, frequency/duration of positive affiliative interactions) and qualitative (e.g., alert, relaxed, weak) assessment of behaviours. Welfare alerting: opportunities to express the complete range of normal behaviours (affected by environment and conspecifics); population dynamics and social organisation.

Figure 2. Ingestion of plants such as Fireweed (Senecio madagascariensis) can cause pyrrolizidine alkaloid toxicosis in horses, resulting in chronic liver failure and eventual clinical signs of diarrhoea, weight loss, subcutaneous oedema, neurological disease and ultimately death [91]. Observing an abundance of these plants within a wild horse's habitat should act as a welfare alerting factor to prompt further monitoring of horses for the presence of these clinical signs, and/or to consider this as a potential cause of any unexplained mortalities. Image A.M. Harvey.
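One way to keep the Stage 4 distinction between welfare status and welfare alerting indices operational is to encode each index with its domain, its animal- or resource-based origin and its role, and filter accordingly. The Python sketch below does this; the class and field names are our own invention and the example entries are drawn from Table 2, so it illustrates the bookkeeping rather than any published implementation.

from dataclasses import dataclass

@dataclass
class WelfareIndex:
    name: str
    domain: int          # 1 Nutrition, 2 Environment, 3 Health, 4 Behaviour
    animal_based: bool   # False means resource-based
    role: str            # "status" reflects current state; "alerting" flags future risk

INDICES = [
    WelfareIndex("body condition score", 1, True, "status"),
    WelfareIndex("forage quality", 1, False, "alerting"),
    WelfareIndex("sickness behaviours", 3, True, "status"),
    WelfareIndex("faecal egg count", 3, True, "alerting"),
    WelfareIndex("presence of toxic plants", 3, False, "alerting"),
]

# Only animal-based status indices feed the Five Domains grading (Stage 8);
# alerting indices are graded separately for future risk (Stage 10).
status_indices = [i for i in INDICES if i.animal_based and i.role == "status"]
alerting_indices = [i for i in INDICES if i.role == "alerting"]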
Search for Previously Described Indices

Literature searches should be performed to develop a list of potential indices that may have already been described for use in welfare assessments of the species of interest, either in a free-roaming context or in a domesticated/captive context, and to evaluate their suitability. For example, various horse welfare assessments have been described, and some of the indices used may be practical to apply to wild free-roaming horses [16,19,20,22,23,92,93]. Published information may also exist with regard to methods for measuring or observing some of these indices. For example, in horses there are well-described protocols for assessing body condition score [94,95], and behavioural [76] and facial [96] signs of pain have been described, with development of a horse grimace scale for assessing some types of pain [97].

Some Animal-Based Indices Provide Welfare Status Information

Only animal-based indices can contribute information to the assessment of overall welfare status, since they provide the most direct evidence of what the animal may be experiencing [48-51,98]. Animal-based indices may be externally observable, or internally measurable, as illustrated in Table 3. Externally observable indices can provide easily observable evidence of welfare compromises in each domain, and are the most practical indices to use in free-roaming animals (Figure 3, Table 3). Quantitative measures of behaviour, such as time budgets, have been most commonly applied by wildlife biologists [99-102]. However, since behaviour reflects a complex level of functioning, qualitative assessment can also inform assessment of the animals' affective state and whether positive or negative mental experiences are occurring [103-106] (Figure 4). To date, qualitative behavioural assessments do not appear to have been scientifically studied in free-roaming wild animals, and it is important that the context of the behaviour is considered carefully when making such assessments [107]. Internally measurable indices relate to physiological, pathological or clinical conditions (Table 3). These indices are not routinely used for day-to-day welfare assessments, and are problematic to measure in free-roaming wild animals. Some indices, such as cortisol and reproductive hormones, can be measured in faeces, which makes this more feasible for use in wild animals. However, interpretation of many of these indices is not straightforward. For example, while faecal [108-110] and hair [111,112] cortisol concentrations have been employed as a physiological index of stress [108-112], the significance of non-specific stress for an animal's mental experience is unclear [52,113]. Cortisol and many other physiological parameters are non-specific and do not indicate if the experience was positive (e.g., excitement, arousal) or negative (e.g., pain, fear, hunger). Further, lack of elevated cortisol concentrations does not mean that the animal is not experiencing something unpleasant. Cortisol concentrations are also affected by many other variables (e.g., species, sex, reproductive status, circadian rhythms), further hindering interpretation [52]. Accordingly, an absence of detailed contextual information limits how informative cortisol measurements are in wild free-roaming animals.

Table 3. Examples of animal-based indices that may provide information about welfare status.
Externally observable indices: growth rates and achievement of developmental milestones in young animals; reproductive success; body weight and/or body condition score; presence of injuries, wounds, lameness, diarrhoea, nasal discharge, food pouching, quidding; coat condition and presence of skin lesions; social behaviours, sickness or pain behaviours.
Internally measurable indices: measurement of heart rate and core body temperature; measurement of various blood parameters such as complete blood count and serum biochemistry; measurement of cortisol and reproductive hormones in urine, faeces and hair; faecal egg counts.

Some Animal-Based Indices Provide Welfare Alerting Information

Animal-based indices traditionally collected by wildlife biologists (e.g., population dynamics, home range features and size, reproductive rates and survival rates) may not directly reflect the mental experiences of individuals; however, they may provide relevant contextual information. For example, low reproductive success, smaller herd sizes and/or larger home ranges may reflect physiological states (e.g., chronic malnutrition) that would generate negative affective states of relevance to welfare [94,114-117] (Figure 5a). Consequently, such indices may provide information about future welfare risks, and thus become important welfare alerting indices. Some other animal-based indices, such as faecal egg counts (FECs), may also only provide welfare alerting rather than welfare status information (Figure 5b, Table 2), because, when FEC is high, free-roaming horses frequently do not exhibit overt clinical signs of disease [118,119]. Hence, interpreted in isolation, they do not necessarily indicate the presence of intestinal pathology and any related negative experience [118]. However, faecal egg counts give no indication of the severity of any associated pathology and cannot be used directly to make inferences about the animals' mental experience. FECs therefore are welfare alerting indices, with a high FEC raising awareness that gastrointestinal pathology and subsequent clinical signs (e.g., diarrhoea, abdominal pain) may be more likely to arise in the future. Images A.M. Harvey.

Some Animal-Based Indices Can Be Interpreted in Combination with Resource-Based Indices

In some situations, a combination of resource-based and animal-based indices may provide indirect relevant information about current welfare status and future risk. For example, dental disease can be an important cause of both morbidity (e.g., pain, malnutrition) and eventual mortality (malnutrition) in horses [120,121], and several externally observable indices can be suggestive of clinically significant dental disease (Figure 6): (a) if an individual horse is in poor body condition when feed is plentiful, conspecifics are in good body condition, and there is no obvious alternative reason for the individual to be in poor condition (e.g., not lactating or injured); (b) quidding (dropping food from the mouth whilst chewing) and/or food pouching in the mouth lateral to the cheek teeth (as shown on the horse's right cheek in the photograph) are associated with pain from dental disease; (c) long unchewed grass fibres in the faeces are suggestive of reduced chewing ability with dental disease [120,121]; (d) information on the incidence of dental disease in a population as a whole (i.e., alerting information) may be provided by examination of the dentition of skulls found in the horses' habitat. Images A.M. Harvey.
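To make the combined interpretation above concrete, the toy Python function below flags a horse for dental-disease follow-up from the externally observable signs just described. The thresholds and argument names are invented for illustration and are not validated clinical criteria.

def dental_disease_suspected(body_condition_score, feed_plentiful,
                             conspecifics_good_condition, quidding,
                             long_fibres_in_faeces, lactating=False, injured=False):
    # Unexplained poor condition: thin despite plentiful feed, with the herd
    # in good condition and no alternative explanation such as lactation or injury.
    unexplained_poor_condition = (body_condition_score <= 3
                                  and feed_plentiful
                                  and conspecifics_good_condition
                                  and not (lactating or injured))
    return unexplained_poor_condition or quidding or long_fibres_in_faeces

# Example: a non-lactating, uninjured horse passing long grass fibres.
print(dental_disease_suspected(3, True, True,
                               quidding=False, long_fibres_in_faeces=True))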
Stage 5: Select a Method or Methods to Reliably Identify Individual Animals

In order to assess animal welfare at an individual level, individuals need to be identifiable. Non-interventional identification methods may be suitable for some species. For example, in horses a combination of coat colour and natural markings may be used [122-124]. Where such approaches are not possible, alternative methods may be required, such as marking with paints or dyes, or applying tags [125]. Factors such as distance from the animal during observations and visibility are important considerations in the choice of identification method. Animal welfare impacts associated with capture/handling/restraint, application of any marks/tags, wearing of the mark, and impacts of observations should be assessed. The welfare impacts of different methods of marking have been previously reviewed, and should be considered along with the other advantages and disadvantages of the marking method before deciding upon those most appropriate for identification [125-130].

Stage 6: Select Methods for Measuring/Observing the Potential Welfare Indices and Evaluate Which Indices Can Be Practically Measured/Observed in the Specific Context of the Study

Having decided, based on species-specific knowledge (Stage 3), what resource-based and animal-based indices are important for assessing welfare in the species of concern (Stage 4), and how individual animals are going to be identified (Stage 5), the methods of practically measuring/observing the required indices then need to be considered. Collecting information on the welfare of wild free-ranging individual animals is logistically challenging: their habitats may be difficult to access; the animals may be difficult to observe because of natural features such as vegetation and topography, in addition to fear of humans; and they may be unobservable for significant periods or at repeated intervals. In some situations it may also be challenging to locate the individuals that may be experiencing the worst welfare impacts, as they may hide, be less mobile, be more distant from conspecifics and be in habitats/terrain that make visualising them difficult. Historically, data on free-roaming animals have been obtained using methods such as direct observations (e.g., herd size, behaviour, body condition score), trapping (e.g., sex, weight, size) and GPS collaring (e.g., home range, distance travelled) [99-102,122-131]. Although these methods can yield useful information, they themselves often have significant welfare implications [125-130], provide a very narrow range of data, and there may be bias in the individuals sampled (e.g., direct observation is likely biased to those individuals within habitats where direct visualisation is possible). With more recent advances in technologies, it is now possible to obtain a wider range of information about free-roaming animals, and for longer periods of time, using techniques such as camera traps and drones [132-137] (Table 4). The advantages and limitations of each potential method need to be considered for the species and context of the research, and the highest-yielding methods may vary. For example, for free-roaming horses residing on open grassland or desert habitat, direct observations or drones may be the most effective way to obtain animal-based data.
In contrast, in a woodland habitat, where trees may interfere with direct visualisation of animals, camera traps may be more appropriate. Combined, these methods can provide complementary information (Figure 7).

Table 4. Summary of methods that may provide information relevant for the welfare assessment of free-roaming wild horses.

Method | Relevant Information
Assessment of maps

In some situations, direct animal-based indices may be impractical, but there could be alternative indices that indirectly provide relevant information. For example, it is not practical to assess the dentition of free-roaming wild horses, but some indices can be observed that are indirectly suggestive of clinically significant dental disease (Figure 6). Methods should be evaluated by undertaking pilot studies to identify which of the potential indices are practically feasible to measure/observe in the context of the study. Indices that are not practically able to be measured/observed with currently available methods should be archived. This enables them to be considered at a later stage when evaluating the limitations of the welfare assessments (Stage 9), and to be revisited when future technological advances may make them more feasible to measure or observe.

Stage 7: Apply the Process of Scientific Validation for Those Indices That Are Able to Be Measured/Observed, and Insert Validated Welfare Status Indices into the Five Domains Model

Once it has been established which indices can be practically measured/observed in the species and context of interest (Stage 6), these indices then need to be scientifically validated. Ideally, validation of welfare indices requires prior demonstration of the relationship between an observed indicator and the physical/functional impact (Domains 1-4), and of the relationship between the physical/functional impact (Domains 1-4) and the inferred mental experience (Domain 5). These steps of scientific validation have been described in detail elsewhere [63]. For example, detection of raised plasma osmolarity by osmoreceptors increases water-seeking and drinking behaviour, and drinking eliminates water-seeking behaviour [67], validating the link between the externally observable indicator (water-seeking behaviour/drinking) and the internally measurable indicator of dehydration (plasma osmolarity). Affective neuroscience provides evidence of the link between the physical state of dehydration (increased plasma osmolarity) and the mental experience of thirst, via neurohormonal pathways transmitting afferent inputs from osmoreceptors to higher brain centres associated with emotions [67]. Ideally, evidence of these relationships should relate to the species and context of interest, but where this is not available, evidence from the same species in a different context (e.g., in captivity), or from a similar species, can be cautiously extrapolated. In many situations, the complete body of evidence to achieve such validation is not available, and the level of confidence in the validation of indices should be indicated [63]. Thus, this process will also highlight further knowledge gaps, and what further evidence may be required to strengthen the confidence in the links between the suggested animal-based indices and inferred mental experiences. In some cases, a direct animal-based indicator may not be practical to measure/observe in free-roaming animals, but there may be scientific evidence to support the use of an indirect indicator, which may be resource-based.
For example, in free-roaming animals, water-seeking or drinking behaviours can be difficult to observe. Therefore, thirst may be indirectly judged based on the resource-based indices of how available water sources are in relation to the required frequency of drinking, based on the best available data for the species of interest. In the absence of direct measures, strength of motivation to drink could also be assessed by the distance the animal is willing to travel to reach a water source. Factors other than the location of water sources would also need to be considered, since impaired water access may occur for other reasons, such as illness or injury. Indices that cannot be scientifically validated as indicators of the animals' mental experience (e.g., poor hoof condition in the absence of an abnormal gait) should be archived for consideration in future validation studies. Some of these archived indices may still provide valuable alerting information. All welfare alerting indices (Table 2) should be evaluated and graded separately from welfare status indices, as described in Stage 10.

Stage 8: Using the Adjusted Version of the Model That Includes Only the Validated and Practically Measurable/Observable Welfare Status Indices, Apply the Five Domains Grading System for Grading Welfare Compromise and Enhancement within Each Domain

Once the indices that can be practically measured/observed (Stage 6), and which are deemed to be sufficiently validated (Stage 7), have been inserted into the Five Domains Model, the next stage is to apply the grading system. In order to standardise the assessment of animal welfare across different individuals and/or different assessors, and to monitor animal welfare over time, a reliable, repeatable and practical method of grading is required. Grading welfare compromise and welfare enhancement, and the operational details of the Five Domains Model, have been previously described [50,51,87,88]. It should be noted that such grading does not necessarily provide a comprehensive assessment of welfare status; rather, it provides an assessment of those indices of welfare that can be assessed and interpreted in terms of the mental experience they are associated with, in the particular species and context of interest. In the case of free-roaming animals, the range of welfare-relevant indices that can be assessed will usually be more limited than that for animals in captivity. Grading the impact of mental experiences on welfare status involves a different approach depending on whether the experiences are negative (welfare compromise) or positive (welfare enhancement) [50,51,87] (Table 5).

Table 5. A conceptual matrix of combined grading of welfare compromise and welfare enhancement (adapted from Mellor and Beausoleil 2015 [50]). Columns give the welfare enhancement grade: None (0), Low Level (+), Med Level (++), High Level (+++); rows give the welfare compromise grade, beginning with A (None).

The grading system applies a five-tier scale (A-E) to each of the Five Domains, representing increasingly severe impacts, ranging from none to very severe (Table 6) [50,51,87]. Information from the scientifically validated measurable/observable indices decided upon in Stage 7 is used to assign the grade of physical impact (A-E) in the first four domains. Knowledge of the association between those physical impacts and the associated mental experiences is used to infer the type of unpleasant experiences in Domain 5. The grades assigned in Domains 1-4 are used to infer the severity and duration of those experiences in Domain 5.
The grade assigned in Domain 5 is usually the same as the highest of the grades in Domains 1-4, to reflect the most severe negative mental experience. This grade is the overall welfare compromise grade (Table 6).

Table 6. An example of grading welfare compromise in a horse with a lower limb injury resulting in severe lameness. The lameness has been observed to moderately impact on behaviour (inability to keep up and interact with the rest of the herd), leading to a C grade in Domain 4 (Behaviour). Observations of reduced ability to forage and graze and a body condition score of 3/9 led to a C grade in Domain 1 (Nutrition). The horse's environment is unchanged and the horse has easy access to shade and shelter; however, the steep terrain is more challenging for the injured horse to negotiate, leading to a B grade in Domain 2 (Environment). The inferred mental experiences from these physical states include pain, hunger, and likely exhaustion, and possibly frustration and isolation. These are integrated to assign a grade in Domain 5 (Mental status). As the pain associated with the degree of lameness is considered to be severe, and of chronic duration, grade D has been assigned to Domain 5. This is the overall welfare compromise grade.

There may, however, be insufficient information to define impacts with the degree of precision implied by a five-tier scale, and in this case the grading matrix can also be adapted to a simpler three-tier scale to represent 'no to low', 'moderate', and 'severe' compromise [51] (Table 7).

Table 7. An example of a modified three-tier grading system for assessing physical impacts in free-roaming horses within Domain 1 and associated negative experiences in Domain 5.

The grading system for welfare enhancement applies a four-tier scale (0, +, ++, +++), representing 'no', 'low-level', 'medium-level' and 'high-level' enhancement [50,51,87], but as above could also be simplified to a two- or three-tier scale when information relating to positive mental experiences is sparse. Grading of welfare enhancement has three elements: (i) the availability of opportunities for the animal to engage in self-motivated rewarding behaviours; (ii) the animal's actual utilisation of those opportunities; and (iii) making a cautious judgement of the degree of 'positive affective engagement'. For example, in free-roaming horses, when grading positive mental experiences (Domain 5) associated with impacts in Domain 4 (behaviour), opportunities for horses to engage in free movement, exploration and foraging on a range of vegetation of varying tastes and textures, to have affectionate social interactions with bonded conspecifics, and to engage in maternal, sexual or play behaviour would be expected. However, for a variety of reasons, a horse may not be able to utilise these opportunities, and consequently will not exhibit behaviours that would provide evidence of positive mental experiences. This may occur where there is welfare compromise. For example, malnutrition, dehydration, hypothermia, injury and illness may all impair an animal's ability to engage in activities that may otherwise be pleasurable [36,50,70,71]. The ability to engage in positive social interactions may also be impacted by aspects of social organisation and group composition [139] (Figure 8). Table 5 illustrates one way in which the interaction between compromise and enhancement has been conceptualised, i.e., severe compromise hinders enhancement.
Figure 8. The images of the two groups of horses in (a) and (b) were taken from a large population of horses but, due to the social organisation of herds, illustrate how some horses are able to engage in affectionate social interactions and maternal, sexual and play behaviour much more than others: (a) these horses, two bachelor stallions, would be graded as '+' for welfare enhancement associated with opportunities in Domain 4, whereas (b) these horses, being in a large mixed age/sex herd with multiple foals, would be graded as '+++' for welfare enhancement associated with such opportunities in Domain 4. Images A.M. Harvey.

The use of numerical scores in the grading system is explicitly rejected in order to avoid scientifically unjustified aggregation of scores and to avoid implying a degree of precision that is not achievable when qualitatively assessing subjective affective states [48-51]. Scientifically informed best judgement is an important aspect of grading with the Five Domains Model, and so the grading scheme should act as a guide only, utilised alongside informed interpretation [50,51]. Detailed examples of species- and situation-specific grading matrices and application of this grading system can be found elsewhere [50,51,87,88].

Stage 9: Assign a Confidence Score to Reflect the Degree of Certainty about the Data on Which Welfare Status Has Been Graded

When the grading system is applied to assess individual animal welfare (Stage 8), a confidence score should then be assigned to the overall welfare status grade, to reflect the degree of certainty about the data upon which the grade was based [88]. We recommend a three-tier scoring system where L = low confidence, M = moderate confidence and H = high confidence. The confidence score should reflect the knowledge gaps and limitations of the assessment, including gaps in species-specific knowledge (Stage 3), any challenges with individual animal identification (Stage 5) and the archived indices that could either not be practically measured/observed with currently available methods (Stage 6), or which could not be sufficiently validated (Stage 7). These are critical actions both for directing further research to improve future welfare assessments, and for informing the level of confidence with which individual welfare can currently be assessed in the species and context of interest. In addition, a range of other factors should be considered, including: whether all indices in the grading scheme could be measured/observed in the individual being assessed; the number and/or duration of observations of the animal; whether indices were measured/observed from several methods combined or a single method; the implications if all methods could not be applied (e.g., still images only vs. video recordings vs. direct observations); and the distance of the assessor from the animal/image/video recordings when measurements/observations were made. The importance of some of these factors may also vary depending on the degree of welfare compromise. For example, if a welfare compromise status grade of E is assigned to a horse with a body condition score of 1/9, or a horse with a broken leg, the confidence in that score may be high despite the possibility that the grade was based on data from a single still image of the horse. In contrast, if a welfare status grade of A was assigned to a horse based on a single still image, the confidence in that score would likely be low.
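A minimal sketch of how the Stage 8 grading rule and the Stage 9 confidence score could be captured in code follows. The grade ordering and the rule that Domain 5 usually takes the highest of the Domain 1-4 grades come from the text above, while the function names, the health grade assumed for the Table 6 horse, and the output format are our own.

GRADES = "ABCDE"  # A = no compromise ... E = very severe compromise

def domain5_grade(d1, d2, d3, d4):
    # Domain 5 (mental state) usually takes the most severe of Domains 1-4.
    return max((d1, d2, d3, d4), key=GRADES.index)

def overall_assessment(domain_grades, confidence):
    assert confidence in ("L", "M", "H")  # low / moderate / high certainty
    grade = domain5_grade(*domain_grades)
    return f"overall welfare compromise grade {grade} (confidence {confidence})"

# The lame horse of Table 6: C (nutrition), B (environment), D (health,
# assumed from the severe chronic pain), C (behaviour) gives grade D overall.
print(overall_assessment(("C", "B", "D", "C"), "M"))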
Stage 10: Including Only the Practically Measurable/Observable Welfare Alerting Indices, Apply the Suggested System for Grading Future Welfare Risk within Each Domain

From the comprehensive list of potential welfare alerting indices (Stage 4), select only those that can be measured/observed and interpreted (Stage 6). Some of these may be animal-based measures that were not able to be scientifically validated as indicators of mental experiences (Stage 7). Assessing such alerting indices separately from the assessment of welfare status (Stage 8) can draw attention to risks of future welfare compromises and to what, if any, actions may be taken to mitigate these risks (e.g., Figure 9). This is particularly relevant to the situation of free-roaming animals, as, unlike for animals in captivity, immediate action based on a single welfare assessment or routine frequent monitoring of welfare may be impractical.

Figure 9. These images illustrate the value in grading welfare alerting indices. Both of these mares have the same grading for physical impacts in Domain 1, based on the animal-based measure of a body condition score of 3/9. However, alerting indices suggest that: (a) this mare has a low risk of further welfare compromise (and a high likelihood of future improvement). This is because forage availability is good, it is the end of winter so forage quality and availability are likely to improve, and her yearling foal will soon be weaned, reducing nutritional demands on the mare. Accordingly, immediate intervention is not required, but body condition and forage availability should preferably be reassessed after another 6-12 weeks, as intervention may be required if there is no improvement in body condition; (b) conversely, this non-lactating mare has a high risk of further welfare compromise of increasing severity. This is because forage availability is poor and unlikely to improve, because it is the end of spring, and the mare is already in poor body condition despite the absence of additional nutritional demands from nursing a foal. In this case, therefore, the recommendation may be for immediate intervention, or closer monitoring with intervention if her body condition were to decrease below 3/9 within the following month. Images A.M. Harvey.

We therefore propose the use of an additional three-tiered scale for the overall grading of welfare alerting indices, representing 'no to low', 'moderate' and 'high' risk of further welfare compromise of increasing severity (Figure 9). Welfare alerting indices, interpreted in combination with welfare status (Stage 8), should enable recommendations to be made relating to: (i) whether any immediate intervention is required; (ii) whether further assessment or ongoing monitoring should be implemented, and what form that should take; and (iii) the point at which intervention would be required to ameliorate increasing welfare compromise, where the risk of further compromise occurring is high.

Concluding Remarks

The ten-stage protocol described here illustrates how the well-established Five Domains Model can be systematically applied to assess the welfare of individual free-roaming wild animals. This paper therefore forms a template for making such welfare assessments in free-roaming wild terrestrial species by applying the principles outlined here. Applying the Model to such animals will help to identify previously unrecognised features of poor and good welfare by more precisely characterising scientifically validated negative and positive mental experiences, and their evaluation, as opposed to the commonly used imprecise and non-specific descriptors such as 'suffering' and 'stress' [6].
Utilising qualitative grading allows the monitoring of the welfare status of animals in different circumstances and at different times, thus providing scientifically informed and evidence-based guidance for decisions to intervene or not, in addition to enabling assessment of responses to any interventions that are implemented. Nevertheless, it is important to recognise the limitations of the Model and its use in the assessment of wild animal welfare. Only specific indices and mental experiences that can be identified and interpreted can be assessed; there will be variable levels of confidence with which particular experiences may be inferred to be present in different circumstances, and differing precision with which each mental experience may be graded, as well as an inability to determine relative impacts of those different experiences on welfare status [51]. For some species, in some contexts, it may become evident that very few welfare indices can be assessed and interpreted, significantly hindering welfare assessments. However, this then highlights and identifies the knowledge gaps that need to be filled. As such, it provides a sound foundation for further research into the welfare of wild free-roaming animals.
2020-01-23T09:20:34.151Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "8d0910792344905ccf1bad9a3bedc30207fc50e9", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-2615/10/1/148/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "de3495c6ab87c009a7d7c12560f7b7da88bd26f3", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
253598660
pes2o/s2orc
v3-fos-license
Five years follow up of patient receiving prolonged mechanical ventilation: Data for a single center in Taiwan

Background

The National Association for Medical Direction of Respiratory Care recommended tracking 1-year survival rates (the most relevant outcome) in patients treated with prolonged mechanical ventilation. However, patients treated with prolonged mechanical ventilation had higher mortality rates within the first 2 years after weaning. More knowledge regarding long-term mortality would help patients, families, and clinicians choose appropriate interventions and make end-of-life decisions. In this investigation, we attempted to determine the rates of long-term mortality for all patients treated with prolonged mechanical ventilation over a period of 10 years.

Objective

The purpose of this investigation was to enhance the overall survival outcomes for patients receiving prolonged mechanical ventilation by identifying the factors affecting the 5-year mortality rates for these patients.

Design

Retrospective observational study.

Materials and methods

In this retrospective study, we explored the influential factors related to the overall survival outcomes of all patients treated with prolonged mechanical ventilation. We enrolled every individual admitted to the weaning unit between January 1, 2012, and December 31, 2016. The length of survival for each patient was estimated from admission to the weaning unit until death or December 31, 2021, whichever came first. We analyzed the data to investigate the survival time, mortality rates, and survival curves in these patients.

Results

Long-term follow-up information was gathered for 296 patients who received prolonged mechanical ventilation. There were better mean survival times in patients treated with prolonged mechanical ventilation with the following characteristics (in order): no comorbidities, tracheostomies, and intracranial hemorrhage. Successful weaning, receipt of tracheostomy, an age less than 75 years, and no comorbidities were associated with better long-term overall survival outcomes.

Conclusion

Prolonged mechanical ventilation patients had abysmal overall survival outcomes. Even though prolonged mechanical ventilation patients' long-term survival outcomes are tragic, medical professionals should never give up on the dream of enhancing long-term outcomes.

KEYWORDS

prolonged mechanical ventilation, respiratory care center, 5-year mortality rate, weaning unit, successful weaning

Background

A conference on the treatment and management of prolonged mechanical ventilation (PMV) patients was held in 2004 by the National Association for Medical Direction of Respiratory Care. The most relevant outcome, according to consensus, is the 1-year survival rate (1). Many PMV patients were discharged from the hospital but were again readmitted after a year. Some patients needed long-term ventilator support for over a year. According to the study conducted by Stoller, patients who were discharged from the weaning unit had a mortality rate that fell by 68% within the first 2 years and then fell more slowly after that (2). According to Aboussouan's study, 40% of PMV patients were still living by the second and third years after discharge (3).
The clinical course of PMV patients continues to progress after 1 year. The medical profession pays little attention to the long-term survival rates of PMV patients, and there is less information available on these patients' long-term mortality. The evaluation of the long-term outcome of patients undergoing PMV should take more than 1 year. In Taiwan, the comprehensive care program for ventilator-dependent patients encompasses mechanical ventilator care in four settings: the intensive care unit (the acute critical care stage), the respiratory care center (RCC), the respiratory care ward (RCW), and home care services (a stable period in which the patient is cared for directly by family caregivers or by nurses who work in nursing homes) (4).

In this study, throughout a 10-year follow-up period, we aimed to enhance the overall survival outcomes for patients receiving prolonged mechanical ventilation. We investigated the long-term mortality of all PMV patients at a single weaning unit in an acute care hospital by identifying the factors affecting the 5-year mortality rates for these patients. Our study on long-term mortality is significant because it advances our clinical knowledge of patients receiving PMV, which is useful for making decisions about their long-term care and end-of-life options.

Study design

In this retrospective study, we explored the influential factors related to the overall survival outcomes of all PMV patients. We enrolled every individual admitted to the weaning unit between January 1, 2012, and December 31, 2016. The length of survival for each patient was estimated from admission to the weaning unit until death or December 31, 2021, whichever came first. This means that every patient was followed up for at least 5 years and confirmed mortality data are available for a minimum of 5 years. We documented their age, sex, comorbidities, weaning status, receipt or non-receipt of tracheostomy, causes of respiratory failure leading to PMV, survival time, mortality rates, and long-term survival outcomes. Retrospective data collection from the patients' medical records was performed. We compared clinical variables and receipt or non-receipt of tracheostomy between PMV patients who survived < 5 years and those who survived ≥ 5 years. In addition, six survival curves were generated for all PMV patients, including the following: (1) successfully weaned PMV patients versus unsuccessfully weaned PMV patients; (2) receipt versus non-receipt of tracheostomy; (3) patients aged < 75 years versus those aged ≥ 75 years; (4) all PMV patients among the causes of respiratory failure leading to PMV; (5) all PMV patients among different medical comorbidities; (6) all PMV patients among the number of comorbidities.

Definitions and outcomes

We recently published six PMV articles in the literature. The hospital details, patient details, comorbidities, causes of acute respiratory failure leading to PMV, and eligibility criteria for RCC admission were the same as those in previous studies (5). Causes of death in PMV patients: all patients who had PMV had their causes of death compiled.

Figure 1. Clinical outcomes of 296 prolonged mechanical ventilation patients in the weaning unit of an acute care hospital. RCC: respiratory care center.
Causes of death in PMV patients: causes of death were compiled for all PMV patients. Our PMV patients had seven major causes of mortality: (1) pneumonia, as defined by the Infectious Diseases Society of America (6); (2) sepsis, with an infection etiology other than pneumonia; (3) respiratory failure, wherein the patient did not acquire pneumonia but experienced sputum impaction, respiratory distress, and hypoxemia; (4) sudden death, wherein the patient experienced a sudden onset of apnea and asystole; (5) cardiogenic shock, wherein the patient died due to underlying heart disease; (6) malignant disease, wherein the patient died due to underlying malignant disease; and (7) chronic obstructive pulmonary disease (COPD). We then gathered the pneumonia pathogens from PMV patients whose cause of death was pneumonia. Statistical analysis To assess differences in age, weaning status, tracheostomy receipt or non-receipt, causes of acute respiratory failure requiring PMV, medical comorbidities, and number of comorbidities, Student's t test was used for continuous variables, and Pearson's chi-square test and Fisher's exact test were used for categorical variables. Student's t test and one-way analysis of variance were performed to compare survival times. Mortality rates were compared using Fisher's exact test. Logistic regression was used to study the correlation of each variable with < 5-year and ≥ 5-year survival in PMV patients. The association of each variable between the two patient groups was first examined using univariate analysis; factors with a P value less than 0.05 in the univariate analysis were incorporated into the multivariate analysis to determine the impact of each variable on the two patient groups. In this study, six survival curves were generated using the Kaplan-Meier method to determine the cumulative likelihood of survival as a function of the number of months. The six survival curves for the patient groups were: (1) successfully weaned versus unsuccessfully weaned patients; (2) receipt versus non-receipt of tracheostomy; (3) patients aged < 75 years versus those aged ≥ 75 years; (4) PMV patients among the causes of respiratory failure leading to PMV; (5) PMV patients among different medical comorbidities; and (6) PMV patients among the number of comorbidities. The survival rates of the six sets of survival curves were compared using the log-rank test, and Cox proportional hazards models were used to quantify the associations underlying the six sets of survival curves. Results We were able to collect long-term follow-up information on 296 PMV patients over the course of the 10-year research period. Of these, 189 (63.9%) were men and 107 (36.1%) were women. Mean survival times differed significantly among the following patient groups: (1) successfully weaned and unsuccessfully weaned patients (P < 0.001); (2) receipt or non-receipt of tracheostomy (P < 0.001); (3) age < 75 years and ≥ 75 years (P < 0.001); (4) PMV patients among the causes of respiratory failure leading to PMV (P = 0.045) (mainly related to ICH patients); and (5) PMV patients among the number of comorbidities (P < 0.001). No differences in survival time were found among PMV patients with different medical comorbidities.
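As an illustration of the survival analysis workflow just described (Kaplan-Meier curves, the log-rank test, and Cox proportional hazards models), the following minimal sketch uses the Python lifelines library on a hypothetical patient table; the file name and the column names (survival_months, death_observed, weaned, and the covariates) are invented for illustration and are not from the study's actual dataset.

```python
# Minimal sketch of the survival analysis described above (illustrative only).
# Assumes a hypothetical CSV with one row per PMV patient; column names are invented.
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

df = pd.read_csv("pmv_patients.csv")  # hypothetical file

# Kaplan-Meier curves: cumulative survival probability as a function of months.
kmf = KaplanMeierFitter()
groups = {}
for label, sub in df.groupby("weaned"):  # 1 = successfully weaned, 0 = not
    kmf.fit(sub["survival_months"], event_observed=sub["death_observed"],
            label=f"weaned={label}")
    groups[label] = sub
    kmf.plot_survival_function()

# Log-rank test comparing the two curves.
res = logrank_test(groups[1]["survival_months"], groups[0]["survival_months"],
                   event_observed_A=groups[1]["death_observed"],
                   event_observed_B=groups[0]["death_observed"])
print("log-rank p-value:", res.p_value)

# Cox proportional hazards model linking covariates to the hazard of death.
cph = CoxPHFitter()
cph.fit(df[["survival_months", "death_observed", "weaned", "age_ge_75",
            "tracheostomy", "no_comorbidity"]],
        duration_col="survival_months", event_col="death_observed")
cph.print_summary()  # hazard ratios with 95% CIs, as reported in the paper
```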
With regard to the mortality rates of PMV patients (Table 2), the 3-month, 6-month, 1-year, 3-year, and 5-year mortality rates varied statistically significantly among the following patient categories: (1) successfully weaned and unsuccessfully weaned patients; (2) receipt or non-receipt of tracheostomy; (3) aged < 75 years and aged ≥ 75 years; and (4) PMV patients among the number of comorbidities. Among the causes of respiratory failure leading to PMV, there were statistically significant differences only in the 3-year and 5-year mortality rates (mainly related to ICH patients). No differences in mortality rate were found among PMV patients with different medical comorbidities. When mortality was analyzed by discharge status (Table 3), discharged PMV patients had the best 1-year mortality rate, 3-year mortality rate, 5-year mortality rate, and mean survival time. According to age cohorts, the 1-year mortality rate was 50% in the youngest cohort (< 45 years) and 92.1% in the oldest cohort (≥ 85 years) (P < 0.001). The 3-year mortality rate was 66.7% in the youngest cohort (< 45 years) and 100% in the oldest cohort (≥ 85 years) (P < 0.001). The 5-year mortality rate was 66.7% in the youngest cohort (< 45 years) and 100% in the oldest cohort (≥ 85 years) (P < 0.001). Patients aged over 75 years exhibited significantly shorter mean survival times (Table 4). Univariate analysis of 5-year survival rates revealed statistically significant differences between patients who survived < 5 years and those who survived ≥ 5 years with regard to age ≥ 75 years, successful weaning, no comorbidities, receipt or non-receipt of tracheostomy, pneumonia, and ICH (Table 5). Multivariate analysis revealed that those with successful weaning, those with no comorbidities, and those who received tracheostomy had better 5-year survival rates, whereas those aged ≥ 75 years had poorer 5-year survival rates (Table 6). Kaplan-Meier survival curves for successfully weaned versus unsuccessfully weaned patients, patients who received or did not receive tracheostomy, patients aged < 75 years versus patients aged ≥ 75 years, PMV patients among the causes of respiratory failure leading to PMV, PMV patients among different medical comorbidities, and PMV patients among the number of comorbidities are illustrated in Figure 2 and the subsequent figures. Discussion No published article has explored the survival time of PMV patients. Our study showed that successfully weaned PMV patients, PMV patients undergoing tracheostomy, patients aged less than 75 years, ICH patients, and patients without comorbidities had better survival times. In addition, the 5-year mortality rates of patients receiving PMV have not been investigated extensively. The pooled mortality between 2 and 4 years among acute care hospital weaning units was 56% (range 4-66%) (7). Stoller et al. showed that the 1-year, 3-year, and 5-year mortality rates for 162 PMV patients were 57, 73, and 81%, respectively; younger age was significantly associated with longer survival (2). Our series showed worse 1-year, 3-year, and 5-year mortality rates than the series in the literature.
Our study showed that patients who were successfully weaned from PMV, received a tracheostomy, were less than 75 years old, and had no comorbidities had lower 1-year, 3-year, and 5-year mortality rates. The differences between PMV patients who survived more than 5 years and those who survived less than 5 years have not been investigated in the literature. Our research demonstrated that factors such as age under 75 years, the absence of comorbidities, successful weaning from PMV, and tracheostomy placement contributed to survival of more than 5 years in PMV patients. Studies indicate that older age, failure to wean, four or more comorbidities, and end-stage renal disease (ESRD) comorbidity were associated with poor 1-year survival rates in PMV patients (2, 8-14). No comorbidities and successful weaning were two factors related to a better 1-year survival rate; the influential factors for 1-year and 5-year survival rates were thus similar, except for tracheostomy. With regard to tracheostomy, in a study by Engoren et al., patients liberated from mechanical ventilation and discharged from the hospital with a tracheostomy had worse long-term outcomes (15). In a study by Warnke et al., successfully weaned PMV patients with a closed tracheostomy had a higher survival rate than patients with a permanent tracheostomy (16). Successful weaning from PMV and decannulation of the tracheostomy after discharge from the hospital thus imply better long-term outcomes. [Figure: Kaplan-Meier curves of PMV patients aged < 75 years and aged ≥ 75 years. In Cox proportional hazards regression analyses of the 296 PMV patients, age < 75 years was correlated with a 37.9% reduction in the risk of death (P < 0.001; hazard ratio (HR) = 0.621; 95% CI 0.484-0.797).] [Figure: Kaplan-Meier curves of PMV patients among different medical comorbidities; no statistically significant differences were found.] Family members, however, often fear that the procedure will leave a wound on the patient's neck and that a permanent tracheostomy will never be removed, leaving the patient bedridden for the rest of their lives. As a result, relatives may think it preferable to let the patient experience the adverse effects and suffering that follow endotracheal tube intubation than to permit tracheostomy (19). According to Taiwan's Clinical Performance Indicators data, 39% of PMV patients at medical center RCCs underwent tracheostomies. Compared to the US, Taiwan has a lower proportion of PMV patients who undergo tracheostomy. Our previous study showed that only 37 PMV patients (9.7%) underwent tracheostomy during a 3-year period (19). The clinical situation of PMV patients receiving or not receiving tracheostomy in Taiwan is different from that in Western countries. Most PMV patients who underwent tracheostomy did so because they needed a permanent tracheostomy after discharge from the hospital. In actuality, a large number of PMV patients require a permanent tracheostomy but do not receive one. Therefore, receiving tracheostomy positively influences the long-term survival outcomes of PMV patients in Taiwan. Furthermore, compared to Taiwanese medical centers, our hospital has a low rate of tracheostomy. Medical centers are located in cities such as Taipei, Taichung, Kaohsiung, and Tainan.
In cities, the majority of people have high levels of education and living standards. Our hospital is located in a remote community where the majority of the population is elderly. Older generations believed that when someone passes away, the body should be free of any wounds. Family members are therefore opposed to the treatment because they do not want their loved ones to have a permanent tracheostomy wound in their necks. With regard to survival curves, our study showed that successfully weaned PMV patients, PMV patients who received tracheostomy, patients aged less than 75 years, patients with no comorbidities, and ICH PMV patients had better overall survival outcomes. The survival curve analysis confirmed the findings on the influential factors related to overall survival outcomes in PMV patients. Pneumonia was the leading cause of death in our PMV patients. Carbapenem-resistant gram-negative bacilli accounted for 59.6% of known pneumonia pathogens, especially carbapenem-resistant Acinetobacter baumannii. In a study by Chien, prior exposure to piperacillin/tazobactam and imipenem was found to be strongly linked with the rise in multidrug-resistant Acinetobacter baumannii, and in patients receiving PMV, multidrug-resistant microorganism pneumonia was linked to an elevated 6-month mortality rate (20). Another Taiwanese study found that prior use of imipenem, meropenem, piperacillin/tazobactam, or fourth-generation cephalosporins was an independent risk factor for extensively drug-resistant Acinetobacter baumannii infections in hospitals (21). For PMV patients, carbapenem-resistant gram-negative bacilli infection is a potentially lethal problem, so we must use broad-spectrum antibiotics wisely, notably through the stringent use of carbapenems in our hospital. Limitations The long-term survival outcomes of PMV patients were the focus of this study. Acute Physiology and Chronic Health Evaluation II scores, laboratory data, respiratory measurements, and other similarly pertinent characteristics were not gathered from the patients. As a result, we were unable to identify which of these metrics, if any, was associated with the long-term survival of patients who underwent PMV. Laboratory data and Acute Physiology and Chronic Health Evaluation II scores are crucial for understanding the acute critical stage of PMV patients but less crucial for understanding their long-term survival outcomes. The respiratory parameters are connected to a patient's ability to wean from PMV. Few, if any, previous publications have thoroughly covered the long-term mortality rate of PMV patients, so our study's findings can provide the medical community with advanced long-term survival outcomes for PMV patients. Because this study was retrospective in nature and conducted at a single weaning center, it is important to interpret the findings about the long-term survival outcomes of PMV patients with caution; they may not be applicable to other weaning units. A better understanding of long-term outcomes is needed, and efforts to improve survival are urgently needed. Randomized controlled trials are needed to compensate for the shortcomings associated with a single-center retrospective study. Conclusion The 5-year mortality rates of PMV patients were significantly dependent on successful weaning, receipt of tracheostomy, age less than 75 years, and no comorbidities.
According to our study, successful weaning was linked to a 51.6% reduction in the risk of death, tracheostomy placement to a 41.2% reduction, age younger than 75 years to a 37.9% reduction, and the absence of comorbidities to a 60.5% reduction. For PMV patients, carbapenem-resistant gram-negative bacilli infection is a potentially lethal problem, so we must use broad-spectrum antibiotics wisely, notably through the stringent use of carbapenems in our hospital. Despite the tragic long-term survival outcomes of PMV patients, clinicians should never give up on the dream of improving long-term outcomes. Data availability statement The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation. Ethics statement The studies involving human participants were reviewed and approved by the Buddhist Dalin Tzu Chi Hospital Research Ethics Committee (Approved IRB No. B10802009). Written informed consent for participation was not required for this study.
2022-11-18T15:02:44.314Z
2022-11-18T00:00:00.000
{ "year": 2022, "sha1": "91d0e796b63354a65a76cf1f5107ac78045c2e49", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Frontier", "pdf_hash": "91d0e796b63354a65a76cf1f5107ac78045c2e49", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
250691774
pes2o/s2orc
v3-fos-license
Properties of light quarks from lattice QCD simulations (presented by C. Bernard) By numerical study of the simple bound states of light quarks, in particular the π and K mesons, we are able to deduce fundamental quark properties. Using the "improved staggered" discretization of QCD, the MILC Collaboration has performed a series of simulations of these bound states, including the effects of virtual quark-antiquark pairs ("sea" quarks). From these simulations, we have determined the masses of the up, down, and strange quarks. We find that the up quark mass is not zero (at the 10 sigma level), putting to rest a twenty-year-old suggestion that the up quark could be massless. Further, by studying the decays of the π and K mesons, we are able to determine the "CKM matrix element" V_us of the Weak Interactions. The errors on our result for V_us are comparable to the best previous determinations using alternative theoretical approaches, and are likely to be significantly reduced by simulations now in progress. Introduction Quantum Chromodynamics (QCD) is the theory of the Strong Interactions, which are responsible for binding quarks into protons and neutrons and holding them together in the atomic nucleus. QCD describes quarks interacting through the exchange of gluons. At high energy or short distances (much less than the radius of a proton), QCD becomes weakly coupled: The coupling constant (the QCD analogue of electric charge) becomes small. This property of QCD is called "asymptotic freedom." Gross, Politzer, and Wilczek received the 2004 Nobel Prize in Physics for its discovery. Short-distance QCD is well described by a perturbative expansion in the small coupling constant. The flip side of asymptotic freedom is that QCD becomes strongly coupled at long distances (or low energy). This is responsible for the property called "confinement": Quarks cannot be observed separately from their low-energy bound states (hadrons). The large coupling constant also implies that the properties of quarks in bound states cannot be studied by perturbative methods. Nonperturbative, numerical simulations by the methods of lattice gauge theory are needed. Such simulations can determine the fundamental parameters of QCD (quark masses and the coupling constant) and then predict the properties of hadrons. The basic steps in the numerical simulations are: 1. Replace continuous space and time by a discrete set of points (the lattice), separated by lattice spacing a. 2. Discretize the field equations of quarks and gluons on the lattice. The number of degrees of freedom of the fields is then finite (in a finite volume). 3. Generate "typical" lattices (configurations) of gluon fields using Monte Carlo methods. The back-effect of quarks on the gluons is included. Numerical limitations imply that the masses of the lightest (up and down) quarks in this step (and the next one) must be taken to be heavier than in the real world. 4. Compute the quark propagators through background gluon lattices (this requires a sparse matrix inversion). A "propagator" is the amplitude for a quark to move between space-time points. 5. Combine the quark propagators to find bound-state (hadron) propagators. 6. Analyze the hadron propagators to find the properties (e.g., masses) of the hadrons. 7. Extrapolate the hadronic properties to the physical regime: (a) Extrapolate to lower masses of the up and down quarks: the "chiral extrapolation." (b) Extrapolate the volume of space-time → ∞: the "infinite volume extrapolation."
(c) Extrapolate the lattice spacing a → 0: the "continuum extrapolation." 8. Comparing some computed hadron masses to experimentally known masses determines the physical values of the quark masses and the strong coupling constant. Once the quark masses and coupling constant are known, we can make predictions for hadronic properties such as decay amplitudes or masses of other hadrons. As an example, consider a π⁺ meson, which is made up of an up (u) and an anti-down (d̄) quark. These two quarks are called the "valence quarks" in the π⁺; it is their propagators that we calculate in step 4 above. Putting the two valence quark propagators together and averaging over gluon backgrounds computes the effects of gluon exchange between the quarks. The exchanged gluons, in turn, can interact with virtual quark-antiquark pairs, which, by the laws of quantum mechanics, are always popping in and out of existence in the vacuum. Such quark-antiquark pairs are known as "sea quarks," and they are responsible for the back-effect of quarks on gluons mentioned in step 3 above. Including the sea-quark effects is the step that is the most demanding of computer resources. It requires computing the determinant (or at least the change of the determinant) of a large matrix. For many years, the determinant was so computationally daunting that it was simply left out: a brutalization of the theory known as the "quenched approximation." This stumbling block has now been overcome: The solution we implement involves a discretization of the quark field equations that is very fast ("staggered quarks" [1]), the addition of terms to the equations to reduce discretization artifacts on gluons and quarks [2] ("improved staggered quarks"), and an efficient algorithm for computing sea-quark effects [3]. Improved staggered quarks allow one to include sea-quark effects without losing control of the systematic errors due to the chiral, infinite volume, and continuum extrapolations. Using these quarks, the MILC Collaboration [4,5] has been generating lattices including the effects of the three relevant sea-quark "flavors": u, d, s (up, down, strange). The project began in 1999, but the pace of these simulations has been sped up greatly by application of SciDAC resources. MILC makes its lattices publicly available: http://qcd.nersc.gov/ In 2003 a lattice QCD milestone was reached. MILC joined with the Fermilab, HPQCD, and UKQCD groups to show that a wide range of simple quantities could be computed with high accuracy (1-3% errors) in lattice QCD using the MILC lattices [6]. Details of the Calculation The computation described here is based on two sets of MILC lattices: • "coarse" runs with lattice spacing a ≈ 0.125 fm and a wide range of sea-quark masses, with lowest average up and down quark mass m̄ ∼ 10 MeV, about 3 times the physical value. • "fine" runs with a ≈ 0.09 fm and lowest m̄ ∼ 15 MeV, about 5 times the physical value. Extensions in progress include a fine run with m̄ ∼ 10 MeV, which is nearly half finished, and a "super fine" set with a ≈ 0.06 fm and lowest m̄ ∼ 10 MeV. The super fine run will begin in earnest once the DOE QCDOC comes on line. All the above lattices have volume ≥ (2.5 fm)³. We compute valence-quark propagators with many different quark mass values for every sea-quark mass choice. This procedure allows us to get the maximum amount of information out of the gluon lattices, which are so expensive to generate when sea-quark back-effects are included.
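To make steps 5 and 6 of the outline above concrete, here is a minimal sketch of extracting a meson mass from a bound-state propagator via an effective-mass plateau. The large-time exponential falloff C(t) ≈ A e^(−mt) is the standard behavior; the numbers below are synthetic stand-ins, not MILC data, and a real analysis must also handle periodic boundary effects and correlated errors.

```python
# Minimal sketch of extracting a hadron mass from a bound-state propagator
# (steps 5-6 in the outline above). The "data" below are synthetic, not MILC data.
import numpy as np

T = 32                        # number of lattice time slices (illustrative)
m_true, A = 0.38, 1.0e3       # mass in lattice units and amplitude (invented)
t = np.arange(T)
rng = np.random.default_rng(0)
# At large times a hadron propagator decays as C(t) ~ A * exp(-m t); add small noise.
C = A * np.exp(-m_true * t) * (1.0 + 0.01 * rng.standard_normal(T))

# Effective mass: m_eff(t) = log(C(t) / C(t+1)) approaches m at large t.
m_eff = np.log(C[:-1] / C[1:])
plateau = m_eff[10:25].mean()          # average over a plateau region
print(f"effective-mass plateau ~ {plateau:.4f} (true value {m_true})")
```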
At the end of the calculation, we set valence and sea mass values equal and recover the true theory, namely "full QCD." The valence-quark masses are called m_x and m_y. We take the masses of the u and d sea quarks equal, which is a good approximation; the sea-quark masses in the simulation are called m̄ ≡ m_u = m_d and m_s. After the simulations are performed, we interpolate m_s to its physical value, while we extrapolate m̄ to its physical value (m_u + m_d)/2. For the valence-quark masses, the extrapolation/interpolation depends on which bound state we are studying. For the π⁺, we could extrapolate m_x → m_u and m_y → m_d; in practice, however, taking both m_x and m_y to m̄ gives an "average" π meson, called π̄, whose mass is very close to the π⁺ mass (up to electromagnetic corrections). For a K⁺ (K⁰) meson, a bound state of a u (d) and an s̄, we extrapolate m_x → m_u (m_d) and interpolate m_y → m_s. In the K system, it is convenient to look first at a fictitious "average" K meson, called K̄, with mass squared equal to the average mass squared of the K⁺ and K⁰. For a K̄, we extrapolate m_x → m̄ and interpolate m_y → m_s. The errors in the chiral extrapolations/interpolations can be controlled if we know the functional form of the mass dependence. In the continuum, the functional form is given by an effective field theory called "chiral perturbation theory" (χPT). The form of the mass dependence of many interesting quantities has been calculated; see, e.g., Ref. [7]. On the lattice, the errors introduced by discretization modify the formulas of χPT. For staggered quarks, the modified form of χPT has been worked out [8]: It is called "staggered chiral perturbation theory" (SχPT). SχPT also gives the leading corrections from finite volume. Further, since SχPT includes discretization errors, it helps control the extrapolation to the continuum. We use SχPT for our fits to lattice data. All data shown below have already been corrected for finite volume effects with SχPT. Figures 1 and 2 present the analysis that determines the quark masses. The symbols ×, ⋄, •, and □ show a small subset of our lattice data, with errors too small to be visible. The ⋄ and □ are π̄ points, with valence masses m_y = m_x, while the × and • are K̄ points, with m_y held fixed to one of three different values, giving three sets of points for each symbol. In Fig. 1, the dark solid lines are the result of a single SχPT fit to all the data, with confidence level CL = 0.28. Setting valence and sea quark masses equal and extrapolating to the continuum gives the lighter solid lines. We then adjust m_s, the simulated mass of the strange quark, until both the K̄ and the π̄ hit their physical masses at the same value of m_x. This gives the two dotted lines, and from them we get a determination of m_s and the average u, d mass, m̄. To obtain m_u and m_d separately, we continue the upper dotted line in Fig. 1 until the mass of the K⁺ is reached. The region of the continuation is magnified in Fig. 2. This is an extrapolation in the valence mass m_x, with the other masses (sea masses m̄, m_s and valence mass m_y = m_s) fixed at their physical values. The up quark mass m_u is the value of m_x that gives the K⁺ its experimental mass. In principle, the up and down sea quark masses should also be adjusted away from their average m̄, but we cannot do this since all our simulations have m_u = m_d = m̄. However, the error introduced can be shown to be negligible.
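For orientation, the following sketch mimics, at lowest order only, the kind of chiral fit described above: the squared meson mass is fit to the leading-order form M² ≈ B(m_x + m_y) and then inverted to find the quark-mass sum that reproduces a target physical mass. The actual analysis uses the full SχPT forms with chiral logarithms and staggered discretization terms; all numbers here are invented.

```python
# Minimal sketch of the quark-mass determination (illustrative, leading order only).
# The paper fits full SChPT forms; here we use only the lowest-order relation
# M_pi^2 ~ B * (m_x + m_y), with synthetic "lattice" data in arbitrary units.
import numpy as np
from scipy.optimize import curve_fit

def m2_lo(mass_sum, B):
    # Leading-order dependence of the squared meson mass
    # on the sum of its valence-quark masses.
    return B * mass_sum

# Synthetic data: squared meson masses at several valence-mass sums (invented).
msum = np.array([0.02, 0.04, 0.06, 0.08, 0.10])
rng = np.random.default_rng(1)
m2 = 2.1 * msum * (1.0 + 0.02 * rng.standard_normal(5))

(B_fit,), _ = curve_fit(m2_lo, msum, m2)

# Invert the fit: the quark-mass sum that reproduces a target "physical" mass^2.
m2_phys = 0.018                      # hypothetical physical point (illustrative)
msum_phys = m2_phys / B_fit
print(f"B = {B_fit:.3f}; quark-mass sum at the physical point = {msum_phys:.4f}")
```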
The "decay constant" of the K or π meson, f K or f π , is the "wave function at the origin": It determines the probability that the two quarks in the bound state are close. When they are close, the quarks can annihilate by the Weak Interactions, ultimately producing a muon (µ) (or electron) and a neutrino (ν). If f K and f π are computed in QCD, then the experimentally measured decay rates for K → µν and π → µν determine parameters of the Weak Interactions: "CKM matrix elements" V us and V ud . Knowing V us , V ud and other CKM matrix elements is crucial to testing the Standard Model of particle physics and searching for new physics. Figure 3 shows f π vs. the sum of valence masses m x +m y , in units of r 1 , a known length scale. Coarse lattice values of the quark masses have been adjusted by a calculated renormalization constant, Z m , to correspond to those of the the fine lattices. The five sets of 3's are all coarse lattice points with different sea quark massm ; the two sets of 2's are fine lattice points with differentm . The lines through the points are the results of the same S χ PT fit that was shown in Fig. 1. Data for masses and decay constants are fit simultaneously to give better numerical control. The dotted line is the result after extrapolating to the continuum, setting valence and sea quark masses equal ("full QCD") and adjusting m s to its physical value m s . The • shows our result for f π after extrapolation to 2m, the physical value of m x + m y for a π meson. The experimental point shown assumes an alternative determination of V ud ; the agreement of our lattice result with experiment is excellent. Results and Outlook Our results for quark masses [9,5] (in MS scheme at scale 2 GeV) and decay constants [5] are: The first two errors in each case are from statistics and lattice systematics; while the additional errors on the masses are from perturbation theory and electromagnetic effects, respectively. The result for m u /m d rules out, at the 10σ level, the possibility of m u = 0. This puts to rest a longstanding proposal [10] that the up quark could be massless. A massless up quark could have solved the "Strong CP Puzzle" [11]. Alternative solutions are now more likely: e.g., the "axion" [12], a possible component of Dark Matter. Our result for f K /f π implies |V us | = 0.2219(26), which is consistent with the world average value |V us | = 0.2200(26) [13] from alternative methods. Runs planned for the near future, as well as those now in progress, should reduce the error on our result for |V us |, making it significantly more precise than the current world-average determination. Similar methods can be used to study mesons with one heavy (charm or bottom) and one light quark. Such studies, now in progress in collaboration with the Fermilab and HPQCD groups, promise to give a wealth of information about other crucial CKM matrix elements. Finally, we note that there is still a theoretical issue with the use of staggered quarks in this context. However, the agreement of existing results with experiment, as well as a growing body of direct studies of the issue [14], give us confidence that no fundamental problem exists. Further studies, by us and others, are in progress.
2022-06-28T02:30:04.259Z
2005-01-01T00:00:00.000
{ "year": 2005, "sha1": "a9a58828a21b778298421b18d468bdd43083e008", "oa_license": null, "oa_url": "http://iopscience.iop.org/article/10.1088/1742-6596/16/1/020/pdf", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "a9a58828a21b778298421b18d468bdd43083e008", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
227290512
pes2o/s2orc
v3-fos-license
A Research Agenda for the Massage Therapy Profession: a Report from the Massage Therapy Foundation The Massage Therapy Foundation (MTF) serves as a primary steward of massage therapy research, working to fund and advance the science and art of massage therapy for the entire massage community. The development of an updated research agenda is an essential part of furthering the MTF's responsibility to help grow the massage therapy knowledge base and increase support for the application of quality research. Integrative health and massage community stakeholders are called upon to help move this MTF 2020 Massage Therapy Research Agenda forward. Together we must strive to continue to advance and disseminate new knowledge to all stakeholders, including practitioners, students, instructors, researchers, and policy makers. INTRODUCTION Massage therapists are part of a broad community of allied health-care providers. Member groups within this community develop and maintain specific research agendas and strategic plans to guide the development of the knowledge base that serves as the foundation of practice for their profession (1-3). The Massage Therapy Foundation (MTF) is a primary steward of massage therapy research, having funded studies to advance the science and art of massage therapy over the past 30 years. The first massage therapy research agenda was developed by the MTF in 1991 and outlined five key goals: 1) Build a research infrastructure within the massage therapy profession; 2) Fund research into the safety and efficacy of massage therapy; 3) Fund studies on physiological and other mechanisms by which massage therapy achieves its effects; 4) Fund studies stemming from a wellness paradigm; and 5) Fund studies on the profession of therapeutic massage. This effort, combined with the efforts of international massage organizations (2), facilitated the growth and development of the massage therapy profession in each of the goal areas and greatly expanded the knowledge base that now supports the science-informed practice of therapeutic massage. The MTF evaluated the extensive progress and remaining knowledge gaps, then put together a team to develop a new research agenda to guide and advance the therapeutic massage knowledge base into the future. Process Description The MTF 2020 Massage Therapy Research Agenda was developed by a team of experienced researchers, massage therapists, business owners, administrators, educators, and other allied health-care providers. The agenda was also developed to align with the United States National Institutes of Health's National Center for Complementary and Integrative Health (NCCIH) Strategic Framework (3), presenting a unified strategy for advancing the profession. The MTF 2020 Massage Therapy Research Agenda The MTF 2020 Massage Therapy Research Agenda has four key objectives, each with specific goals (Figure 1). The agenda aspires to support the work of every member of the massage therapy community. Following each objective is a section called "Moving the Agenda Forward", with a few ideas on how researchers, practitioners, educators, students, and organizations can become involved in progressing the agenda and the profession. MTF: Prioritize funding for research studies to advance understanding in these key areas. MTF: Seek out and fund research studies to discover massage methods that improve health-related care. Support massage practice-based networks conducting "real world" research.
Researchers: Develop longitudinal studies to assess massage for disease prevention and condition management. Practitioners: Work to help ensure access to massage therapy for underrepresented populations. Students: Volunteer for programs providing massage for diverse populations or promoting wellness for populations under stress (e.g., health-care providers). Educators: Ensure that courses include research information on wellness and prevention for all populations. Enhance efforts to recruit students who are underrepresented in massage therapy (e.g., men, individuals who are Black or African American, Native American, and those of Hispanic/Latinx ethnicity). Develop and assess massage education curricula that include content areas addressing the social determinants of population health, health equity, and cultural humility. Clients/Patients: Document the impact of massage therapy treatment over time. Organizations: Make massage therapy services available to patients, clients, and employees. Partner with research organizations to improve the massage knowledge base in areas that benefit the organization and advance knowledge about the effectiveness and return on investment of massage therapy interventions. Support large, longitudinal studies on prevention. Objective 4: Support the Establishment and Continuation of Educational Research Focused on Massage Therapy Pedagogy/Andragogy Goals: 1. Determine best practices to convey information to learners of diverse cultures and educational backgrounds. 2. Assess best practices for delivery of therapeutic massage and bodywork to support career longevity and wellness. Volunteer as a participant in massage therapy research. Organizations: Add massage therapy care for your patients and partner with researchers to improve knowledge on massage therapy interventions for specific conditions. Objective 3: Foster Health Promotion, Cultivate Well-Being, and Support Disease Prevention Goals: 1. Investigate the safety, efficacy, cost-effectiveness, and mechanisms of action of therapeutic massage and bodywork compared to, and in conjunction with, standard clinical practice in supporting health resilience and physical and mental well-being across the lifespan. 2. Investigate the effects of inclusion of therapeutic massage and bodywork in interdisciplinary health-care settings on improving health-care resource management and patient outcomes. 3. Study the effectiveness of therapeutic massage and bodywork in promoting health and wellness among diverse populations over the short term and across the lifespan. 4. Support the enhancement of health equity and access to care in marginalized populations. 5. Engage in studies to assess the effects of regular therapeutic massage and bodywork sessions on a "healthy" population. 6. Explore research opportunities to study and assess the safety, efficacy, and cost-effectiveness of therapeutic massage and bodywork in nonclinical settings such as community and employer-based wellness programs. a. Examine the potential for regular therapeutic massage and bodywork sessions to impact the frequency, duration, and associated cost of injury and illness. b. Examine the impact of therapeutic massage and bodywork on worker satisfaction, mental health, and satisfaction at home.
2020-12-06T05:03:25.243Z
2020-09-03T00:00:00.000
{ "year": 2020, "sha1": "cca77031a7e4f03a0edb071c22962a65e63a1e74", "oa_license": "CCBYNCND", "oa_url": "https://ijtmb.org/index.php/ijtmb/article/download/595/645", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "cca77031a7e4f03a0edb071c22962a65e63a1e74", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
219970501
pes2o/s2orc
v3-fos-license
Multiple Genes of Symbiotic Plasmid and Chromosome in Type II Peanut Bradyrhizobium Strains Corresponding to the Incompatible Symbiosis With Vigna radiata Rhizobia are capable of establishing compatible symbiosis with their hosts of origin and with plants in the cross-nodulation group to which those hosts belong. However, different from the normal peanut Bradyrhizobium (Type I strains), the Type II strains show incompatible symbiosis with Vigna radiata. Here, we employed transposon mutagenesis to identify the genetic loci related to this incompatibility in the Type II strain CCBAU 53363. As a result, seven Tn5 transposon insertion mutants showed an increase in nodule number on V. radiata. Sequencing analysis of the sequences flanking the Tn5 insertions showed that six mutants were located in the chromosome of CCBAU 53363, encoding an acyltransferase (L265) and a hypothetical protein (L615), both unique to CCBAU 53363; two hypothetical proteins (L4 and L82); a tripartite tricarboxylate transporter substrate-binding protein (L373); and a sulfur oxidation c-type cytochrome SoxA (L646), while one mutant was located in the symbiotic plasmid, encoding alanine dehydrogenase (L147). Significant differences were observed in the L147 gene sequences and the deduced protein 3D structures between the Type II strains (gene in the symbiotic plasmid) and Type I strains (gene in the chromosome). Conversely, strains of both types shared high homologies in the chromosome genes L373 and L646 and in their protein 3D structures. These data indicated that the symbiotic plasmid gene in Type II strains might directly affect their symbiotic incompatibility, whereas the chromosome genes might be indirectly involved in this process by regulating the plasmid symbiosis genes. The seven genes provide an initial explanation for the complexity of symbiotic incompatibility. INTRODUCTION Symbiotic relationships between legume plants and soil bacteria, collectively termed rhizobia, are characterized by the formation of root nodules, a specialized plant organ in which rhizobia differentiate into nitrogen-fixing bacteroids and reduce nitrogen to ammonia as a nutrient for the plant. In exchange, plants provide a specialized environment and carbohydrates to the rhizobia (Krishnan et al., 2003; Nguyen et al., 2017). The association between legumes and rhizobia is highly specific, meaning that each rhizobial species establishes symbiosis with only a limited set of host plants and vice versa; this specificity led to the definition of cross-nodulation groups, which are used to describe symbiotic diversity and rhizobial species (Yang et al., 2010). The symbiotic specificity is determined by a fine-tuned exchange of molecular signals between the host plant and its bacterial symbiont (Perret et al., 2000). Rhizobial specificity-related factors, such as NodD, exopolysaccharides, lipopolysaccharides, secreted proteins, and Nod factors, have been reported to affect nodulation and host specificity (Radutoiu et al., 2007; Okazaki et al., 2013). Mutations in the related genes can cause incompatible symbiosis between rhizobia and legumes, manifested as a rhizobium being unable to nodulate a particular host plant or forming nodules that are incapable of fixing nitrogen (Faruque et al., 2015; Wang et al., 2018).
This incompatible relationship takes place at the early stages of the interaction and has been demonstrated to result from signal exchange between the host plants and bacteria, which is the molecular basis of the recognition mechanisms that evolved in the process of coadaptation (Tang et al., 2016; Fan et al., 2017). This phenomenon also frequently occurs at the later stages of nodule development, causing differences in nitrogen-fixing efficiency between various plant-bacterium combinations (Yang et al., 2017). Studies have shown that genes with different functions participate in the control of incompatible symbiosis between rhizobia and plants. In Bradyrhizobium elkanii USDA 61, the type III secretion system (T3SS) participates in its incompatible symbiosis with the Vigna radiata plant (Nguyen et al., 2017). In Bradyrhizobium diazoefficiens USDA 110, metabolic pathways, transporters, chemotaxis, and motility negatively influence the nodulation with Glycine max (host of origin) and Sophora flavescens (incompatible host) (Liu et al., 2018a). Cell surface exopolysaccharides (EPS) in Sinorhizobium meliloti and lipopolysaccharide (LPS) in Mesorhizobium loti 2231 were reported to affect the incompatible symbiosis with Medicago sativa (Barnett and Long, 2018) and Lotus corniculatus (Turska-Szewczuk et al., 2008), respectively. It is generally believed that peanut (Arachis hypogaea L.) and mung bean (V. radiata) belong to the same cross-nodulation group; therefore, peanut bradyrhizobia should have the ability to establish effective symbiosis with V. radiata (Zhang et al., 2011; Li et al., 2019). However, our previous study revealed that the majority of peanut bradyrhizobia (Type I) could establish normal symbiosis with V. radiata, whereas a minority of strains (Type II) showed incompatible symbiosis with the same plant, and all the Type II strains contained a symbiotic plasmid (Li, 2019). In detail, Type I strains formed numerous efficient nodules, whereas Type II strains formed fewer, ineffective nodules with V. radiata (Li, 2019). Genotype-specific symbiotic compatibility in interactions between legumes and rhizobia is an important trait for the use of root nodule bacteria to improve crop yield (Triplett and Sadowsky, 1992). The incompatible symbiosis between the peanut rhizobia and V. radiata offers a valuable model for investigating the mechanisms involved in the symbiotic efficiency of rhizobia, which have not been clearly described to date. In order to understand the causes of the incompatible interaction between Type II strains and V. radiata, we performed the present study. A genetic approach of Tn5 transposon mutagenesis was taken with the Type II representative strain B. guangxiense CCBAU 53363 to construct a mutant library for screening the potential genes that regulate its effective nodulation on the V. radiata plant. The mutants with a compatible symbiotic phenotype with V. radiata were selected by nodulation experiments. Mutational analysis identified seven genes associated with the symbiotic incompatibility, and subsequently, the 3D structures of their predicted proteins were compared between the Type I and II strains. The results of this study improve our understanding of the mechanisms of symbiotic incompatibility in legume-rhizobium interactions. Tn5 Mutant Library and Positive Clone Screening A Tn5 insertion mutant library of CCBAU 53363 was built by the triparental conjugation method reported by Liu et al. (2018b), with some modifications.
Tn5 transposon was introduced into CCBAU 53363 (recipient) by conjugative transfer of the plasmid pRL1063a-2 (donor) with the help of plasmid pRK2013 (helper). Because of the low Tn5 transposition efficiency (1.25%) into the CCBAU 53363 chromosome when plasmid pRL1063a was used as the donor, pRL1063a-2 was constructed in this study by inserting the sacB gene (a sucrose-sensitivity gene) into the EcoRI site of pRL1063a by seamless cloning, downstream of the Tn5 transposon gene of pRL1063a (see the plasmid structure in Wolk et al., 1991). The plasmid pRL1063a-2 was then used as the donor in the triparental conjugation test, and the transposition efficiency significantly increased to 18%. After 4 days of triparental conjugation, transconjugants were selected on TY medium containing Tmp, Km, and sucrose. Colonies grown on plates were collected, washed with 0.8% NaCl solution, resuspended to a concentration of OD600 = 0.2, and inoculated onto V. radiata seedlings at a dose of 1 ml/plant. A total of 400 plants were grown in Leonard jars filled with vermiculite moistened with low-N nutrient solution (Vincent, 1970) at 25 °C in a greenhouse with a daylight illumination period of 12 h. Nodules were harvested at 30 days post-inoculation (dpi) and sterilized by washing sequentially with ethanol (95%, v/v) for 30 s, NaClO (2%, w/v) for 5 min, and sterile distilled water eight times. Each sterilized nodule was crushed in a sterilized tube, and the crude extract was streaked onto YMA plates supplemented with Tmp and Km. After being incubated for nearly 15 days at 28 °C, isolates were tested by PCR with two primer pairs, 415L/415R (inner primers of the Tn5 transposon) and T5BF/T5BR (external primers of the Tn5 transposon, designed on the basis of the sacB gene located downstream of the Tn5 transposon) (Table 1). The strains with a positive amplification reaction with primer pair 415L/415R and a negative reaction with T5BF/T5BR were identified as positive mutants. Screened positive mutants were verified by colony purification and nodulation validation two or three times in order to confirm the stability of their nodulation and nodule numbers with V. radiata. Mapping and Sequencing Analysis of Transposon Insertion Sites To identify the genes mutated by Tn5 insertion, the transposon insertion sites, including the mutated genes, were investigated. Knockout of Tn5-Transposon-Inserted Genes With the pJQ200SK Plasmid In order to exclude false positives of compatible nodulation resulting from polarity effects of the Tn5 transposon insertion mutation, knockout of the Tn5-transposon-inserted genes was conducted with the triparental conjugation method mentioned above (Liu et al., 2018b), with some modifications, in which the plasmid pRL1063a-2 was replaced with the reformed suicide plasmid pJQ200SK (donor), which enables homologous double-crossover recombination. For example, in order to knock out the Tn5-inserted L82 gene of CCBAU 53363, pJQ200SK-L82 was constructed using the described methods (Quandt and Hynes, 1993; Sha et al., 2001). First, the L82 gene with its upstream and downstream sequences was retrieved from the complete genome database of CCBAU 53363 using BioEdit and IGV 2.3, respectively (Li et al., 2015). Based on the obtained gene sequences, the two primer pairs L82-1F/L82-1R and L82-2F/L82-2R (Supplementary Table S1) were designed and used to amplify the upstream and downstream DNA fragments of the L82 gene, respectively, by PCR.
Second, the two fragments were connected to the SmaI restriction site of the suicide plasmid pJQ200SK by seamless cloning, and the constructed pJQ200SK-L82 was then transformed into competent cells of E. coli DH5α. Third, this plasmid was verified by PCR amplification with the primer pair M13F/L82-2R (Supplementary Table S1) to ensure that there was no point mutation in the two inserted fragments, and it was then used as the donor in the subsequent triparental conjugation experiment. During the triparental experiment, the constructed plasmid pJQ200SK-L82 (donor) was introduced into CCBAU 53363 (recipient) with the help of pRK2013 (helper). After triparental conjugation for 4 days, single-crossover transconjugants were selected on TY agar plates containing Gen and Tmp and verified by PCR amplification using the detection forward primer and M13R (L82-F/M13R). The successful single-crossover isolates were cultured in TY broth containing Tmp with agitation at 180 rpm for 5 days and subsequently plated on TY agar supplemented with Tmp and sucrose for double-crossover selection. Double-crossover transconjugants were verified by PCR amplification with external (L82SF/L82SR, positive) and internal (L82NF/L82SR, negative) PCR primers. Isolates were purified three times on TY agar with Tmp and sucrose. Symbiotic Phenotype Analysis Symbiotic phenotypes on V. radiata were tested by inoculating plants separately with the wild-type strain CCBAU 53363, the acquired Tn5-inserted mutants and gene knockout mutants, the Type I strain CCBAU 51778 (positive control), and 0.8% NaCl solution (negative control). V. radiata seeds were dipped for 1 min in 95% ethanol solution for surface dehydration and then sterilized in 2.5% (w/v) NaClO solution for 8 min. After being rinsed in sterile distilled water eight times, seeds were transferred onto 0.6% agar-water plates and germinated for 2 days at 28 °C. Seedlings in Leonard jars were inoculated with 1 ml of rhizobial suspension at a concentration of OD600 = 0.2. Plant chlorophyll content, shoot dry weight, nodule number, and nodule fresh weight were recorded for all treatments at 30 dpi (Jiao et al., 2015), and nitrogenase activity per plant was also measured for each treatment, with the exception of the Tn5-inserted mutant treatments. Each treatment consisted of 10 plants in triplicate. Data were processed with Duncan's t test (P = 0.05) in SPSS. Phylogenetic Analysis and Modeling of Proteins To understand the function and phylogenetic correlations of the mutated genes, the protein sequences of the mutated genes of CCBAU 53363, together with the homologous protein sequences of closely related strains and of the two representative Type I and II strains, were acquired by searching the corresponding genes through BLASTX on the National Center for Biotechnology Information (NCBI) website. A phylogenetic tree based on each mutant's protein sequence of strain CCBAU 53363 and the homologous sequences of the closely related strains was built by the maximum likelihood (ML) method in MEGA 5.05 (Tamura et al., 2011), and the identity percentages were calculated with the Poisson correction model. In the same way, phylogenetic trees based on each mutated gene and the corresponding protein sequences of strain CCBAU 53363 and the representative Type I and II strains were constructed separately as well. Bootstrap analyses were performed using 1,000 replicates, and only bootstrap values > 60% are indicated at the corresponding nodes of the trees.
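As an illustration of the Poisson correction model named above (the standard correction of the p-distance for multiple substitutions, d = −ln(1 − p); this is generic code, not the study's pipeline), a minimal sketch follows; the toy sequences are invented.

```python
# Minimal sketch of the Poisson-corrected amino-acid distance used for the
# identity/distance calculations above. Sequences below are invented toy data.
import math

def poisson_distance(seq_a: str, seq_b: str) -> float:
    """Return d = -ln(1 - p), where p is the proportion of differing
    aligned sites (gap-free columns only)."""
    if len(seq_a) != len(seq_b):
        raise ValueError("sequences must be aligned to equal length")
    pairs = [(a, b) for a, b in zip(seq_a, seq_b) if a != "-" and b != "-"]
    p = sum(a != b for a, b in pairs) / len(pairs)
    return -math.log(1.0 - p)

a = "MKTAYIAKQRQISFVKSHFSRQ"   # toy aligned protein sequences (not real data)
b = "MKTAYVAKQRQISFVRSHFSRQ"
p = sum(x != y for x, y in zip(a, b)) / len(a)
print(f"p-distance = {p:.3f}")
print(f"Poisson-corrected d = {poisson_distance(a, b):.3f}")
```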
Protein 3D models were predicted with the SWISS-MODEL web server and PyMOL software. Characterization of the Seven Mutants of CCBAU 53363 To investigate the molecular mechanisms underlying the unstable nodulation of B. guangxiense CCBAU 53363 on V. radiata plants, a library containing about 4.5 × 10⁷ Tn5-transposon-inserted mutants was created. From the 400 V. radiata plants inoculated with the Tn5 transposon mutant library, 647 Tn5-transposon-inserted mutants of CCBAU 53363 presented increased nodule numbers compared with the wild-type strain CCBAU 53363, and they were preliminarily isolated and purified. Then, 53 of the 647 mutants were verified to have a better nodulation capability than the others through reinoculation onto this plant two to three times, since they showed stable compatibility with V. radiata. Knockout mutants of the 53 genes were further constructed through the triparental conjugation method, and 7 of the 53 genes were ultimately demonstrated by nodulation tests to be responsible for the incompatible symbiosis with V. radiata. From the mapping and sequence analysis of the seven mutants of CCBAU 53363, their characteristics, including the length and product of each mutated gene, protein accession numbers, and amino acid sequence identities (%) with those of closely related strains, are shown in Table 2. The mutants L265 and L615 were tentatively annotated as an acyltransferase and a hypothetical protein due to their low amino acid sequence identities of 27-35.9% and 10.4-27.8%, respectively, with the known proteins of some strains of Bradyrhizobium spp. and Phenylobacterium zucineum. Another two protein products, derived from the mutated genes L4 and L82, shared 88.5-92.1% and 77.9-84.2% amino acid identities with some hypothetical proteins of Bradyrhizobium spp. Mutant L147, a Tn5 insertion in the 1,113-bp open reading frame (ORF) encoding alanine dehydrogenase, shared the highest identity of 91.5% with the AlaDH protein sequence of Bradyrhizobium sp. WSM4349 (WP_018459455.1). The product of gene L373, a Tn5 insertion in the 978-bp ORF encoding a tripartite tricarboxylate transporter substrate-binding protein (TTT SBP), had the greatest identity of 96.1% with the TTT SBP protein sequence of Bradyrhizobium sp. BK707 (WP_130362841.1). The predicted protein of the L646 mutant, a Tn5 insertion in the 867-bp ORF encoding a sulfur oxidation c-type cytochrome SoxA, shared 96.1% amino acid sequence identity with SoxA of B. zhanjiangense CCBAU 51787 (WP_164934866.1). Symbiotic Phenotypes of Tn5-Transposon-Inserted Mutants on the V. radiata Plant In the symbiotic test, the symbiosis between the Type I strain CCBAU 51778 (positive control) and V. radiata was stable and effective: the plants formed nodules with deep red interiors and dark green leaves, and their chlorophyll content, nodule number, nodule fresh weight, and shoot dry weight were significantly higher than those of the other plants. On the other hand, the wild-type strain CCBAU 53363 showed incompatible symbiosis, as expressed by the following: (1) no nodules appeared on ∼40% of the inoculated plants, and the other 60% of plants formed one to three pink nodules, which evidenced the incompatible nodulation; and (2) the plants showed significantly lower chlorophyll content and shoot dry weight than those inoculated with CCBAU 51778, similar to the non-inoculated controls.
Significantly, the seven Tn5-transposon-inserted mutants increased nodule number and nodule fresh weight on the inoculated plants, indicating stable and effective nodulation capacity compared with the wild-type CCBAU 53363, although still somewhat lower than that of the Type I strain CCBAU 51778, except for the L373-T mutant. Generally, the rhizobial gene mutations did not influence plant chlorophyll content or shoot dry weight compared with the wild-type strain. Therefore, these mutated genes were preliminarily speculated to participate in negatively regulating the nodulation of CCBAU 53363 with V. radiata (Supplementary Figures S1, S2). Symbiotic Phenotypes of Gene Knockout Mutants on the V. radiata Plant The seven mutated genes mentioned above were completely knocked out with plasmid pJQ200SK, and symbiotic phenotype verification was performed with the newly constructed mutants separately inoculated onto V. radiata. The results showed that, with the exception of L373-P, the symbiotic phenotypes of V. radiata inoculated with the other six gene knockout mutants (L4-P, L82-P, L147-P, L265-P, L615-P, L646-P) were the same as those of their Tn5-transposon-inserted mutants, demonstrating that these knockouts conferred stable or effective nodulation of CCBAU 53363 with V. radiata. The nodule number and nodule fresh weight of V. radiata induced by the L373-P mutant were remarkably lower than those of the L373-T mutant but still higher than those of the wild-type strain CCBAU 53363, suggesting that the L373-T mutation resulted in a polar effect to some extent, but the knockout mutant L373-P nevertheless verified that L373 negatively regulates the nodulation of CCBAU 53363 with V. radiata. These results confirmed the association of the seven mutated genes with nodulation incompatibility on V. radiata; however, compared with CCBAU 53363, the increased nodule number and nodule fresh weight induced by the seven mutants had no significant effects on plant chlorophyll content or shoot dry weight, implying that the problem of plant nitrogen deficiency had not been thoroughly solved (Figures 1, 2A-D and Supplementary Figure S2). In order to determine the relationships between the mutated genes and the nitrogen-fixation efficiency of nodules, as well as between the increased nodule number and the plant nitrogen-deficiency phenotypes, we tested the nitrogenase activity of nodules on plants inoculated with CCBAU 51778, CCBAU 53363, and the mutants, respectively. The results (Figure 2E) showed that the nitrogenase activity per plant inoculated with the mutants was significantly higher than that of the wild-type CCBAU 53363, with the exception of mutant L373-P, but lower than that of the Type I strain CCBAU 51778. This might explain why the nitrogen fixed by the mutant-induced nodules could not completely meet the needs of plant growth. As for nitrogenase activity per nodule (Figure 2F), CCBAU 53363 and its seven mutants showed significantly lower levels of activity than CCBAU 51778, and the four mutants L4-P, L82-P, L147-P, and L615-P presented nitrogen-fixing capacity similar to that of the original strain CCBAU 53363, implying that these four genes were not associated with nitrogen-fixing efficiency. L265-P showed significantly higher nitrogenase activity, whereas L373-P showed lower activity than CCBAU 53363, implying that L265-P or L373-P might have positive or negative correlations, respectively, with nitrogen-fixation efficiency. Nucleotide Sequence Analysis of the Mutated Genes of Type I and II Strains The seven symbiosis-related genes detected in CCBAU 53363 were identified as having negative regulatory effects on its nodulation with V.
radiata in this study. Furthermore, we collected and aligned these gene sequences of CCBAU 53363 with those of the other Type I and II strains to find the differences separately shared by the strains in each type, which might be the foremost reason for the incompatible symbiosis of CCBAU 53363 and the other Type II strains with V. radiata. For this analysis, the genomes of the Type I strains CCBAU 51757, CCBAU 51778, CCBAU 51787, and CCBAU 53390 and the Type II strains CCBAU 51649 and CCBAU 51670 were used. The results (Supplementary Table S2) showed that the L4, L82, L373, and L646 genes were located in the chromosomes of all the tested strains in one copy, except for CCBAU 51649, in which the L373 gene was missing, while L265 and L615 were recognized as unique genes of CCBAU 53363 with the ability to cause restrictive nodulation on V. radiata plants. Through sequence comparison in this study, we found that the Type II strains possessed two copies of the L147 gene, located in the symbiotic plasmid (L147-p, identical to the inserted/knockout gene of CCBAU 53363) and in the chromosomal symbiotic gene cluster (L147-c, 66.3-66.9% identity with L147-p of CCBAU 53363), respectively. However, only one copy of its homolog (L147-c) was identified in the chromosomal symbiotic gene cluster of the Type I strains. Further phylogenetic analyses based on the nucleotide sequences of these genes were performed to verify the evolutionary correlations between the Type I and II strains. The results showed that the L4, L82, L373, and L646 genes of CCBAU 53363 shared high-level identities (≥ 81.8%) with those of the other tested strains (Table 3), indicating that there was no major difference among the chromosomal genes within or between the Type I and II strains. Results for L4 and L82 are not shown, owing to properties similar to those of the genes L373 and L646: high-level gene nucleotide identities (≥ 81.8%) and one gene copy in the chromosomes of the two types of strains. In the same analysis for L147-c or L147-p of the Type II strains and L147-c of the Type I strains, the situation was complicated to some extent (Table 4).

Phylogenetic Tree and 3D Structure Prediction of Symbiotic-Related Proteins

To further analyze the effects of the genetic differences between the Type I and II strains on protein phylogenetic relationships, 3D structures, and functions, we performed amino acid sequence alignments and constructed phylogenetic trees, as well as predicted 3D structures, for the L147, L373, and L646 proteins. The phylogenies for the L373 and L646 amino acid sequences were very similar, and all strains were divided into two branches: one consisted of the Type II strain CCBAU 53363 and all the Type I strains, with identities of 87.9-100% for the L373 proteins and 94.6-100% for the L646 proteins; the other included the remaining Type II strains CCBAU 51670 and CCBAU 51649 (the latter only for L646), with an identity of 92.4% for the L646 proteins (Figure 3A, Supplementary Figure S3, and Supplementary Tables S4, S5). However, the protein 3D structures in the two types were not significantly affected, and only a minor difference in one α-helix (red arrow) was found (Figure 4 and Supplementary Figure S4). These results indicated that the differences in the amino acid sequences deduced from the mentioned chromosomal genes of the Type I and II strains had no great effects on the 3D structures of the proteins.
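As a side note on how such model-vs-model structural comparisons can be made reproducible, the short sketch below superposes two predicted models in PyMOL's Python API and reports the RMSD. It is a minimal illustration under our own assumptions: the PDB file names are hypothetical placeholders, and this is not the authors' actual analysis script.

# Hypothetical sketch of a model-vs-model structural comparison in PyMOL.
# The PDB file names are placeholders, not files produced in this study.
from pymol import cmd

cmd.load("L373_type1_model.pdb", "modelA")   # e.g., a SWISS-MODEL prediction
cmd.load("L373_type2_model.pdb", "modelB")   # e.g., a SWISS-MODEL prediction

# cmd.align superposes modelB onto modelA; the returned tuple starts with
# the RMSD (in Angstroms) and the number of atoms used after refinement.
result = cmd.align("modelB", "modelA")
print("RMSD = %.2f A over %d atoms" % (result[0], result[1]))

A low overall RMSD with localized deviations (for example, around a single α-helix or the catalytic groove) would be consistent with the category assignments described in the text.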
Phylogeny of the amino acid sequences deduced from the L147 genes classified the tested strains into three clades, represented respectively by the Type I strains, the symbiotic plasmid copy of the Type II strains, and the chromosomal symbiotic gene copy of the Type II strains, with inner-clade amino acid sequence identities of 87.6-96.7%, 100%, and 93.3-94.7%, respectively (Figure 3B and Supplementary Table S6). However, analysis of the protein 3D structures (subunit and hexamer) grouped them into two categories: category 1 covered the Type II strains CCBAU 53363-p (gene in plasmid), CCBAU 51649-c (gene in chromosome) and CCBAU 51649-p (the same structure), and CCBAU 51670-c and CCBAU 51670-p (the same structure); category 2 consisted of all the Type I representative strains (CCBAU 51778-c, CCBAU 51757-c, CCBAU 51787-c, CCBAU 53390-c) and the Type II strain CCBAU 53363-c (Figure 5 and Supplementary Figures S5, S6). These results indicated that the protein 3D structures did not fully mirror the amino acid sequence phylogeny, which might be caused by differences in amino acid residues, polarities, and hydrophobicity that affect the folding of the amino acid sequence into the 3D structure. The main distinctions between the two categories of 3D subunit structures were located in the spatial conformation of the catalytic binding groove (red circle for category one and black circle for category two) (Figure 5 and Supplementary Figure S5). Similar to the subunits, the 3D structures of the L147 hexamers also indicated two different spatial structures (red or black rectangles indicate a differing site) (Supplementary Figure S6). In addition, CCBAU 53363 simultaneously possessed both categories of L147 proteins, indicating that it synthesized both the chromosome-encoded (as in the Type I strains, L147-c) and the symbiotic plasmid-encoded (as in the Type II strains, L147-p) proteins. However, the other Type II strains only translated the L147-p protein. Considering that CCBAU 53363 shows the same symbiotic phenotypes as the Type II strains on V. radiata (Li, 2019), it could be concluded that, for strain CCBAU 53363, the L147-p protein functioned more strongly than L147-c, meaning that the L147 copy in the plasmid played a critical role in negatively regulating the symbiotic compatibility on V. radiata.

DISCUSSION

Contrary to conventional cognition, a previous study demonstrated that Type II peanut bradyrhizobia strains possess incompatible symbiotic phenotypes with V. radiata, a plant belonging to the A. hypogaea cross-nodulation group (Li, 2019). Given the differences between the genomes of strains in Types I and II (Li, 2019), the reason for the incompatibility of the Type II strains with V. radiata plants appears to be a genetic barrier. That is, the inactivation of genes associated with a rhizobial negative factor allowed the mutants to overcome the nodulation restriction conferred by the plant and successfully achieve symbiosis, similar to the incompatible symbiosis between soybean plants carrying Rj4 and strain USDA 61 (Faruque et al., 2015). This study used Tn5 mutagenesis to screen for mutants of the Type II strain CCBAU 53363 compatible with V. radiata, in order to investigate the genetic mechanisms of its incompatibility with V. radiata. The successful isolation of seven mutants with the ability of stable nodulation with V.
radiata (Figures 1, 2 and Supplementary Figures S1, S2), together with the comparative analyses of the mutated gene sequences (L373, L646, and L147) (Tables 3, 4 and Supplementary Tables S2, S3), the corresponding amino acid sequences (Supplementary Tables S4-S6) and the predicted protein structures (Supplementary Figures S4, S5), initially supported our speculation in the present study that a genetic barrier, caused by the presence of the seven genes in the Type II strains, is a crucial cause of the incompatible symbiosis. However, the generally low nitrogen-fixation efficiency of the mutant-induced nodules showed that other barriers also play roles in the incompatible symbiosis between the Type II strains and V. radiata, which need to be further studied. The L147 mutation was located in the gene encoding L-alanine dehydrogenase (AlaDH, EC 1.4.1.1), which participates in producing alanine from pyruvate and NH3/NH4+ in bacteroids of B. japonicum strain 110 inside soybean nodules but is not essential for symbiosis (Lodwig et al., 2004). This enzyme influences the amino acid cycle and the pyruvate metabolism level in Rhizobium leguminosarum cells through alanine synthesis, which in turn affects the plant nitrogen content and the rhizobial tricarboxylic acid cycle, and ultimately affects plant biomass accumulation in soybean and bacteroid metabolism in pea nodules (Smith and Emerich, 1993; Lodwig et al., 2004; Dave and Kadeppagari, 2019). Thus, we can conclude that AlaDH participates in the complex metabolic regulation network of rhizobia. However, it remains unclear how AlaDH regulates nodulation or host infection by rhizobia. In this research, comparative analysis demonstrated that L147-p functioned more strongly than L147-c in CCBAU 53363. The symbiotic test found that the L147-p gene knockout mutant of CCBAU 53363 did not affect chlorophyll content, shoot dry weight, or the nodules' nitrogenase activity, indicating that neither L147-p nor L147-c promoted plant biomass accumulation or enhanced nodule N2-fixation efficiency under plant nitrogen deficiency. However, this mutant enhanced nodule number and nodule fresh weight, illustrating that L147-p negatively regulated rhizobial nodulation with V. radiata. The role of AlaDH in regulating rhizobial nodulation has not been reported before; we therefore show for the first time that AlaDH plays a significant role in regulating stable rhizobial nodulation with a legume under plant nitrogen deficiency, a role that may be affected by the cellular regulatory network; the detailed mechanisms need to be further investigated. Furthermore, mutation of L147-c, or a double mutation of the L147-c and L147-p genes, in CCBAU 53363 will help us better understand the function of the L147 gene and its regulation mechanism. The L373 mutation was located in the gene encoding the TTT SBP. The TTT family is a poorly characterized group of prokaryotic secondary solute transport systems that employ a periplasmic SBP for initial ligand recognition and are present in many bacteria (Winnen et al., 2003; Rosa et al., 2017). SBPs bind with high affinity to diverse classes of substrates, such as tricarboxylates, amino acids, nicotinic acid, nicotinamide, and benzoate (Herrou et al., 2007); terephthalate and other aromatics (Hosaka et al., 2013); 3-sulfolactate (Denger and Cook, 2010); and C4-dicarboxylic acids (plant secretions such as succinate, fumarate, and malate) (Rosa et al., 2019), delivering the substrates to the transmembrane domains to be imported into the cell.
C4-dicarboxylic acids have been shown to play important roles as substrates and signal compounds for rhizobia and are considered to be the major carbon sources utilized by free-living Rhizobium species during colonization of the root surface (Robinson and Bauer, 1993). However, there has been no research on the relationship between the TTT SBP and rhizobial symbiosis with legumes. In this study, the knockout of the L373 gene significantly stabilized plant nodulation by enhancing nodule number and nodule fresh weight, but it had no influence on chlorophyll content or shoot dry weight and decreased nodule nitrogenase activity compared with the wild-type strain CCBAU 53363. We speculated that the substrate recognition and absorption system involving the TTT SBP protein might have no preference for the root secretion substrates of the V. radiata plant. With the mutation of the TTT SBP protein, the affinity of the rhizobial substrate recognition and absorption system for the plant root secretions was enhanced for some unknown reason, in turn affecting rhizobial colonization and nodulation efficiency on the root surface. This is the first finding that the TTT SBP substrate absorption system of CCBAU 53363 is involved in regulating compatible symbiosis with V. radiata and nodule nitrogen fixation efficiency. In the mutant L646, the gene encoding the sulfur oxidation c-type cytochrome SoxA was mutated; SoxA is a subunit of the SoxAX cytochromes, a group of c-type cytochromes that catalyze the transformation of inorganic sulfur compounds (Bamford et al., 2002; Ogawa et al., 2008). The capability to oxidize inorganic sulfur compounds is frequently found in phylogenetically and physiologically diverse bacteria, including members of the Bradyrhizobiaceae carrying sox homologs, such as B. japonicum USDA110 (Masuda et al., 2010). However, the relationship between inorganic sulfur metabolism and the symbiosis of rhizobia with plants has not been studied. In this research, the soxA knockout mutant significantly increased the nodule number and nodule fresh weight, but the chlorophyll content and shoot dry weight of the V. radiata plants and the nodule nitrogen fixation efficiency were not influenced, indicating that soxA negatively regulated CCBAU 53363 nodulation on V. radiata and had no significant effect on the extremely inefficient nitrogen fixation ability. Therefore, a certain regulatory relationship might exist between the inorganic sulfur metabolism of rhizobia and their stable nodulation with legumes. Studies have demonstrated that a successful symbiosis depends on the interaction of rhizobia and plant, with complex chemical signaling communication (Limpens et al., 2003; Madsen et al., 2003; Radutoiu et al., 2003; Arrighi et al., 2006), recruitment and attachment of rhizobia to growing root hair tips (Murray, 2011), activation and inhibition of plant defense systems during rhizobial infection (Bright and Bulgheresi, 2010; Tóth and Stacey, 2015), and rhizobial differentiation and bacteroid metabolism in plant cells (Oldroyd and Downie, 2008). In this research, the Type II strains only occasionally nodulated with V. radiata, illustrating the incompatible and unsuccessful interactions between these strains and V. radiata. According to the results of the mutants' symbiosis with V. radiata, we found that factors with multiple functions in the Type II strains participated in the incompatible symbiosis with V.
radiata, such as substrate recognition and absorption before infection (L373), inorganic sulfur metabolism (L646), and proteins with unknown functions (L4, L82, L265, L615). At the same time, in addition to new genes, we also discovered new functions of some well-known genes. For example, during the incompatible symbiosis of CCBAU 53363 and V. radiata (under plant nitrogen deficiency), the L147 gene did not function as described in other studies (Lodwig et al., 2004) but participated in the negative regulation of stable nodulation. Our mutant screening analysis identified several genetic factors of CCBAU 53363 involved in the incompatibility with V. radiata and implied that diverse and multiple mechanisms might cause these host-specific interactions. Further protein prediction revealed that the 3D structure (category 1) deduced from the symbiosis-related gene L147-p of CCBAU 53363 and L147-c/-p of the other Type II strains was different from that of the corresponding protein (category 2) encoded by the chromosomal genes of the Type I strains and of CCBAU 53363. This difference might be the crucial site affecting the catalytic activity or efficiency of the L147 protein and would be directly related to the symbiotic phenotype divergence between the Type II and Type I strains. It can be supposed that the gene L147-p in CCBAU 53363 functioned more strongly than its L147-c counterpart, although this needs to be further confirmed through experiments. As for the L373 and L646 genes and the other symbiosis-related genes, the chromosomal genes of the Type II strains possessed high genetic homology and similar 3D protein structures to the corresponding chromosomal genes of the Type I strains, which suggests that they might indirectly regulate the symbiosis in an unknown way and eventually lead to this incompatible symbiosis on V. radiata. It is therefore likely that a genetic barrier exists between the Type II strains and V. radiata: the mutated genes associated with the rhizobial negative factor directly or indirectly allowed the mutants to overcome the condition of unstable nodulation, a part of the incompatible symbiotic barrier. In brief, the present study initially demonstrated that seven genes of the Type II strain CCBAU 53363 are responsible for its restricted nodulation with V. radiata; the underlying regulation mechanisms need to be further researched. These results partially explain the restrictive nodulation of CCBAU 53363 with V. radiata. Simultaneously, we also found that the L265 gene negatively regulated nodule nitrogenase activity. However, plants inoculated with the L265-T/P mutants were still in a nitrogen-deficient state owing to the low number of nodules, indicating that trials on enhancing nodule number will be needed. Besides, compared with the original strain CCBAU 53363, the L4, L82, L147, L615, and L646 gene knockout mutants did not influence nodule nitrogen fixation efficiency, implying that experiments on improving nodule nitrogenase activity need to be carried out. That is, Tn5 transposons randomly inserted in the mutants' genomes and selected by V. radiata according to the different standards mentioned above will help us to explore the determinants and regulatory networks of the incompatible symbiosis between the Type II strains and V. radiata. Songwattana et al. (2019) demonstrated that the photosynthetic bradyrhizobial strain ORS278 acquired a broader host range, with the ability to form nodules on Crotalaria juncea and Macroptilium atropurpureum, through acquiring a symbiotic mega-plasmid from the non-photosynthetic Bradyrhizobium strain DOA9.
Similarly, the "experimental evolution" approach used by Marchetti et al. (2010) evolved a plant pathogen into a legume symbiont after the transfer of a symbiotic plasmid. All these data verified that the symbiotic plasmid plays a great role in the rhizobial host range and symbiotic compatibility with plants. Coincidentally, we found that the plasmid-borne gene functioned more strongly than the chromosomal copy in CCBAU 53363, and only the Type II strains contain a symbiotic plasmid with identical nucleotide sequences (Li, 2019). Therefore, the symbiotic plasmid might also play a role in the symbiotic compatibility of the Type II strains with V. radiata. It would be an interesting and direct test of this supposition to transfer the symbiotic plasmid from Type II to Type I strains and to examine whether the Type I strains carrying the plasmid show an incompatible symbiosis similar to that of the Type II strains.

AUTHOR CONTRIBUTIONS

YW, XS, and YL conceived the study. YW, JS, LC, and BH performed the experiments. YW and XS analyzed the data and wrote the manuscript with the help of EW. CT, WFC, and WXC provided resources. All authors contributed to the article and approved the submitted version.
Electron coherent and incoherent pairing instabilities in inhomogeneous bipartite and nonbipartite nanoclusters

Exact calculations of collective excitations and charge/spin (pseudo)gaps in an ensemble of bipartite and nonbipartite clusters yield level crossing degeneracies, spin-charge separation, and condensation and recombination of electron charge and spin, driven by interaction strength, inter-site couplings and temperature. Near crossing degeneracies, the electron configurations of the lowest energies control the physics of electronic pairing, phase separation and magnetic transitions. Rigorous conditions are found for the smooth and dramatic phase transitions with competing stable and unstable inhomogeneities. Condensation of electron charge and spin degrees at various temperatures offers a new mechanism of pairing and a possible route to superconductivity in inhomogeneous systems, different from the BCS scenario. Small bipartite and frustrated clusters exhibit charge and spin inhomogeneities in many respects typical for nano and heterostructured materials. The calculated phase diagrams in various geometries may be linked to atomic scale experiments in high T_c cuprates, manganites and other concentrated transition metal oxides.

I. INTRODUCTION

Strongly correlated electrons in cuprates, manganites and other transition metal oxides exhibit high T_c superconductivity, magnetism and ferroelectricity accompanied by spatial inhomogeneities at the nanoscale level [1,2,3,4,5,6,7,8]. Over the past few years there has been increasing interest in electron instabilities in nanoclusters and in assembled clusters of correlated materials in various topologies for synthesizing new nanomaterials with unique electronic and magnetic properties [9,10]. Obviously, there is a clear need for an accurate analysis of electron correlations, fluctuations and instabilities in nanoclusters and large complex systems with competing phases. The closed-form solution, existing in the Bethe ansatz ground state [11], is difficult to analyze at finite temperatures T > 0 without having to resort to various approximations. Perturbation theory is usually inadequate, while numerical methods have serious limitations, as in the Quantum Monte Carlo method with its notorious sign problem, where the resulting approximations often lead to some controversy. On the contrary, exact calculations in small clusters [12,13,14,15,16] give an appealing alternative for the detection of possible phase separations and spatial inhomogeneities, especially at finite temperatures. As far as the authors are aware, an exact analysis of level crossing instabilities (degeneracies) in canonical ground state eigenvalues and corresponding competing average energies at finite temperature for a general on-site interaction U and electron concentrations has not been attempted in small or moderate size clusters [17]. Exact computations of electron instabilities in various cluster geometries at the nanoscale level can be vital to the understanding of the role of thermal and quantum fluctuations for large pairing gaps and a transition temperature T_c in correlated nanoclusters, nanomaterials and corresponding "large" inhomogeneous systems [18,19,20,21,22,23,24,25]. Although our approach for "large" systems is only approximate, this class of clusters in an ensemble displays a common behavior which we believe is generic for large thermodynamic systems.
Our results for typical bipartite and frustrated (nonbipartite) cluster geometries have successfully mapped out scenarios where many body local effects are sufficient to describe spin-charge separation and pairing pseudogaps at the nanoscale level. Spatial microscopic inhomogeneities have been observed in a number of scanning tunneling microscopy (STM) probes of doped high-T_c superconductors (HTSCs). There is growing evidence suggesting that inhomogeneities at the nanoscale level, in the so-called stripes surrounded by essentially neutral correlated Mott-Hubbard (MH)-like antiferromagnetic insulators [26,27], play a defining role for the electron pairing and the origin of superconductivity at the atomic scale in HTSCs [28,29]. Besides the existence of charge pairing, inhomogeneities of possible electronic nature can exist in the form of spatially separated magnetic phases in cuprates and manganites under doping [30]. The magnetic inhomogeneities seen in other transition metal oxides at the nanoscale level, widely discussed in the literature [31,32,33,34,35], can be crucial for the spin pairing instabilities and the origin of ferromagnetism and ferroelectricity in the spin and charge subsystems [36,37]. A phase separation of ferromagnetic clusters embedded in an insulating matrix is believed to be essential to the colossal magnetoresistance (CMR) in manganese oxides. At sufficiently low temperatures, the spin redistribution in an ensemble of clusters can produce inhomogeneities in the ground state and at finite temperatures [24]. The non-monotonic behavior of the chemical potential versus electron concentration found in the generalized self-consistent approximation [38] also suggests possible electron instabilities and inhomogeneities near half filling. From this perspective, exact studies at T ≥ 0 of electron charge and spin instabilities at various U ≥ 0, inter-site couplings and various cluster topologies can give important clues for the understanding of charge/spin inhomogeneities and local deformations in the mechanism of pairing and magnetism in "large" concentrated systems whenever correlations are local. It is generally believed that a strong on-site Coulomb interaction supports ferromagnetism and is detrimental for electron pairing and superconductivity in clusters and "large" concentrated systems [39]. Our exact studies of gaps and pseudogaps in finite-size systems have uncovered some important answers related to spin-charge separation, pairing and thermal condensation of the electron charge and spin. Despite this, there is still a vast amount of uncertainty that needs to be unravelled: (i) What are the conditions for the electron phase separation instabilities and spin/charge inhomogeneities? (ii) What is the role of inhomogeneities, and are these spatial spin/charge inhomogeneities crucial for the pairing mechanisms in these compounds? (iii) When treated exactly, what essential features can the Hubbard clusters capture that share similar properties with the "large" concentrated transition metal oxides? A redistribution of excess electron/hole inhomogeneities or spin-up/spin-down domains in an ensemble of tetrahedrons for all U > 0 depends on the sign of the hopping term [24]. Here we show that in the distorted square pyramids of the perovskite structures, the inter-site coupling c between the apex site and the base can be beneficial or detrimental for electron pairing or ferromagnetism.
An unstable "saturated ferromagnetism", existing in frustrated lattices at low temperatures and large U for a particular sign of hopping (t > 0) [36], implies either antiferromagnetism, unsaturated ferromagnetism, or electron coherent pairing with charge and spin pairing (pseudo)gaps. Here it is argued that for one hole off half filling, electrons undergo separate thermal condensation of the charge and spin degrees (independent of cluster topology); the system may be divided into two coexisting and dynamically bound bosonic subsystems, where two types of individual bosonic pairs, made up of double electron charges and oppositely oriented (antiparallel) spins, can fluctuate. We shall see that the phase diagram, under some circumstances, is mostly controlled by the changes in the cluster geometry (topology).

II. MODEL AND FORMALISM

It is possible to assume that electron pairing and magnetic instabilities of a purely electronic nature are described by a local Coulomb interaction U in the single-band Hubbard model

H = -t \sum_{\langle ij \rangle \sigma} ( c^{\dagger}_{i\sigma} c_{j\sigma} + \mathrm{h.c.} ) + U \sum_{i} n_{i\uparrow} n_{i\downarrow}.   (1)

The sign of the hopping amplitude t between the nearest neighbor sites in (1) leads to essential changes of the electronic structure. In nonbipartite clusters, such as the tetrahedron, we consider t = ±1. In addition, for the distorted pyramid we take the coupling parameter between the apical site and the atoms in the base equal to ct, with c ≤ 1. Our studies of the quantum and thermal fluctuations of electrons in finite clusters are based on exact diagonalization, analytical and numerical calculations of energy levels, and expressions for the canonical and grand canonical partition functions in various cluster geometries. The exact grand canonical potential Ω_U for the interacting electrons (U) in an external magnetic field (h) is

\Omega_U = -T \ln \sum_{n} \exp\left[ -\frac{E_n - \mu N_n - h s^{z}_{n}}{T} \right],   (2)

where N_n and s^z_n are the number of particles and the projection of the spin in the n-th quantum state. The first and second order responses of the charge and spin degrees due to changes in the chemical potential µ (doping) or an applied magnetic field are calculated without taking the thermodynamic limit. The competing energy states, in conjunction with the canonical and the grand canonical ensemble, yield valuable insight into electron instabilities in real nanoclusters and nanomaterials with correlated electrons. The introduced formalism allows us to describe the smooth and sharp phase transitions with competing stable and unstable inhomogeneities in the canonical and grand canonical ensembles. Below, in Sec. III, we provide a detailed description of the general methodology: we define the criteria for the charge and the spin pairing instabilities in the canonical and grand canonical ensembles, and we formulate the conditions for the existence of quantum critical points, coherent pairings and spontaneous transitions in the ground state, together with the corresponding critical temperatures of the crossovers for the various phases and boundaries in the phase diagrams discussed in Secs. IV and V.

A. Canonical charge and spin gaps

To facilitate the comparison with the frustrated clusters, we summarize here the main results in the ground state and at finite temperatures for bipartite and nonbipartite clusters obtained earlier in Refs. [19,20,21,22,23,24,25]. The degrees of freedom for charge and spin, electron and spin pairings, temperature crossovers, quantum critical points, etc. were extracted directly from the thermodynamics of these clusters.
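Before turning to the gap definitions, a concrete numerical sketch of the machinery behind Eqs. (1) and (2) may help. The self-contained Python code below builds the Hamiltonian (1) for a small cluster via a Jordan-Wigner construction, evaluates Ω_U of Eq. (2) by full diagonalization (with k_B = 1), and obtains the average occupation by numerical differentiation. It is an illustration under our own naming and finite-difference choices, not the authors' code.

import numpy as np
from functools import reduce

# Jordan-Wigner building blocks: modes are ordered as mode = 2*site + spin,
# with spin 0 = up and spin 1 = down; |1> denotes an occupied mode.
I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # annihilation: |1> -> |0>

def c_op(mode, n_modes):
    """Fermionic annihilation operator for one mode, with the JW string."""
    ops = [Z] * mode + [a] + [I2] * (n_modes - mode - 1)
    return reduce(np.kron, ops)

def hubbard(bonds, n_sites, t, U):
    """Dense matrix of Hamiltonian (1) in the full 4**n_sites Fock space.
    Returns H together with the total number and spin-projection operators."""
    n_modes = 2 * n_sites
    c = [c_op(m, n_modes) for m in range(n_modes)]
    H = np.zeros((2 ** n_modes, 2 ** n_modes))
    for i, j in bonds:
        for s in (0, 1):
            hop = c[2 * i + s].T @ c[2 * j + s]
            H -= t * (hop + hop.T)          # -t (c+_i c_j + h.c.)
    for i in range(n_sites):
        H += U * (c[2 * i].T @ c[2 * i]) @ (c[2 * i + 1].T @ c[2 * i + 1])
    N_op = sum(ci.T @ ci for ci in c)
    Sz_op = 0.5 * sum((-1) ** m * (c[m].T @ c[m]) for m in range(n_modes))
    return H, N_op, Sz_op

def grand_potential(H, N_op, Sz_op, mu, h, T):
    """Omega_U(mu, h, T) of Eq. (2), evaluated by full diagonalization."""
    E = np.linalg.eigvalsh(H - mu * N_op - h * Sz_op)
    E0 = E.min()                             # shift for numerical stability
    return E0 - T * np.log(np.exp(-(E - E0) / T).sum())

# Tetrahedron: all pairs of the four sites are nearest neighbors.
bonds = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
H, N_op, Sz_op = hubbard(bonds, n_sites=4, t=1.0, U=4.0)

def mean_N(mu, T, h=0.0, dmu=1e-4):
    """<N> = -dOmega/dmu by a central finite difference."""
    return -(grand_potential(H, N_op, Sz_op, mu + dmu, h, T)
             - grand_potential(H, N_op, Sz_op, mu - dmu, h, T)) / (2 * dmu)

print([round(mean_N(mu, T=0.05), 3) for mu in (0.5, 1.0, 1.5, 2.0)])

Applying one more finite difference in µ (or in h at h → 0) yields the charge and spin susceptibilities discussed below.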
One can classify the charge and spin order parameters as energy differences between the various competing phases, by analogy with phase transitions in the thermodynamic limit. In the ground state, the calculated differences in the canonical energy levels between configurations with various numbers of electron charge and spin determine the energy gaps for electron charge and spin excitations. Using the exact partition function in the canonical ensemble, we also analyzed analytical expressions for the average energies for various numbers of electrons N. For given temperature T and U, we calculated the energy differences µ_+ = E(N+1) − E(N) and µ_− = E(N) − E(N−1) for the average canonical energies E(N) by adding or subtracting one electron (charge) in the cluster for a given spin S. The energy difference between the two consecutive excitation energies obtained by adding or subtracting an electron can serve as a natural order parameter in the canonical approach. Then the charge gap at finite temperature can be written as

\Delta_c(T) = \mu_+ - \mu_- = E(N+1) + E(N-1) - 2E(N).   (3)

The opening of the gap is a local correlation effect and clearly does not follow from long range order, as exemplified here. The difference µ_+ − µ_− is somewhat similar to the difference I − A for a cluster, where I is the ionization potential and A the electron affinity. For a single "impurity" at half filling and T = 0, I − A is equal to U, which represents a screened local parameter U in the Hubbard model (1) [40]. Thus the gap picture is analogous to an inter-configuration energy gap for the crossover between different many body ground state ionic configurations in solids. For example, the charge gap is simply equivalent to the energy of the "reaction" 2d^N → d^{N+1} + d^{N−1} between different cluster configurations at fixed N, i.e., the difference in the canonical energies of ionization and affinity for many body cluster configurations in the ensemble. However, the configurational change in the ensemble of isolated clusters is supposedly due to possible spontaneous fluctuations in the electron numbers and electron redistribution via a charge reservoir. The negative spin gap in the canonical ensemble can be treated correspondingly. We calculate the spin gap as the difference in the average energies between two cluster configurations with different spin S, ∆_s(T) = E(S+1) − E(S), where E(S) is the average canonical energy in the spin-S sector at fixed N [11].

B. Charge and spin instabilities

Many phenomena and phase transitions invoked in the approximate treatments of "large" concentrated systems are seen also in the exact analysis of pairing instabilities in the canonical ensemble of small clusters in thermodynamic equilibrium [19,20,21,22,23,24,25]. As we shall see, in some circumstances small changes of the external parameters can lead to level crossing instabilities in various electron configurations with the formation of negative charge and spin gaps. Physically, a positive gap manifests phase stability and a smooth crossover, while a negative gap describes spontaneous transitions from one stationary state to another. Instead of a full phase separation at ∆_{c,s} < 0, the local inhomogeneities in the clusters can provoke electron redistribution and quantum mixing of the various charge and spin configurations. In the presence of a negative gap, the many-body ground state has an appreciable probability of being found in either of these competing configurations. The collective particle excitations are also reflected in the fluctuations of the pair density in Eq. (3).
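A minimal numerical counterpart of the canonical construction, reusing hubbard() from the sketch above: the fixed-N sectors are read off from the diagonal of the number operator, E(N) is a canonical thermal average (summed, for simplicity, over all spin states of the sector), and Eq. (3) follows. Anticipating the level-crossing condition ∆_c(U) = 0 discussed just below, the last lines bracket the sign change of the gap on the square cluster; the search window [1, 10] is our own assumption, and the root should land near the quoted U_c = 4.584 only if our conventions match the text's.

from scipy.optimize import brentq

def sector_energies(H, N_op, n_elec):
    """Eigenvalues of H restricted to a fixed-N sector; N_op is diagonal in
    the occupation basis, so the sector is just a subset of basis indices."""
    idx = np.where(np.isclose(np.diag(N_op), n_elec))[0]
    return np.linalg.eigvalsh(H[np.ix_(idx, idx)])

def canonical_energy(H, N_op, n_elec, T):
    """Thermal average E(N) in the fixed-N canonical ensemble."""
    E = sector_energies(H, N_op, n_elec)
    w = np.exp(-(E - E.min()) / T)
    return (w * E).sum() / w.sum()

def charge_gap(H, N_op, n_elec, T):
    """Delta_c(T) = mu_plus - mu_minus = E(N+1) + E(N-1) - 2E(N), Eq. (3)."""
    return (canonical_energy(H, N_op, n_elec + 1, T)
            + canonical_energy(H, N_op, n_elec - 1, T)
            - 2.0 * canonical_energy(H, N_op, n_elec, T))

print(charge_gap(H, N_op, n_elec=3, T=0.05))   # tetrahedron: negative => pairing

# One-dimensional root search on U for the square cluster; the bracket
# [1, 10] is an assumed search window for illustration only.
square = [(0, 1), (1, 2), (2, 3), (3, 0)]

def gap_vs_U(U, T=1e-3):
    Hs, Ns, _ = hubbard(square, n_sites=4, t=1.0, U=U)
    return charge_gap(Hs, Ns, n_elec=3, T=T)

print(brentq(gap_vs_U, 1.0, 10.0))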
It is intriguing that the pair-density fluctuations just mentioned make pair redistribution across the clusters possible even without direct contact between the clusters. These fluctuations play the crucial role in pair transitions in the absence of electron hopping between clusters in Eq. (1). Near ground state degeneracies, the lowest energy states control the low energy dynamics of the electronic and magnetic transitions over a significant portion of the phase diagram. The possible quantum critical points, phase transitions and nonzero temperature crossovers are described using a simple cluster approach: we define critical parameters for the level crossing degeneracies, or quantum critical points, from the vanishing conditions for the canonical charge and spin gaps, i.e., ∆_{c,s}(U, c) = 0. The sign of the gap is also important in identifying the regions of electron charge and spin instabilities, such as the electron-electron (∆_c < 0) and electron-hole (∆_c > 0) pairings in the charge sector, or the parallel (∆_s < 0) and opposite (∆_s > 0) spin pairings in the spin sector. The key question here is the exact relationship between the canonical charge gap ∆_c and its corresponding grand canonical spin counterpart ∆_s, calculated for various bipartite and frustrated cluster topologies. For charge degrees, the negative sign of the gap implies phase (charge) separation (i.e., segregation) of the clusters into hole-rich (charge neutral) and hole-poor regions. The quantum mixing of the closely degenerate hole-poor d^{N−1} and hole-rich d^{N+1} clusters for one hole off half filling, instead of causing global phase separation, provides a stable spatially inhomogeneous medium that allows the pair charge to fluctuate. The inhomogeneities favored by the negative gaps are essential for providing the spontaneous redistribution of the electron charge or spin. The inhomogeneities in the charge redistribution for ∆_c < 0 and ∆_s = 0 imply a static heterostructure of different electron configurations, close in energy, in an unstable ensemble of clusters. These inhomogeneities are consistent with the nucleation of the "negative" charge gap in cuprates above T_c [6]. A dynamic picture of pair fluctuations between different electron configurations with ∆_c < 0 and ∆_s > 0 is possible at relatively low temperatures in a spatially inhomogeneous coherent state (∆_s ≡ −∆_c; see also Sec. IV A). This result is consistent with the observation of nonlocal superconductivity at low excitation energies and, at higher energies, of holes localized in an inhomogeneous "stripe" pattern [1]. The negative spin gap describes a possible parallel spin pair binding instability. This picture implies spontaneous ferromagnetism and phase (spin) separation into domains, in accordance with the Nagaoka theorem. For the negative gaps, one can introduce the critical temperatures T_c^P(µ) and T_s^F(µ) versus chemical potential for the boundaries between the various phases, derived from the condition that the corresponding gaps disappear, i.e., ∆_{c,s}(T, µ) = 0.

C. Charge and spin susceptibility peaks

Conventional phase transitions at finite temperature are driven by thermal fluctuations. In the grand canonical approach, using exact analytical expressions for the grand canonical potential and partition functions as expressed in Eq. (2), we have analyzed (in Refs.
[19,20,21,22,23,24,25]) the variation of the charge, ∂N/∂µ, and spin, ∂s/∂h, density of states, or the corresponding charge χ_c(µ) and spin χ_s(µ) susceptibilities, as a function of the chemical potential µ and the field h in a wide range of temperatures. In the grand canonical approach, the energy difference between two consecutive susceptibility peaks in terms of µ and h at finite temperatures can serve as a natural order parameter for the charge and spin degrees, respectively. This energy difference for the density of states in µ space determines the charge gap in the canonical approach. We find the (opposite) spin pairing gap by calculating the minimal magnetic field necessary to overturn the spin. In the grand canonical method we define the gap as the magnetic field at which the distance between the subsequent spin susceptibility peaks in µ space vanishes. Using the maxima of the zero magnetic field susceptibility, ∂s/∂h|_{h→0}, we also calculated the boundary curve for the onset of the spin gap for various µ at infinitesimal h → 0 above T_s^P. To distinguish this from the canonical and grand canonical gaps at finite temperatures, we call it a pseudogap. The opening of such distinct and separated (pseudo)gap regions for the spin and charge degrees at various fillings in µ space is indicative of the corresponding spin-charge separation. The crossover temperatures and phase boundaries for various transitions can be found by monitoring the maxima and minima in the charge and spin susceptibilities. We define the critical temperatures T_c and T* in equilibrium as the temperature at which the distances between the charge or spin susceptibility peaks vanish and the corresponding pseudogaps disappear (see Sec. V B). Notice that, according to the given definition, the energy pseudogaps obtained in the grand canonical method are positive, which is a key difference from the canonical gaps.

D. Charge and spin inhomogeneities

The developed grand canonical approach can be applied to the understanding of electron fluctuations and spatial inhomogeneities, to model the behavior of concentrated systems in bipartite and frustrated structures. An ensemble of bipartite clusters at small and moderate U exhibits typical inhomogeneous behavior in its charge distribution. The normalized probability ω_N for the electron distribution in the grand canonical ensemble as a function of temperature T for various electron numbers N is the following:

\omega_N = \frac{\sum_{n} \delta_{N_n,N}\, e^{-(E_n - \mu N)/T}}{\sum_{n} e^{-(E_n - \mu N_n)/T}}.   (4)

The calculated probabilities of electrons in competing configurations are shown in Fig. 1 for the 4-site cluster at U = 4.

[FIG. 1 caption: The phase separation at relatively low temperatures below T ≤ 0.0075 manifests a significant suppression of N = 3 clusters close to optimal doping µ_P = 6.557. In this area N = 2 and N = 4 clusters share equal weight probabilities, while at higher temperatures N ≈ 3 also becomes thermodynamically stable. In equilibrium, the grand canonical value µ_P at optimal doping at T = 0 reproduces the result µ_P = (µ_+ + µ_−)/2 of the canonical approach.]

At low temperatures and electron concentration close to µ_P, the clusters with N = 2 and N = 4 have equal probabilities, ω_2 = ω_4 ≈ 0.5. In some circumstances, electron configurations in equilibrium can have close energies for clusters in contact with a particle reservoir. This picture shows a mixture of ungapped and partially gapped states. As temperature increases, the probability ω_3 of N = 3 clusters with an unpaired spin gradually increases, while the probability of finding spin paired, hole-rich and hole-poor clusters decreases.
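A numerical counterpart of the probability ω_N of Eq. (4), reusing hubbard() and sector_energies() from the sketches above; the chemical potential and temperature in the example are illustrative values of our own choosing (the paper's µ_P depends on its energy conventions).

def occupation_probabilities(H, N_op, mu, T, n_sites):
    """Normalized grand canonical weights omega_N of Eq. (4), sector by sector."""
    logw = [-(sector_energies(H, N_op, N) - mu * N) / T
            for N in range(2 * n_sites + 1)]
    shift = max(lw.max() for lw in logw)        # avoid overflow at low T
    w = np.array([np.exp(lw - shift).sum() for lw in logw])
    return w / w.sum()

# Weight distribution for the 4-site square cluster at U = 4 near one hole
# off half filling; mu = 1.7 is an illustrative value only.
Hs4, Ns4, _ = hubbard(square, n_sites=4, t=1.0, U=4.0)
for N, w in enumerate(occupation_probabilities(Hs4, Ns4, mu=1.7, T=0.05, n_sites=4)):
    print(N, round(w, 3))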
Qualitatively, the formation of an inhomogeneous electron distribution, or "stripe" picture, can be understood from simple energy considerations (see Sec. II). For a fixed average number of electrons, the charge and spin on each separate cluster in the ensemble can fluctuate. The two configurations close in energy are nearly degenerate, and, as temperature increases, it is energetically favorable to have some clusters with d^{N−1} and others with d^{N+1} configurations, instead of all clusters having d^N electrons. These results, which depend on the cluster geometry, the parameter U and the sign of t, can be directly applied to nano and heterostructured materials, which usually contain many independent clusters, weakly interacting with one another, with the possibility of inhomogeneities in the number of electrons per cluster. At half filling, the antiferromagnetic state has the lowest energy per electron. Therefore, the energy can be minimized upon small doping by segregation of holes into charged clusters with different numbers of electrons. The embedded antiferromagnetic background with opposite spin pairing provides a spin-rigid (unperturbed) medium that allows inhomogeneities to optimize the coherent pair fluctuations across the clusters [24]. The mixture of closely degenerate ferromagnetic domains can also lead to stable spatial magnetic inhomogeneities for spin fluctuations. Interestingly, the quantum and thermal fluctuations in the canonical and grand canonical ensembles display the "checkerboard" patterns [5], nanophase inhomogeneities [26] and temperature-driven nucleation of pseudogaps seen recently in nanometer and atomic scale measurements [29] in HTSCs above T_s^P [6,7]. Microscopic spatial inhomogeneities and incoherent pairing pseudogaps in nanophases measured by STM correlate remarkably with our predictions using small 4-site and 2×4 nanoclusters [22].

E. Coherent charge and spin pairings

The behavior of such clusters near crossing degeneracies in a quantum coherent phase with minimal spin at low temperatures is somewhat similar to conventional BCS superconductivity (see Sec. IV A). We found that at rather low temperatures the calculated positive pseudospin gap ∆_s in the grand canonical method can have an amplitude equal to that of the negative charge gap ∆_c derived in the canonical method, ∆_s = |∆_c|. Such behavior is similar to the existence of a single gap in the conventional BCS state. We called such opposite spin (singlet) coupling combined with electron charge pairing a spin coherent electron pairing in Ref. [24]. However, unlike in the BCS theory, the charge gap differs significantly from the spin pseudogap as temperature increases above T_s^P. For example, the vanishing of the double-peak structure in the zero-field spin susceptibility gives a critical temperature T_s^P at which the spin pseudogap disappears. The canonical charge gap disappears at higher temperatures, i.e., ∆_c(T_c^P) = 0. The BCS-like coherent behavior and possible superconductivity with condensation of opposite spin pairs occur at rather low temperatures (see Sec. III C), while electron charge pairing can be established at relatively high temperatures, T_s^P < T_c^P. The positive spin gap calculated in the grand canonical approach implies a homogeneous (opposite) spin spatial distribution of electrons below T_s^P.

[Figure caption fragment, displaced by extraction: This coherent state is analogous to Phase A in 4-site clusters [24]. The spin gap has been calculated using the grand canonical approach.]
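The coherence condition ∆_s = |∆_c| can be checked numerically in the same framework. Below is a minimal T → 0 sketch for the t = +1 tetrahedron built earlier, using fixed-(N, S_z) sectors as a proxy for the spin-S sectors; the proxy assumes the lowest state of each S_z sector has total spin S = S_z (the usual situation), and the paper's grand canonical definition of ∆_s may differ in detail.

def nsz_sector_energies(H, N_op, Sz_op, n_elec, sz):
    """Eigenvalues of H restricted to fixed (N, Sz); both operators are
    diagonal in the occupation basis."""
    idx = np.where(np.isclose(np.diag(N_op), n_elec)
                   & np.isclose(np.diag(Sz_op), sz))[0]
    return np.linalg.eigvalsh(H[np.ix_(idx, idx)])

# T -> 0 spin-gap proxy at N = 3 for the t = +1 tetrahedron built above:
# Delta_s ~ E0(Sz = 3/2) - E0(Sz = 1/2).
d_s = (nsz_sector_energies(H, N_op, Sz_op, 3, 1.5).min()
       - nsz_sector_energies(H, N_op, Sz_op, 3, 0.5).min())
d_c = charge_gap(H, N_op, n_elec=3, T=1e-3)
print(d_s, d_c)   # in the coherent pairing regime one expects d_s ~ -d_c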
Such a homogeneous spin distribution is consistent with the spatially homogeneous spin pseudogap that opens below T_c ≡ T_s^P in doping dependent STM measurements of Bi_2Sr_2CuO_{6+x} [5]. We also find a close analogy between the coherent electron pairing in clusters and real space singlet pairs in resonating valence bond states or local inter-configuration fluctuations in mixed-valence states [41,42,43,44,45,46].

A. Bipartite clusters

Exact calculations of charge and spin gaps in small clusters of various geometries are important for understanding the electron ground state behavior in bipartite and nonbipartite (frustrated) systems. Below we summarize the results for electron instabilities and phases obtained earlier (see Fig. 1 in Ref. [24]) for square and other bipartite clusters with one hole off half filling at infinitesimal T → 0. The vanishing of the gaps at the quantum critical points U_c = 4.584 and U_F = 18.583 indicates energy level crossings and electron instabilities in 4-site clusters for charge and spin, respectively. The charge ∆_c and spin ∆_s gaps versus U in an ensemble of square clusters at N ≈ 3 exhibit, at infinitesimal T → 0, the following phases:

Phase A: charge and spin pairing gaps of equal amplitude, ∆_s ≡ ∆_P = −∆_c, at U ≤ U_c describe Bose condensation of electrons similar to BCS-like coherent pairing with a single energy gap;

Phase B: a Mott-Hubbard like insulator with ∆_c > 0 and gapless S = 1/2 excitations at U_c < U < U_F describes a spin liquid behavior;

Phase C: parallel (triplet) spin pairing (∆_s < 0) displays S = 3/2 saturated ferromagnetism at U > U_F in a Mott-Hubbard insulator with a positive charge gap, ∆_c > 0.

Notice that the incoherent opposite spin pairing, |∆_s| ≠ ∆_c, different from the charge pairing at U < U_c, suggests spin-charge separation of the spin and charge degrees at U > U_F. Square clusters at weak and strong couplings share common important features with 2×4 ladders and other bipartite clusters [36]. Negative gaps describe possible hole binding or parallel spin pairing instabilities. For charge degrees at weak coupling, this gives an indication of phase separation (i.e., segregation) into hole-rich (charge neutral) and hole-poor clusters. In contrast, at strong coupling the negative pairing gap for parallel spins and the positive charge gap reveal a ferromagnetic instability in accordance with the Nagaoka theorem. In large bipartite clusters at intermediate U, electrons behave differently from square clusters. For example, in 2×4 ladders we found an oscillatory behavior of the charge gap as a function of U [22]. The vanishing of the charge gaps, manifesting multiple level crossing degeneracies in the charge and spin sectors, is indicative of possible electron instabilities in bipartite clusters at moderate U.

B. Tetrahedrons

For comparison with the small bipartite clusters in Sec. IV A, we consider here a minimal four-site nonbipartite structure. A tetrahedron has a topology equivalent to that of a square with next nearest neighbor coupling (t′ = t) and may be regarded as a primitive unit of a typical frustrated system. Nonbipartite systems, without electron-hole symmetry, exhibit a pairing instability that depends on the sign of t. Notice that the sign of t also leads to essential changes in the electronic structure.
The tetrahedral clusters show pairing instabilities for charge degrees at t = 1 and for spin degrees at t = −1, which maximize the amplitudes of the negative charge (∆_c < 0) and spin (∆_s < 0) gaps and the corresponding condensation temperatures, T_c^P(µ) and T_s^F(µ) [47]. The negative gap in the canonical approach displays an electron pairing instability, ∆_P = |∆_c|, for all U. Fig. 3 illustrates the charge and spin gaps at small and moderate U. The negative charge gap in Fig. 2 is indicative of inhomogeneous charge redistribution and phase separation of electron charge into hole-rich (charged) and hole-poor (neutral) cluster configurations [20]. The phase diagram for t = 1 is similar to Phase A in Sec. IV A, but applies for all U values. In contrast, the positive spin gap in the grand canonical approach, ∆_s > 0, corresponds to a uniform opposite spin distribution in Fig. 2. This BCS-like picture of charge and spin gaps of equal amplitude, ∆_s ≡ ∆_P = −∆_c at N ≈ 3, in analogy with the square clusters, will be called coherent pairing (CP) [24]. In equilibrium, the spin singlet background (χ_s > 0) stabilizes the phase separation of paired electron charge in a quantum CP phase. Fig. 2 illustrates the charge ∆_c and spin ∆_s gaps in tetrahedral clusters at N ≈ 3, T → 0. The unique gap, ∆_s ≡ ∆_P at T = 0, in Fig. 2 is consistent with the existence of a single quasiparticle energy gap in the BCS theory for U < 0 [25]. The positive spin gap for all U provides pair rigidity in response to a magnetic field and temperature (see Sec. V B). Notice that the coherent pairing exists also at large U, where the Nagaoka theorem for nonbipartite clusters with a specific sign of t can be applied. The stability of the minimal spin S = 0 (singlet) state in the tetrahedron at t = 1 is consistent with that of non-maximum (unsaturated) spin in the Nagaoka problem. Thus our result shows that the Nagaoka instability toward spin flip at large U in frustrated lattices with t = 1 can be associated with the BCS-like coherent pairing for general U. The negative spin gap ∆_s in Fig. 3 is shown for the canonical energy differences between the S = 3/2 and S = 1/2 configurations. Correspondingly, the positive charge gap ∆_c > 0 in a stable MH-like state is derived using grand canonical energies [20]. As in bipartite square clusters, the grand canonical positive charge gap ∆_c > 0 is different (incoherent) from the parallel spin pairing gap, ∆_s < 0. Thus the phase diagram for t = −1, with ∆_c ≠ −∆_s, is similar to Phase C in Sec. IV A, but applies for all U values in the phase diagram. The negative spin gap at all couplings implies parallel spin pairing and Nagaoka-like saturated ferromagnetism with maximum spin in the entire range of U. Spin-charge separation is considered to be one of the key properties of the correlated electrons that distinguishes t = −1 from t = 1. Such behavior at t = −1 is accompanied by spin-charge separation and the formation of magnetic (spatial) inhomogeneities or domain structures [20] in a wide range of parameters.

C. Square pyramids

From the early days of high-temperature superconductivity, the idea of a possible role of apical sites in p-type superconductors has been controversial. The oxygen atom position at the apex of the pyramidal crystalline structure can be altered through the addition of impurities and can be relocated to a lower or sideways position, thus changing the electron interactions or the coupling strength c between the apex and the planar atoms.
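The pyramid geometry of this subsection fits the same numerical machinery once the hopping amplitude is allowed to vary per bond. The sketch below is our own variant of hubbard() from the earlier block, with an illustrative value of c; our reconstruction of the geometry and sign conventions may differ from the paper's, so the printed value is indicative only.

def hubbard_weighted(weighted_bonds, n_sites, U):
    """Like hubbard(), but with an individual hopping amplitude per bond,
    so the apex-base coupling of the pyramid can be scaled by c (Sec. II)."""
    n_modes = 2 * n_sites
    c_ops = [c_op(m, n_modes) for m in range(n_modes)]
    Hw = np.zeros((2 ** n_modes, 2 ** n_modes))
    for (i, j), tij in weighted_bonds:
        for s in (0, 1):
            hop = c_ops[2 * i + s].T @ c_ops[2 * j + s]
            Hw -= tij * (hop + hop.T)
    for i in range(n_sites):
        Hw += U * (c_ops[2 * i].T @ c_ops[2 * i]) @ (c_ops[2 * i + 1].T @ c_ops[2 * i + 1])
    return Hw, sum(ci.T @ ci for ci in c_ops)

# Square pyramid: a 4-site base plus an apex (site 4) coupled with amplitude c*t.
c_apex = 0.3   # illustrative value below the quoted critical point c_0 = 0.35
pyramid = ([((0, 1), 1.0), ((1, 2), 1.0), ((2, 3), 1.0), ((3, 0), 1.0)]
           + [((i, 4), c_apex) for i in range(4)])
Hp, Np = hubbard_weighted(pyramid, n_sites=5, U=3.0)
print(charge_gap(Hp, Np, n_elec=4, T=1e-3))  # expected negative for c < c_0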
There is no significant influence of the localized electron charge of the apical site on electron pairing and possible superconductivity in the CuO_2 planes of Bi_2Sr_2CaCu_2O_{8+δ}. When excess apical oxygen does not exist, i.e., δ = 0, this system is an insulator. However, when excess apical oxygen is introduced, hole carriers are supplied into the CuO_2 planes and the material shows superconductivity [48]. Here we try to draw a closer connection to the HTSC perovskites and consider an ensemble of square pyramids of octahedral structure. Fig. 4 shows the charge gap at fixed U = 3 and N ≈ 4 under variation of the coupling term c between the plane and the apex atoms. This picture gives surprisingly plausible evidence for understanding the detrimental role of the excess electron on charge pairing under possible distortions of the pyramidal crystalline structure in perovskites. In Fig. 4, the strong distortion of the pyramid structure for c = 0 (with reduced coordination number) reproduces the charge pairing gap of the planar square geometries. At N ≈ 4, the electron is localized and there is no charge transfer from the apex atom in an ensemble of pyramid clusters at c = 0. The negative charge gap, identical to the spin gap, exists only for c ≤ c_0, where c_0 = 0.35 is a quantum critical point of level crossing degeneracy. The calculated electron distribution, as a function of c, shows that the electron charge residing on the apical site does not contribute to the pairing whenever c is less than c_0. The coupling in the pyramid structure at c < c_0 for N ≈ 4 leads to a charge pairing instability with negative charge and positive spin gaps of equal amplitude, as seen in square clusters at N ≈ 3 in Sec. IV A. In contrast, at c > c_0, the charge gap induced by the change in c leads to electron-hole pairing and a transition into insulating Mott-Hubbard (MH) behavior with ∆_c > 0. The apex atom, coupled to the square-planar geometry, has been shown to have a detrimental effect on the negative charge and positive spin gaps, which are favorable for forming a Bose condensate in the region of instability. We also found coherent pairing in the phase diagram with one hole off half filling in an ensemble of octahedron clusters (perovskite systems) in Ref. [49]. There it is also shown that an octahedron threaded by magnetic flux in hole-rich regions can get trapped in stable minima at half-integral units of the magnetic flux quantum. Such an approach can be applied to understand the detrimental effect of a transverse magnetic field on electron charge and opposite spin pairings for possible superconductivity in HTSCs in planar face centered square (fcs) geometry [24].

V. PHASE T-µ DIAGRAM

A. Tetrahedrons at large U and t = 1

The charge and spin susceptibility peaks in clusters, reminiscent of the singularities in infinite systems, display an extremely rich phase diagram at finite temperatures.

[FIG. 5 caption: The T-µ phase diagram of tetrahedrons without electron-hole symmetry in the optimally doped N ≈ 3 regime near µ_P = 1.998 at U = 40 and t = 1 illustrates the condensation of electron charge and the onset of phase separation for charge degrees below T_c^P. The incoherent phase of preformed pairs with unpaired opposite spins exists above T_s^P. Below T_s^P, the paired spin and charge coexist in a coherent pairing phase. The charge and spin susceptibility peaks, denoted by T* and T_c, define pseudogap regions calculated in the grand canonical ensemble, while the phase boundaries µ_+(T) and µ_−(T) are evaluated in the canonical ensemble. The spin pseudogap region exists for T_s^P < T < T′.
Charge and spin peaks reconcile at T ∼ T′, while the χ_c peak below T_s^P signifies a metallic (charge) liquid (see inset for the square cluster in Ref. [22]).]

The realization of a high transition temperature, T_c, in clusters and bulk systems depends on the interaction strength U, the doping, and the detailed nature of the crystal structure (sign and amplitude of t). As exemplified here, the critical temperatures for the various pairing instabilities in frustrated clusters also strongly depend on the sign of the hopping (t) term. Fig. 5 for t = 1 illustrates, for the tetrahedron at large U = 40, a number of nanophases, defined in Refs. [20,22], found earlier in tetrahedron and bipartite 2×2 and 2×4 clusters at moderate U = 4 values [24]. This diagram captures the essential electron charge and spin pairing instabilities at finite temperatures. The curve µ_+(T) below T_c^P signifies the onset of charge pair condensation. The calculated susceptibility peaks in Fig. 5 correspond to the pseudogap crossover temperature T*. As temperature is lowered below T*, a spin pseudogap opens up first, as seen in NMR experiments [22], followed by the gradual disappearance of the spin excitations, consistent with the suppression of low-energy excitations in the HTSCs probed by STM and ARPES [3,4,5,6,7]. In contrast, the local charge gap, ∆_c, evolves smoothly as temperature decreases below T_c^P. The opposite spin CP phase, with fully gapped collective excitations, begins to form at T ≤ T_s^P, and the spin pairing rigidity gradually grows upon lowering of the temperature. As temperature decreases, both the charge and spin pseudogaps merge into one gap at zero temperature. Therefore, at sufficiently low temperatures, this leads to the BCS-like coherent coupling of electron charge to bosonic excitations (see Sec. IV B). However, the spin gap is more fragile: as temperature increases it vanishes at T_s^P, while the charge pseudogap survives up to T_c^P. The charge inhomogeneities [1,2] in the hole-rich and charge neutral spinodal regions between µ_+ and µ_− are similar to those found in the ensemble of squares and resemble important features seen in the HTSCs. Pairing and transfer of holes is a consequence of the existence of an inhomogeneous background. In the absence of direct contact between clusters, the inhomogeneities in the grand canonical approach establish a transfer of paired electrons via the (thermal) bath medium. Fig. 5 shows the presence of bosonic modes below µ_+(T) and T_s^P for paired electron charge and opposite spin, respectively. This picture suggests condensation of electron charge and spin at various crossover temperatures, while condensation in the BCS theory occurs at a unique T_c value. This result suggests that the thermal excitations in the exact solution are not quasiparticle-like renormalized electrons, as in the BCS theory, but collective paired charges and coupled opposite spins [24]. The coherent pairing of holes here is a consequence of the existence of a homogeneous opposite spin pairing background, consistent with the STM measurements [29]. This led us to conclude that T_s^P can be relevant to the superconducting condensation temperature T_c in the HTSCs. In the absence of spin pairing above T_s^P, the pair fluctuations between the two lowest energy states become incoherent. The temperature driven spin-charge separation above T_s^P resembles the incoherent pairing (IP) phase seen in the HTSCs [2,3,4,5,6].
The charged pairs without spin rigidity above T_s^P, instead of becoming superconducting, coexist in a nonuniform, charge degenerate IP state similar to a ferroelectric phase [25]. The unpaired weak moment, induced by a field above T_s^P, agrees with the observation of competing dormant magnetic states in the HTSCs [4]. The coinciding χ_s and χ_c peaks in the vicinity of the critical temperature T′ show full reconciliation of the charge and spin degrees, as seen in the HTSCs above T_c. However, the charge and spin pseudogaps in the two channels behave differently, or independently. Indeed, we find that the variation of the spin pairing gap with temperature does not cause a change in the charge pairing gap. In the absence of electron-hole symmetry in the tetrahedrons, a reentrant phenomenon can be observed at low temperatures [24]. In Fig. 5, as temperature increases near optimal doping µ ≤ µ_P, clusters undergo a transition from the CP phase to MH-like behavior. Notice that the charge and spin pairing do not disappear in the underdoped regime for µ ≥ µ_P but are governed predominantly by the physics of antiferromagnets at half filling. In contrast, in the overdoped regime at low temperatures, the charge pairing pseudogap gradually approaches the spin pseudogap, as in the conventional BCS theory. Our exact calculations of phase diagrams in various bipartite and nonbipartite clusters provide strong evidence for the existence of a narrow, homogeneous (pseudo)gap ∆_s that vanishes near T_s^P, coexisting with an inhomogeneous, weakly temperature dependent broad gap ∆_c, which disappears at the higher temperature T_c^P > T_s^P. These phase diagrams display coherent and incoherent pairing (pseudo)gaps and possible superconductivity, in agreement with the recent STM measurements in HTSCs [1,2,3,4,5,6,7].

B. Bipartite clusters at large U

As with the cuprates, the bipartite clusters are useful for understanding the magnetic behavior and instabilities in manganites. The phase diagram in Fig. 6 for square clusters at U = 40, quite similar to that of other bipartite clusters in the large-U limit [21], displays characteristic features of manganites with strong electron correlations. In the ground state, the cluster at N = 3 exhibits ferromagnetism in agreement with the Nagaoka theorem [36]. However, we observe saturated ferromagnetism, an S = 3/2 spin state for N ≈ 3 clusters with one hole off half filling, also at finite temperatures. The curve below T_F^{N≈3} signifies the onset of spontaneous magnetization with S = 3/2 for parallel spin condensation. The positive charge gap (∆_c = 0.7787) for electron-hole (exciton) pairing manifests MH-like insulating behavior. In contrast, the clusters show minimum spin S = 0 antiferromagnetism at N ≈ 2 and N ≈ 4 in the ground state and at finite temperatures. The inset in Fig. 6 displays the variation of the spin gap in the various regions. At µ = 1.35 the negative spin gap for N ≈ 3 approaches zero as T → T_F^{N≈3}. Thus the region above T_F^{N≈3} describes a paramagnetic phase with zero spin gap for unpaired spins. In contrast, the positive spin gap in the highly doped regime at µ = 0.25 changes its sign at temperatures above T_F^{2≤N<3}. This picture describes a temperature-driven transition from antiferromagnetism into ferromagnetism with S = 1. At half filling, MH-like antiferromagnetism is stable at very low temperatures, and an unsaturated ferromagnetic state with S = 1 becomes more stable at higher temperatures, T ≥ T_F^{3<N≤4}.
However, T_F^{3<N≤4} → 0 as U → ∞, so unsaturated ferromagnetism with S = 1 at half filling can be stabilized at infinitesimal temperatures. The well separated charge and spin susceptibility curves in the entire parameter range near N ≈ 3 show spin-charge separation and decoupling of charge and spin degrees. The susceptibility peak at T* for N ≈ 3 in Fig. 6 displays spin liquid behavior in the overdoped region for µ ≤ µ_P, while the well developed negative spin gap in the underdoped region (µ > µ_P) at low temperatures describes a ferromagnetic insulator. In Fig. 6, the region of metallic-like behavior is manifested by the charge susceptibility peaks along the T_c curve. The phase diagram, with µ-dependent locally inhomogeneous (∆_s < 0) and homogeneous (∆_s > 0) spin structures at low temperatures coexisting with a charge-ordered, homogeneous Mott-Hubbard-like gap ∆_c > 0, displays spin-charge separation and characteristic features of the CMR manganite La_{1−x}Ca_xMnO_3 and related materials with alternating insulating ferromagnetic and charge-ordered antiferromagnetic regions [51].

VI. CONCLUSION

We have studied the dependence of the ground state and thermal properties of the repulsive Hubbard model on cluster size, geometry, electron number, and interaction strength to understand the inhomogeneous superconducting elements and stripes. The inhomogeneities found in exact calculations of clusters are promising for the description of geometric stripes with alternating superconducting and antiferromagnetic regions in high-T_c cuprates and of magnetic domain structures in manganites. Spatial electron inhomogeneities capture the magnetic and pairing instabilities in clusters and the respective bulk materials. The principal conclusion is that the exact solution for optimal inhomogeneities mimicked in small clusters can target essential features relevant to existing inhomogeneities in nanostructured materials at the nanoscale level. We found charge and spin gaps of equal amplitude in the ground state, similar to the coherent pairing in conventional BCS theory. However, separate Bose condensation of electron charge and spin degrees, with two consecutive transition temperatures into coherent pairing, suggests a mechanism different from the prediction of BCS coherent behavior with a unique critical temperature. This picture is also consistent with the existence of two different energy scales for electron charge and spin pairing condensation temperatures in the HTSCs [1,2,3,4,5,6]. The electronic instabilities in various geometries and in a wide range of U and temperatures will be useful for the prediction of coherent and incoherent electron pairings, ferroelectricity [9,25], and possible superconductivity in nanoparticles, doped cuprates, etc.

In contrast to bipartite clusters, the exact solution for the tetrahedron depends on the sign of t and shows relatively weak dependence on U. For example, the tetrahedron exhibits similar features at U = 4 and U = 40 whenever t = 1. On the other hand, the behavior of the tetrahedron for t = −1 strongly differs from that for t = 1. These results for frustrated clusters show that the properties are more sensitive to a change of the sign of the hopping term than to the U parameter. This fact can explain why itinerant ferromagnetism can occur even at relatively weak interactions in frustrated systems.
Our findings at small, moderate, and large U carry a wealth of information regarding phase separation, ferromagnetism, and Nagaoka instabilities in bipartite and frustrated nanostructures in manganites/CMR materials at finite temperatures. These exact results allow us to understand the origin of level crossings, spin-charge separation, reconciliation, and full Bose condensation [52]. The obtained phase diagrams provide novel insight into electron condensation, magnetism, and ferroelectricity at finite temperatures and display a number of inhomogeneous, coherent, and incoherent nanophases seen recently by STM and ARPES in numerous nanomaterials, assembled nanoclusters, and ultracold fermionic atoms [10,53]. Finally, we conclude that the use of the chemical potential and the departure from zero-temperature singularities in the canonical and grand canonical ensembles are essential for understanding the important thermal properties and the physics of phase separation instabilities and inhomogeneities at the nanoscale level. An article currently in progress aims to study the stability of pairing correlations and magnetism in the presence of a transverse magnetic field [49]. It will be shown that a magnetic flux tube inside the octahedral cluster can become trapped in stable minima at half-integral units of the flux quantum in hole-rich regions.
2009-03-14T22:39:20.000Z
2009-03-09T00:00:00.000
{ "year": 2009, "sha1": "2368fa50ab2536dbf2db43b17f47b0d0f397f02a", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "2368fa50ab2536dbf2db43b17f47b0d0f397f02a", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
5281637
pes2o/s2orc
v3-fos-license
Genome-Wide Analysis of Transposon Insertion Polymorphisms Reveals Intraspecific Variation in Cultivated Rice

Insertions and precise eliminations of transposable elements generated numerous transposon insertion polymorphisms (TIPs) in rice (Oryza sativa). We observed that TIPs represent more than 50% of large insertions and deletions (>100 bp) in the rice genome. Using a comparative genomic approach, we identified 2,041 TIPs between the genomes of two cultivars, japonica Nipponbare and indica 93-11. We also identified 691 TIPs between Nipponbare and indica Guangluai 4 in the 23-Mb collinear regions of chromosome 4. Among them, retrotransposon-based insertion polymorphisms were used to reveal the evolutionary relationships of these three cultivars. Our conservative estimates suggest that TIPs generated approximately 14% of the genomic DNA sequence differences between subspecies indica and japonica. It was also found that more than 10% of TIPs were located in expressed gene regions, representing an important source of genetic variation. Transcript evidence implies that these TIPs induced a series of genetic differences between the two subspecies, including interrupting host genes, creating different expression forms, drastically changing intron length, and affecting expression levels of adjacent genes. These analyses provide genome-wide insights into the evolutionary history and genetic variation of rice.

Transposons were first discovered and characterized in maize (Zea mays; McClintock, 1948). It was found that transposons have a great impact on genome structure and gene function in nearly all organisms (Kidwell and Lisch, 1997). Transposable elements (TEs) occupy a large proportion of nuclear genomes in many plants (Vicient et al., 1999; Meyers et al., 2001). Activities of TEs can affect individual genes, leading to the alteration of gene structure and expression (Bennetzen, 2000). Furthermore, TEs play an important role in unequal homologous recombination events (Kazazian, 2004). Recent insertions and excisions of TEs have given rise to a series of transposon insertion polymorphisms (TIPs; polymorphisms consisting of the presence/absence of a TE at a particular chromosomal location) in closely related species, subspecies, and haplotypes and have served as ongoing sources of genomic and genetic variation (Bennett et al., 2004).

Different from DNA transposons (class II TEs), which can be deleted precisely at a relatively low frequency, the vast majority of retrotransposon insertions (class I TEs) are irreversible, rarely undergoing precise excision. Hence, the absence of a retrotransposon is regarded as the ancestral state. Moreover, the probability that different retrotransposons would independently insert into the exact same location is negligible. Consequently, retrotransposon-based insertion polymorphisms (RBIPs), an important subset of TIPs, are very useful in the study of deeper phylogeny in wide germplasm pools. RBIPs were developed using the PCR-based method for retrotransposon isolation (Pearce et al., 1999) as well as comparative genomics approaches. RBIPs can detect individual insertions by PCR with flanking host sequence primers and a retrotransposon-specific primer (Flavell et al., 1998). They have been applied in the study of population genetics and in phylogenetic analyses of both plants and animals (Stoneking et al., 1997; Batzer and Deininger, 2002; Vitte et al., 2004; Jing et al., 2005).
Although TIPs are abundant and also informative (Du et al., 2006), genome-wide surveys of TIPs have remained scarce in plants. In rice (Oryza sativa), indica and japonica represent the two major types of rice cultivars, with highly diverged genomic backgrounds (Sang and Ge, 2007; Kovach et al., 2007). The sequencing of the indica and japonica rice genomes provides a powerful resource for comparative and functional genomic analyses. The International Rice Genome Sequencing Project has generated highly accurate genome sequences of japonica Nipponbare using a map-based strategy (International Rice Genome Sequencing Project, 2005), and the Beijing Genomics Institute (BGI) used a shotgun approach to sequence the indica 93-11 genome with 6.28× coverage (Yu et al., 2005). Moreover, we sequenced an approximately 23-Mb region on chromosome 4 from another indica cultivar, Guangluai 4, using a bacterial artificial chromosome (BAC)-based approach, which allowed for an in-depth comparative analysis of cultivated rice genome variations and a high-quality assessment of polymorphisms between indica and japonica cultivars. With the available genome sequences, candidate DNA polymorphisms across the rice genome were discerned to develop molecular markers (Feltus et al., 2004; Shen et al., 2004). A handful of polymorphisms were also shown to be important sources of evolutionary change, such as functional variations in key domestication-related genes cloned in rice (Kovach et al., 2007). Therefore, it is likely that the examination of genome-wide sequence differences between the two subspecies of cultivated rice will help us understand the nature of mutations and their evolutionary potential (Ma and Bennetzen, 2004; Tang et al., 2006).

Recent studies have found that more than 10% of the structural genes in rice contain TEs (Sakai et al., 2007), implying that TIPs would also represent significant sources of genetic variation. Previous work has revealed substantial differences in genome size (Han and Xue, 2003), gene content (Ding et al., 2007), and transcript levels (Liu et al., 2007) between the two subspecies. Our question is whether mobile elements played an important role in this genetic differentiation. To address this question, we performed a systematic study of recent transposon insertion events (both class I and class II TEs). In this study, a comparative approach was adopted to detect TIPs between the genomes of indica and japonica, which to our knowledge represents the first genome-wide survey of TIPs in plants. We also used RBIPs in the 23-Mb collinear regions of chromosome 4 to analyze the divergence of three sequenced rice varieties: japonica Nipponbare, indica 93-11, and indica Guangluai 4. We show that transposon insertions affected a large number of genes, potentially serving as an important driving force for intraspecific variation of cultivated rice.

The Abundance of TIPs between Cultivated Rice Genomes

To investigate the differences between the japonica Nipponbare and indica Guangluai 4 genome sequences, we selected an indica-japonica collinear region on chromosome 4, where both cultivars have BAC-based sequences and differ substantially in size (Fig. 1). The total length of this region is 492 kb in Nipponbare and 394 kb in Guangluai 4.
We analyzed TEs and non-TE genes and compared the differences between the two genome sequences in this region. Consistent with conclusions from studies of other organisms (Britten et al., 2003), the divergence is mainly due to large insertions or deletions (indels). We counted all large indels of greater than 100 bp between the two genomes and found a total length of 147.7 kb (30.0% of the DNA sequence in the region of Nipponbare) for inserts in Nipponbare and 47.5 kb (12.1% of the DNA sequence in the region of Guangluai 4) for inserts in Guangluai 4. Surprisingly, over 67% of these indels resulted from TIPs, which generated a total length of 100.5 kb for inserts in Nipponbare and 42.3 kb for inserts in Guangluai 4. We also examined small indels of less than 100 bp and found no indels resulting from intact transposon insertions.

To identify the differences induced by TIPs, we performed a systematic analysis of the approximately 23-Mb sequences of chromosome 4. Our approach for detecting TIPs in rice involved identifying all indels of more than 100 bp between the two genomes and then screening these insert regions to identify de novo transposon insertions. We reasoned that this approach should be effective, because many indels were related to TIPs and because the lengths of most transposon insertions were longer than 100 bp, as indicated in the orthologous region mentioned above.

We aligned all of the orthologous regions between Nipponbare and Guangluai 4 and mined all indels of more than 100 bp. The results were the same whether individual BACs or constructed contigs of Guangluai 4 were used for alignment. We found that there were 821 insertions (>100 bp) in Nipponbare relative to Guangluai 4 and 751 insertions (>100 bp) in Guangluai 4 relative to Nipponbare, with total lengths of 3.2 Mb and 2.4 Mb, respectively (Table I). Overall, the 1,572 insertions were distributed throughout these regions, ranging from 100 to 118,675 bp in length. Large indels of greater than 2 kb were primarily responsible for the different sizes of orthologous regions between Nipponbare and Guangluai 4 (Fig. 2). A homology-based approach was used to identify indels caused by de novo transposon insertions. We regarded an indel as a TE insertion by employing the following criteria: first, it should have similarity to a known TE family and possess the structure of a transposon; second, it should be bounded by a target site duplication (TSD; a sketch of this check is given below). With these criteria, 691 transposon insertions were identified in the approximately 23-Mb orthologous regions of Nipponbare and Guangluai 4 (Supplemental Table S1). Among them, the most abundant polymorphisms identified were Ty3/gypsy insertion polymorphisms. A total of 110 insertions of Ty3/gypsy retrotransposons were detected in Nipponbare, while 127 insertions were detected in Guangluai 4, equivalent to 0.95 and 0.89 Mb of the sequences investigated, respectively (Table II). Other abundant transposon insertions included Ty1/copia, En-Spm/CACTA, and MULE, consistent with their content in the Nipponbare genome. Although the total number of TE insertions is nearly equal in Nipponbare and Guangluai 4, the size of long terminal repeat (LTR)-retrotransposon insertions varies substantially between Nipponbare and Guangluai 4.
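The TSD criterion above can be checked mechanically: a de novo insertion should be flanked by a short direct repeat of its target site. The Python sketch below is a simplified stand-in for the perl script described later in "Materials and Methods"; a real pipeline would also need to tolerate mismatches, ambiguous bases, and alternative indel boundary placements.

```python
def find_tsd(seq, ins_start, ins_end, min_len=2, max_len=18):
    """Return the longest exact direct repeat (2-18 bp) flanking an insertion,
    or None. seq is the genome carrying the insertion; ins_start/ins_end are
    0-based half-open coordinates of the inserted element itself."""
    for k in range(max_len, min_len - 1, -1):          # prefer the longest TSD
        if ins_start - k < 0 or ins_end + k > len(seq):
            continue
        left = seq[ins_start - k:ins_start]             # k bp just 5' of the element
        right = seq[ins_end:ins_end + k]                # k bp just 3' of the element
        if left == right:
            return left
    return None

# Toy check: a 5-bp target site "ACGTA" duplicated around a mock element.
tsd, element = "ACGTA", "G" * 12
seq = "TTTTT" + tsd + element + tsd + "CCCCC"
start = 5 + len(tsd)
print(find_tsd(seq, start, start + len(element)))       # -> ACGTA
```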
The average length of LTR-retrotransposons is 7.5 kb in Nipponbare and 6.5 kb in Guangluai 4, which may suggest that internal deletions of LTR-retrotransposons occur more frequently in Guangluai 4. Moreover, some DNA transposon families seemed to show insertion bias between the two subspecies: insertions of En-Spm/CACTA and MULE were more abundant in Nipponbare than in Guangluai 4, while Tourist/Harbinger insertions were more abundant in Guangluai 4.

In the approximately 23-Mb orthologous regions, there are at least 179 "young" LTR-retrotransposons in the Nipponbare genome (covering about 1.34 Mb of sequence), which accumulated after the divergence of japonica and indica from a common ancestor. As the total length of the rice nuclear genome was calculated to be 389 Mb and chromosome 4 has a relatively modest retrotransposon content (International Rice Genome Sequencing Project, 2005), we estimate that there are more than 3,000 young LTR-retrotransposons, with a total length of 22.6 Mb, in the rice nuclear genome (equivalent to approximately 6% of the rice genome). Compared with all LTR-retrotransposons in the rice genome, young LTR-retrotransposons account for less than 10% in number but more than 40% in size, mainly because fewer deletions have occurred in the newly inserted LTR-retrotransposons.

RBIPs as Reagents to Reveal an Evolutionary History

To determine the evolutionary history of the three cultivated rice varieties whose genomic sequences are available, Nipponbare, Guangluai 4, and 93-11, we tested for the presence/absence of the RBIPs identified between Nipponbare and Guangluai 4 in the BGI 93-11 genome by searching against the BGI 93-11 contigs. An insertion of a TE was considered present in 93-11 when the corresponding region of 93-11 carried the TE insertion. Alternatively, an insertion was judged absent in the 93-11 genome if the TE sequences did not exist in the orthologous region of 93-11 (see "Materials and Methods" for details).

In total, 163 retrotransposon insertions present in the Nipponbare genome and 165 retrotransposon insertions present in the Guangluai 4 genome were investigated in the 93-11 genome (Table III). Of the 163 retrotransposon insertions present in Nipponbare, 148 are absent in the 93-11 genome (consistent with Guangluai 4), while only 15 are present in 93-11 (consistent with Nipponbare; Fig. 3, type I). This result indicates that the radiation between the gene pools of Guangluai 4 and 93-11 probably occurred after the divergence between indica and japonica. The 15 exceptions reflect introgression between the two gene pools that may have occurred hundreds of years ago, as reported previously (Feltus et al., 2004). Based on these data, we estimated that the introgression rate would be about 9.2% (15 of 163 = 9.2%). As for the 165 retrotransposon insertions in the Guangluai 4 genome, 100 are present in the 93-11 genome (Fig. 3, type II), while 65 are absent in 93-11 (Fig. 3, type III).
Furthermore, the two distinct states (presence or absence) correlate with the ages of the insertions. This is based largely on the following evidence. First, the average length of the former 100 retrotransposon insertions is 5,212 bp, while that of the latter 65 retrotransposon insertions is 7,163 bp. Second, among the former retrotransposons, the ratio of solo LTRs to intact LTR elements is about 2:1, while among the latter it is about 0.8:1. These results suggest that most of the 100 retrotransposon insertions present in the 93-11 genome were inserted into the Guangluai 4 gene pool before the divergence of the Guangluai 4 and 93-11 gene pools and after the divergence of indica and japonica, whereas most of the 65 retrotransposon insertions absent in 93-11 were inserted into the Guangluai 4 gene pool after its divergence from a common ancestor with the 93-11 gene pool, although a few exceptions exist, possibly due to introgression between the two gene pools. It remains unclear whether the other 54 insertions (26 in Nipponbare and 28 in Guangluai 4) are present in 93-11, because no corresponding sequences were found in the 93-11 contigs or the flanking sequences of the insertions were repetitive in the genome.

In addition, we examined DNA transposon polymorphisms in 93-11. Because of the possibility of excision of DNA transposons and the lack of ancestor information, we could not determine whether an individual TIP was an insertion or a precise excision event. However, we found that, of 119 DNA transposon insertions present in Nipponbare, only 23 were also present in 93-11 (Supplemental Table S5). The 23 insertions present in 93-11 can result from introgression events or from excision events. Deducting the introgression portion (9.2%), likely only about 10.1% resulted from excision events (23/119 − 9.2% = 10.1%). Based on these results, we propose that the precise excision of DNA transposons is not frequent in rice.

Genome-Wide Detection of TIPs between Nipponbare and 93-11

With the availability of two rice whole-genome sequences and a whole-genome alignment, we started our mining from the alignment of BGI 93-11 contigs with The Institute for Genomic Research (TIGR) Nipponbare pseudomolecule 5.0 (Ouyang et al., 2007). Because of the assembly problems of the 93-11 repetitive regions caused by the whole-genome shotgun strategy, we mined only the insert regions in Nipponbare; insert regions in the 93-11 genome had to be neglected. For each candidate insert region in Nipponbare, we also checked whether any 93-11 contig covered both part of the insert region and its flanking sequence; if one was found, the insert was excluded from further analysis (see "Materials and Methods" for details). Following this algorithm, 4,348 insert regions of more than 100 bp were found in the Nipponbare genome. The average length of the insert regions in Nipponbare is 2,681 bp, with the longest insert of 58,750 bp, which is filled with LTR-retrotransposons around the centromeric region of chromosome 7. After applying the approach for detecting transposon insertions described above, we identified 2,041 TE insertions in the Nipponbare genome (Fig. 4; Supplemental Table S2).
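The introgression and excision estimates in the RBIP and DNA-transposon analyses above are simple proportions; spelled out with the numbers from the text:

```python
introgression = 15 / 163                    # RBIPs present in both Nipponbare and 93-11
print(f"introgression rate ~ {introgression:.1%}")            # ~9.2%

shared_dna_tips = 23 / 119                  # DNA-transposon TIPs shared with 93-11
excision = shared_dna_tips - introgression  # subtract the introgressed fraction
print(f"inferred precise-excision fraction ~ {excision:.1%}")  # ~10.1%
```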
TIPs are not randomly distributed on five of the 12 rice chromosomes (chromosomes 1, 3, 4, 5, and 8; P < 0.01; Supplemental Table S6). The uneven distribution is to some extent caused by the positional bias of TE insertions. It was also found that some regions lacking TIPs are also regions of low polymorphism between Nipponbare and 93-11. For instance, the longest region lacking TE insertions is also the longest single-nucleotide polymorphism (SNP)-poor region, located on chromosome 5, as shown in Figure 5 (9-13 Mb in the pseudomolecules [Feltus et al., 2004]). This may reflect the introgression of chromosomal segments between Nipponbare and 93-11.

Types of TE-Induced Genetic Variations

TIPs have a considerable effect on genome structure and size, as described above. Moreover, they also contribute to the variation of individual genes. Various ways have been discovered in which TIPs can affect the intraspecific variation of individual genes (Fig. 6). To explore the evolutionary significance of TIPs in genetic variation, we examined all of the TIPs in expressed gene regions and determined whether any variation caused by TIPs existed between indica and japonica (Supplemental Tables S3 and S4). Since EST and cDNA sequences provide direct evidence for gene expression and are currently the most important resources for transcriptome exploration in rice, we considered a TIGR gene locus an expressed gene region if it had at least one corresponding EST (or cDNA) in the database. The variations in these regions were classified into three types: (1) alteration of the cDNA sequence; (2) change of intron size; and (3) rearrangement of the promoter region. We counted the number of TIPs associated with each of the three types of genetic variation and observed that at least 10% of TIPs occurred in expressed gene regions, leading to changes ranging from subtle to dramatic (Tables IV and V).

Alteration of cDNA Sequence

After the divergence, TIPs within gene regions are likely to result in a variety of outcomes, including the alteration of gene structure and expression. To investigate these TE-induced changes at the transcriptional level, we searched for transcripts (including Fl-cDNAs and ESTs) around the insertion sites. If there was a cDNA or EST match, the gene annotation was inspected in Nipponbare and 93-11 on the basis of rice transcript alignments and TIGR annotation release 5. Then, individual examinations were conducted to identify the differences caused by the TE insertion. TE insertions into TE-related genes were excluded manually.
Overall, 4.3% of TIPs between Nipponbare and 93-11 and 3.9% of TIPs between Nipponbare and Guangluai 4 resulted in abnormal termination or alternative splicing, respectively. TEs that insert within coding regions are most likely to result in null mutations. For example, in hexaploid wheat (Triticum aestivum), the xylanase inhibitor protein I gene (XIP-I), whose crystal structure, expression pattern, and function have been studied in detail, was shown to function in plant defense against secreted fungal pathogen xylanases through its competitive inhibitory activity against fungal endo-1,4-β-D-xylanases (Elliott et al., 2002; Flatman et al., 2002; Payan et al., 2004; Igawa et al., 2005). Although several XIP-type xylanase inhibitors, riceXIP (Goesaert et al., 2005), OsXIP (Tokunaga and Esaka, 2007), and RIXI (Durand et al., 2005), were recently isolated from rice, no ortholog of wheat XIP-I had been reported in rice to date. We found its ortholog on chromosome 6 of indica 93-11, named indica XIP-I here (Fig. 7A). It does not have an ortholog in the Nipponbare genome. The mutation is caused by the insertion of a Dasheng element (a type of LTR-retrotransposon) into the coding region of the XIP-I gene locus in japonica Nipponbare (Fig. 6A). Further analysis of Nipponbare's transcripts revealed that transcription stops at the LTR of the TE, creating a truncated open reading frame (ORF) that lacks the second half of the host XIP-I gene. As expected, no transcriptional activity can be observed in the second half of the gene in Nipponbare, according to both transcript evidence and Affymetrix microarray data of different rice cultivars (Fig. 7, B-D). Expression analysis of the gene was also carried out by reverse transcription (RT)-PCR, and the result is shown in Figure 8. We further tested whether the XIP-I gene is present in the genomes of other indica, japonica, and wild rice varieties using PCR. The LTR insertion in the XIP-I gene was detected in nearly all japonica varieties (except japonica Xuehehanzao; Fig. 8), and no insertion was found in any indica variety or in three wild Oryza species, indicating that the truncated XIP-I gene is unique to japonica varieties.

It was found that 3′ untranslated regions (UTRs) in exons are preferential insertion sites, which is easily understood because insertions in 3′ UTRs are likely less destructive than insertions in other parts of the coding region. On the other hand, they also provide raw material for new protein-coding regions. For example, we found that TE insertions in 3′ UTRs created alternative splice isoforms. OsWRKY8, a member of the WRKY gene family encoding transcription factors involved in the regulation of various biological processes (Xie et al., 2005), carries a copia insertion in its 3′ UTR (Fig. 6B). Two alternative transcript isoforms coexist in Nipponbare: one is identical to the gene isoform of indica 93-11, while the other acquired four additional exons in the transposon region, giving rise to a chimeric gene containing both the principal part of the host OsWRKY8 gene and a fraction of the LTR.

Insertions in introns can also influence gene splicing sites. For instance, we found that a putative rice purine permease, a homolog of AtPUP11, shifted its transcription start site into the transposon hAT, generating a truncated ORF lacking its original first exon (Fig. 6C).
Change of Intron Size

We aligned all of the KOME Fl-cDNA and National Center for Biotechnology Information EST sequences with the genome sequences of Nipponbare using BLASTN and found that 5.3% of TIPs between Nipponbare and 93-11 and 4.1% of TIPs between Nipponbare and Guangluai 4 occurred in intron regions, respectively. TEs that insert in intron regions are relatively less harmful and consequently have a greater chance of surviving. Generally, this is the cause of intron length polymorphisms (Wang et al., 2005). Although small indels are usually found in introns, some transposon insertions can change intron length greatly, engendering introns longer than 15 kb (Fig. 6D).

Modification of Expression Level and Rearrangement of Promoter Region

Considering that many plant promoters contain fragments of TEs (White et al., 1994), we investigated the TIPs in promoter regions. The insertion of TEs could potentially modify the expression of adjacent genes, through the disruption of native promoter regulation or the donation of new regulatory signals (Kang et al., 2001; Pooma et al., 2002; Kashkush et al., 2003). In comparison with exons and introns, which can be identified precisely, regulatory regions are harder to define. Here, we chose the genomic sequence 250 bp upstream from the predicted transcription start site of an expressed gene as the potential promoter region. A total of 3.8% of TIPs between Nipponbare and 93-11 and 2.3% of TIPs between Nipponbare and Guangluai 4 were found in such upstream regions (Fig. 6E).

We then experimentally compared the relative expression levels of 15 genes that possessed TIPs in the defined upstream regions between japonica Nipponbare and indica 93-11; the underlying calculation is sketched below. The results of real-time RT-PCR analyses of 14-d-old seedlings are shown in Supplemental Figure S1. Of the 15 genes examined, five showed greater than 2-fold differences in relative expression levels between Nipponbare and 93-11. In particular, two of them, Os01g49110 and Os12g23754, showed 23-fold down-regulation and 18-fold up-regulation with the TE insertion, respectively.

Interestingly, the majority of TIPs in the upstream regions of expressed genes are DNA transposons (80.5%, i.e., 62 of 77 TIPs between Nipponbare and 93-11 in the promoter region [Supplemental Table S3]), significantly higher than the average proportion (49.9%, i.e., 1,018 of 2,041 TIPs between Nipponbare and 93-11). Of these, MULEs also account for a relatively high portion (36.4% in the promoter region versus 13.8% on average). Given the report that the vast majority of Pack-MULE transcripts are initiated from promoters in element sequences (Jiang et al., 2004), we propose that promoters in the terminal inverted repeat (TIR) regions of DNA transposons play a complementary role. We did not find any new non-TE-related genes created by the newly inserted transposons themselves, although it has been suggested that some transposons, like MULEs, can capture host gene fragments and form novel protein-coding genes at a new locus in the genome.
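The fold changes quoted above come from real-time RT-PCR with a primer efficiency of 2.0, i.e., the standard ΔΔCt calculation. A minimal sketch follows; the Ct values are hypothetical, and whereas the text normalizes against two reference genes (UBQ5 and eEF-1a, typically by averaging), this sketch uses a single reference for clarity.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal, efficiency=2.0):
    """Fold change of a target gene by the ddCt method, assuming a common
    amplification efficiency (2.0 per the text) for target and reference genes.
    ct_*     : mean Ct in the sample of interest (e.g., 93-11)
    ct_*_cal : mean Ct in the calibrator sample (e.g., Nipponbare)"""
    d_ct = ct_target - ct_ref                     # normalize to the reference gene
    d_ct_cal = ct_target_cal - ct_ref_cal
    return efficiency ** -(d_ct - d_ct_cal)       # fold change vs. the calibrator

# Hypothetical Ct values: the target crosses threshold 4 cycles later in 93-11
# than in Nipponbare after normalization -> 2^-4 = 16-fold down-regulation.
print(relative_expression(26.0, 20.0, 22.0, 20.0))    # -> 0.0625
```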
Utility of the TIPs

We have identified 691 TIPs between Nipponbare and Guangluai 4 in the 23-Mb collinear regions of chromosome 4 and 2,041 TIPs between the Nipponbare and 93-11 genomes. These TIPs can be used to develop molecular markers. About half of the transposon insertions are less than 1.5 kb. For these small TE insertions, a single PCR with primers derived from the flanking regions is feasible, resembling simple sequence repeat polymorphisms. For larger TE insertions, two rounds of PCR need to be performed: in the first reaction, amplification is tested using primers flanking the insertion; in the second, one primer is designed from the flanking sequence and the other recognizes the LTR/TIR sequence of the corresponding TE. Then, as a codominant marker system, the different allelic states (presence and absence of the transposon insertion) at a locus can be revealed (Flavell et al., 1998).

Although the TIPs identified here are based on differences between only one japonica and two indica varieties, a large portion of them should be applicable to combinations of japonica and its related wild species (e.g., Oryza rufipogon) or to other combinations of japonica and indica cultivars, because the RBIPs and the numerous DNA TIPs identified here can be regarded as events of the recent past (after the divergence between indica and japonica). For example, among the 2,041 TIPs between Nipponbare and 93-11, 94 are located in regions that have corresponding Guangluai 4 BAC sequences. After comparison with Guangluai 4, we found that 85.1% (80 of 94 polymorphisms) were also polymorphic between Nipponbare and Guangluai 4.

The marker system based on TE insertions offers an ideal tool to evaluate the transposition history, frequency, and timing of mobile elements in rice. Since the patterns of RBIPs can reveal the relationships among cultivars in a phylogenetically meaningful way, phylogenetic and biodiversity studies can be carried out using RBIPs. Vitte et al. (2004) tested 13 RBIPs in 66 rice varieties of both indica and japonica types and suggested that there were at least two independent domestication events of rice in Asia. More RBIPs would be needed to study genetic diversity in Oryza species and to determine the extent to which introgression has occurred within and between cultivated and wild rice species.

History of Rice Evolution: Early Radiation Followed by Introgression

TE insertion polymorphisms are distributed quite unevenly. This may reflect local variation in TE insertions caused by differences in chromosome physiology (e.g., chromatin features, euchromatic versus heterochromatic regions). But we also observed that regions of low TE insertion polymorphism appear to be correlated with regions of low SNP density. Occasional crosses between the ancestors of 93-11 and Nipponbare may have occurred, leading to the introgression of chromosomal segments. This may explain why there are 15 retrotransposon insertions absent in Guangluai 4 but shared by Nipponbare and 93-11.
In this study, two indica varieties, 93-11 and Guangluai 4, were investigated; they are, respectively, the paternal cultivar of a superhybrid and a cultivar widely grown in China several decades ago. To our surprise, TIPs between them are not rare. We found a number of de novo transposon insertions that occurred only in Guangluai 4, most of which date back more than 0.1 million years. Although limited introgression may exist, it still cannot account for the deep divergence between the 93-11 and Guangluai 4 genomes. Therefore, the radiation of the indica genomes occurred unambiguously earlier than the domestication of rice, supporting multiple domestications of O. sativa.

Estimating the Level of Genomic Variation Caused by TIPs in Rice

After the completion of rice genome sequencing, the content of all types of transposons in the rice genome was estimated to be 35%. Our mining now provides an opportunity to measure the level of variation caused by TIPs among rice varieties. The 23-Mb collinear regions of Nipponbare and Guangluai 4 are both derived from high-quality BAC-based sequences; therefore, the number of TIPs identified between Nipponbare and Guangluai 4 can be used as a gold standard to estimate the number of TIPs in the rice genome. Because the 23-Mb regions of chromosome 4 represent about 6% of the rice genome, there would be more than 11,517 TIPs in the rice genome on average (691/6% = 11,517), accounting for 53.5 Mb of DNA sequence (3.21 Mb/6% = 53.5 Mb). Hence, more than 14% of the genomic DNA sequences that differ between indica and japonica are due to the movement of TEs. We propose that the average density of TIPs is broadly comparable between genomes of different varieties, although the 2,041 polymorphic transposon insertions identified between Nipponbare and 93-11 account for only about one-sixth of the expected number. This is mainly due to the shotgun assemblies of 93-11. Despite the 6.28× coverage, the International Rice Genome Sequencing Project estimated that the nonredundant coverage of the indica 93-11 assembly was 69%. Moreover, it consists of thousands of small contigs, and misassembly of large pieces is also likely. In our study, we found that the same 93-11 contigs could be aligned to different regions in the Nipponbare genome, and several polymorphic 93-11 contigs could be aligned to one region of the Nipponbare genome. We therefore had to apply relatively strict selection criteria to improve the accuracy of our investigation; consequently, we missed some TIPs. These observations indicate that the draft sequences of 93-11, although providing a genome-wide survey of TIPs, fall short of ascertaining all variation between the subspecies.
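The genome-wide estimate above is a straightforward scaling from the 23-Mb gold-standard region; restated as code with the numbers used in the text:

```python
region_fraction = 23 / 389        # the gold-standard region, ~5.9% of the 389-Mb genome
tips_in_region = 691              # TIPs between Nipponbare and Guangluai 4
tip_mb_in_region = 3.21           # Mb of sequence covered by those TIPs

# The text rounds region_fraction to 6%, giving 691/0.06 = 11,517 TIPs and
# 3.21/0.06 = 53.5 Mb; the unrounded fraction yields slightly larger values.
print(f"expected TIPs genome-wide: {tips_in_region / region_fraction:,.0f}")
print(f"expected TIP sequence: {tip_mb_in_region / region_fraction:.1f} Mb")
```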
As described above, the approach we used to recognize transposons relied primarily on sequence similarity to known repeats, and the continued improvement of the rice TE databases allowed the identification of most TEs. Although this homology-based method with TSD detection performed well here, genome comparison followed by internal structure analysis provides an innovative and complementary method for TE discovery, especially for detecting new TE families and instances (Caspi and Pachter, 2006), because TEs are highly enriched in these insert regions. In fact, we found that at least 56.7% of large insertion regions (>100 bp) were associated with transposon insertions. From 777 large insertion regions with direct repeats whose terminal sequences had no similarity to the known repeat databases, we found that at least 19 showed clear structural features of TEs (including six LTR-retrotransposons, four MITEs, and one MULE). These few elements are transposon insertions missed in our survey. Therefore, an integrated approach combining comparative genomic methods and structure-based methods would be desirable, given the existence of transposons with low copy numbers and the anticipated availability of multiple genome sequences of closely related species, subspecies, and varieties (Bergman and Quesneville, 2007).

Transposon Insertions as Important Sources of Genetic Variation in Rice

In this study, we showed that more than 10% of TIPs occurred in expressed gene regions. We provided a number of cases exemplifying the wide spectrum of changes induced by transposon insertions, involving deleterious effects, alternative splicing, shifts of the transcription initiation site, loss or gain of exons, and so on. We estimated that the alterations at the level of the cDNA sequence between the rice subspecies could add up to more than 400 (approximately 1% of all rice genes; Table IV). This is still a conservative estimate, because the variations identified in our study were based mainly on rice Fl-cDNA or EST sequences, and those lacking transcript evidence in the databases were not examined.

Moreover, we used quantitative RT-PCR to examine the relative transcription levels of 15 genes that possessed TIPs in the upstream regions between Nipponbare and 93-11. At least two genes showed dramatic changes in expression level between the two cultivars. Therefore, TIP-influenced expression differences could potentially serve as an important source of genetic variation. An explicit experimental evaluation of the impact of TIPs on global gene expression, however, awaits full-scale transcriptional profiling in future work.

Among the thousands of polymorphic TE insertions identified, we did not find any element carrying a gene fragment and creating a new gene, if TE-related genes, such as the transposases carried along by the elements, are neglected. To our surprise, however, two transcription factors regulating light signaling in Arabidopsis (Arabidopsis thaliana) were reported to be co-opted from a transposase (Lin et al., 2007). So we cannot exclude the possibility that some transposases brought in by TE insertions have important functions and may explain intraspecific variation.
Genomic Sequence Alignments and Identification of Indels

Physical mapping of the rice (Oryza sativa) indica Guangluai 4 chromosome 4 was conducted by an integrated approach (Zhao et al., 2002), and the sequenced BACs were assembled into 87 contigs (http://www.ncgr.ac.cn/english/edatabasei_ctg.htm). The overlap regions of Guangluai 4 BACs were noted to avoid double-counting in the subsequent analysis. The BAC sequences and the 87 contigs of indica Guangluai 4 that were more than 100 kb in length were aligned with the rice pseudomolecules (TIGR release 5) to determine their corresponding regions in japonica Nipponbare by BLASTN search with a threshold e-value of 10^-100. The identified collinear regions on the japonica chromosome 4 were extracted for further comparison. Candidate indels were identified using the diffseq program (with default parameters) in the EMBOSS package (Rice et al., 2000), and indels of more than 100 bp were further confirmed by BLAST2 (Altschul et al., 1997). Two types of comparison with the corresponding japonica sequences, by BACs directly or by the 87 assembled contigs, were performed.

The alignment results of BGI 93-11 contigs against the Nipponbare pseudomolecules, generated by the software nucmer, were downloaded using the GFF Dumper on the TIGR Genome Browser. We found that a small number of anchor results were self-contradictory; that is, two 93-11 contigs localized to the same location yielded opposite patterns (insertion or no insertion in japonica). Hence, a perl script was written to remove all such abnormal anchor results. We used only maximal exact matches that were unique in both the query and the reference sequences as alignment anchors, to avoid potential errors caused by misassembly or inaccurate anchoring. Then, another script was developed to mine all indels of more than 100 bp based on the cleaned anchor results, as sketched below. Indels of more than 100 bp were further confirmed by BLAST2.

Mining of TIPs in the Rice Genome

For each insertion region identified above, the query sequence, composed of the insertion region and its flanking DNA (both 100 bp upstream and 100 bp downstream), was extracted and screened against all known TE sequences using RepeatMasker (open version 3.0.5). The known TE sequences included all transposons and transposon-like elements collected in Repbase (volume 12, issue 9; http://www.girinst.org), the RTEdb (Juretic et al., 2004), the TIGR Rice Repeat Database (ftp://ftp.tigr.org/pub/data/TIGR_Plant_Repeats/TIGR_Oryza_Repeats.v3.3), and the MULE TIR library (Juretic et al., 2005). We used a Smith-Waterman cutoff score of 225 calculated by the cross_match program (other settings: -nolow, -no_is, -nocut). After that, the insertion regions were set aside unless they were recognized as intact transposon elements or both of their terminal sequences belonged to the same transposon family, distinct from their flanking sequences. Meanwhile, all indels were examined by a perl script to determine whether potential TSDs (2-18 bp) were present. All candidate transposon insertions satisfying both criteria (i.e., homology to known TE sequences and detection of TSDs) were further inspected. The classification of the identified transposon insertions was based on the descriptions in the repeat databases. Transposon insertions that had different definitions in different repeat databases were removed.
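The indel-mining step referred to above can be pictured as a walk over consecutive unique anchors: a large insertion in the reference appears as a reference-side gap much larger than the query-side gap between neighboring anchors. A schematic version follows; the anchor tuples are a hypothetical simplification (the actual pipeline parsed nucmer anchor output), and it assumes sorted, collinear anchors.

```python
def mine_insertions(anchors, min_len=100):
    """anchors: sorted list of (ref_start, query_start, length) exact unique
    matches on a collinear segment. Yields (ref_gap_start, ref_gap_end) for
    insertions >min_len bp in the reference relative to the query."""
    for (r1, q1, length), (r2, q2, _) in zip(anchors, anchors[1:]):
        ref_gap = r2 - (r1 + length)       # unaligned reference bases between anchors
        query_gap = q2 - (q1 + length)     # unaligned query bases between anchors
        if query_gap >= 0 and ref_gap - query_gap > min_len:
            yield (r1 + length, r2)

# Toy data: a 5-kb insertion in the reference between two 1-kb anchors.
anchors = [(0, 0, 1000), (6000, 1000, 1000)]
print(list(mine_insertions(anchors)))      # -> [(1000, 6000)]
```

Insertions in the query are found symmetrically by swapping the roles of the two coordinates.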
Characterization of RBIPs between Nipponbare and Guangluai 4 in 93-11 Targeted Regions

To determine the states (presence or absence) of the transposon insertions identified between Nipponbare and Guangluai 4 in the corresponding regions of indica 93-11, we conducted sequence comparisons targeting the transposon insertion sites. For each transposon insertion identified between Nipponbare and Guangluai 4, three unique 200-bp sequences were extracted and used to search against the assembled indica 93-11 contigs, using BLASTN with a threshold e-value of 10^-20. The first two unique 200-bp sequences were each composed of 100 bp of one transposon terminal sequence and 100 bp of its flanking DNA, from the genome carrying the TE insertion. The third was a 200-bp sequence free of the transposon insertion, from the genome without the TE insertion. An insertion of a TE was considered shared by the indica 93-11 genome when either of the first two unique sequences was found in the assembled indica 93-11 contigs (at a threshold identity of 95%). Alternatively, an insertion was judged absent in indica 93-11 when the third unique sequence was found in the 93-11 contigs, at the same threshold. We regarded an insertion as lacking an explicit target region if the BLAST search did not yield any expected result or yielded two equally perfect hits indicating both the presence and absence of the insertion in the 93-11 genome. Then, the extracted region and its clear ortholog were aligned using BLAST2 to check for the presence or absence of the insertion. Meanwhile, we also used the anchor results mentioned above to find the corresponding locations of the 93-11 contigs, to confirm the states of the TE insertions in the 93-11 targeted regions and eliminate potential artifacts (a schematic of this decision logic appears below).

Classification of LTR-Retrotransposons

The LTR-retrotransposon insertions identified between Nipponbare and Guangluai 4 were taken for further analysis. Sequence comparisons and structural analysis were used to classify solo LTRs, intact LTR elements, and other truncated elements. Intact LTR-retrotransposons were identified with the LTR_Finder program (Xu and Wang, 2007; http://tlife.fudan.edu.cn/ltr_finder/, default parameters) and from the alignment of their terminal sequences using the BLAST2 program; the paired length between the two terminal sequences of a retrotransposon had to be longer than 100 bp, with identity greater than 85%. Solo LTR retroelements were identified by sequence homology search against all known TE repeat databases using RepeatMasker, as described above. Elements composed of a single LTR were recognized as solo LTR retroelements.
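Returning to the RBIP scoring described above, the decision logic over the three 200-bp probes reduces to a small truth table. In this sketch, probe construction and the BLASTN calls are abstracted into boolean flags, which is a hypothetical simplification of the pipeline:

```python
def classify_rbip(hit_left_junction, hit_right_junction, hit_empty_site):
    """Score one RBIP locus in the 93-11 assembly from three probe results.
    Each flag is True if the corresponding 200-bp probe (one TE terminus plus
    flank, the other TE terminus plus flank, or the TE-free empty site) matched
    a 93-11 contig at >=95% identity with e-value <= 1e-20."""
    junction = hit_left_junction or hit_right_junction
    if junction and hit_empty_site:
        return "ambiguous"      # both patterns hit: no explicit target region
    if junction:
        return "present"        # the TE insertion is shared by 93-11
    if hit_empty_site:
        return "absent"         # 93-11 carries the pre-insertion empty site
    return "undetermined"       # no corresponding sequence found in the contigs

print(classify_rbip(True, False, False))    # -> present
```

Loci scored "ambiguous" or "undetermined" correspond to the 54 insertions whose state in 93-11 could not be resolved.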
EST Analysis and Gene Prediction

All publicly available rice ESTs were obtained from the National Center for Biotechnology Information EST database (http://www.ncbi.nlm.nih.gov/projects/dbEST/). Full-length cDNAs from both KOME (http://red.dna.affrc.go.jp/cDNA/; japonica Nipponbare; Rice Full-Length cDNA Consortium, 2003) and the National Center for Gene Research (http://www.ncgr.ac.cn/cDNA/; indica Guangluai 4; Liu et al., 2007) were also included. Transposon insertions and their flanking regions were used to search against the EST/Fl-cDNA database using BLASTN with a threshold e-value of 10^-20. The candidate transcripts were then aligned with the genomic sequences using GMAP (Wu and Watanabe, 2005) with a cutoff of minimum 95% identity over 70% of the length of the transcript (see the filter sketch below). Gene predictions in Nipponbare were based mainly on the annotation provided by TIGR. The exon-intron structures and the transcript isoforms of the genes were re-examined individually via alignment with their corresponding cDNAs/ESTs. If a cDNA or EST is transcribed through an insertion site in the genome without the TE insertion, or possesses a truncated gene plus a fragment of the flanking transposon sequence in the genome with the TE insertion, the gene was considered to have different transcript structures between the two genomes. In addition, if various transcript isoforms around the insertion site were found in the genome carrying the TE insertion (i.e., if two transcripts from the same genome showed different exon-intron structures at the insertion site; Fig. 6B), the case was classified as alternative splicing. For each expressed gene, the transcription start site was determined by comparing UTR sequences (TIGR release 5) with the corresponding genomic sequence. TIPs within 250 bp upstream of the transcription start site were defined as being in the potential regulatory region.

Phylogenetic Analysis

A BLASTp search against all annotated proteins in the whole rice genome at TIGR (release 5) was conducted using the wheat (Triticum aestivum) XIP-I protein (GenBank accession no. CAD19479) as the query. The search identified 30 proteins with an e-value cutoff of 1E-5. Among them, Os06g25010 and Os06g24990, two gene fragments resulting from the TE insertion, were replaced by indica XIP-I. The protein is encoded by the longest ORF within an indica rice full-length cDNA (GenBank accession no. CT836240), and there are no nucleotide differences between the indica cDNA and the 93-11 genome sequence. Os12g18750 was removed because it showed an incomplete domain and low homology when checked individually. The protein sequences were aligned using ClustalW (Thompson et al., 1994). Unrooted phylogenetic trees were generated in MEGA4 (Tamura et al., 2007) by the neighbor-joining method (Saitou and Nei, 1987) using the Poisson correction method (Zuckerkandl and Pauling, 1965). The 50% majority-rule condensed tree is shown in Figure 7. The percentages of replicate trees in which the associated taxa clustered together in the bootstrap test (1,000 replicates) are indicated next to the branches (Felsenstein, 1985). For convenience, we have removed the LOC prefix from all TIGR locus identifiers.
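The transcript-alignment cutoff used above (at least 95% identity over at least 70% of the transcript length) is a simple filter; a schematic version over hypothetical alignment records:

```python
def passes_cutoff(aln, min_identity=95.0, min_coverage=0.70):
    """aln: dict with 'identity' (percent), 'aligned_len', and 'transcript_len'."""
    coverage = aln["aligned_len"] / aln["transcript_len"]
    return aln["identity"] >= min_identity and coverage >= min_coverage

hits = [{"identity": 97.2, "aligned_len": 900, "transcript_len": 1000},
        {"identity": 99.0, "aligned_len": 500, "transcript_len": 1000}]
print([passes_cutoff(h) for h in hits])    # -> [True, False]
```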
Microarray Data Extraction and Statistical Analysis

From the Rice Multi-platform Microarray Search, we obtained the two Affymetrix probe set identifiers, OsAffx.27816.1.S1_at and OsAffx.27815.1.S1_s_at, which represent the two gene fragments of indica XIP-I separated by the TE insertion. The probes in the two probe sets were remapped to the rice genomes, the Nipponbare pseudomolecules and the 93-11 contigs, by BLASTN. We downloaded the microarray data files of each experiment from the GEO Web site (http://www.ncbi.nlm.nih.gov/geo/). Overall, there are 57 chips of indica IR64 (45 from GSE6893 and 12 from GSE6901) and 45 chips of japonica Nipponbare (13 from GSE7951, 4 from GSE6908, 24 from GSE6719, and 4 from GSE6720). The signal intensity data were extracted using a perl script. Pearson's correlation coefficient was applied in the linear correlation analysis. The significance of the slope of the regression line was determined with the R language package.

Real-Time PCR Analysis

RNAs of 14-d-old seedlings of indica 93-11 and japonica Nipponbare were extracted as described above. Quantitative PCR was performed on the Applied Biosystems 7500 real-time PCR system using SYBR Premix Ex Taq (TaKaRa). The PCR thermal cycling conditions were as follows: denaturation at 95°C for 10 s, then 40 cycles of 95°C for 5 s and 60°C for 34 s. The two rice genes used as internal reference genes to calculate relative transcript levels were UBQ5 (AK061988) and eEF-1a (AK061464; Jain et al., 2006). The primer efficiency used to calculate the relative quantification was 2.0. The primer sequences are listed in Supplemental Table S7. Three technical replicates were used for the real-time PCR analysis. We performed Student's t test (two-tailed) to identify relative differences between Nipponbare and 93-11.

Figure 1. Sequence comparison of an orthologous region between japonica Nipponbare and indica Guangluai 4. The region is approximately 492 kb in Nipponbare, from 31,616,657 to 32,108,476 bp on chromosome 4 (TIGR pseudomolecule 5.0), and approximately 394 kb in Guangluai 4. Light gray shading indicates the homologous regions, and the white areas show the indels of more than 100 bp. TEs are represented by bars of designated colors. All non-TE genes are indicated by dark lines with arrows. Exons are depicted as horizontal lines, and introns as the lines connecting exons.

Figure 2. Contribution of TIPs to large indels. TIPs and indels in the approximately 23-Mb orthologous regions of chromosome 4 are classified into seven groups according to size, as shown at the bottom of the histograms. A, Bars show the numbers of TIPs and indels, in black and red, respectively. The blue line indicates the proportion of TIPs among indels. B, Bars show the coverage of TIPs and indels. The blue line denotes the ratio of TIPs to indels.
Figure 3. The phylogenetic relationship of three varieties, japonica Nipponbare, indica 93-11, and indica Guangluai 4, characterized by in silico analysis of RBIPs. The first node represents the divergence between the two subspecies, while the second node denotes the radiation of ancestral indica into two gene pools, the ancestors of the 93-11 and Guangluai 4 gene pools, which are represented by the green ellipses. The dashed line represents the introgression between the two. I, II, and III are the expected patterns of RBIPs in the three varieties. Type I indicates insertions that occurred in the Nipponbare genome after the divergence between the two subspecies. Type II occurred in the common ancestor of the Guangluai 4 and 93-11 gene pools, after that divergence. Type III occurred in the Guangluai 4 gene pool, after the radiation of indica into at least two gene pools. Copy number, average length, and the ratio of solo LTRs to intact LTRs are also listed.

Figure 4. Distribution of 2,041 TIPs in the rice genome. Individual transposon insertions are represented by horizontal lines, and different kinds of transposons are shown in different colors. The light gray bars on the chromosomes indicate the positions of centromeres. Detailed information for each TIP is listed in Supplemental Table S2.

Figure 5. Densities of SNPs, TIPs, and repeats on rice chromosome 5 (approximately 30 Mb). At top, the azure bars indicate the numbers of TIPs per megabase. The red line shows the SNP rate (per kilobase) after subtraction of repetitive regions, and the gray line shows the percentage of repetitive DNA. The distribution of TIPs on chromosome 5 is shown at bottom.

Figure 6. Examples of genetic variation types associated with TIPs. A, Two gene fragments were separated by a Dasheng insertion into the coding region of XIP-I. B, The insertion of a copia in the 3′ UTR of OsWRKY8 created an alternative isoform in Nipponbare, a chimeric transcript possessing three additional exons from the TE. C, The first intron of a homolog of AtPUP11 was inserted by a hAT transposon, resulting in the loss of its original first exon and the gain of an additional exon derived from the TE. D, Transposition of a large gypsy into a rice glucosyltransferase gene generated an intron of 15 kb. E, A 5′ upstream region of a gene was inserted by a hAT transposon. Homologous regions are indicated by light gray shading. Horizontal lines and arrows over/below the genomic region represent the corresponding Fl-cDNA or EST. LTR and internal sequences of transposons, TSDs, and coding regions are indicated by designated colors. The transcripts of indica in E are not found in the rice EST or Fl-cDNA database.

Figure 7.
Phylogenetic and GeneChip expression analyses of the XIP-I gene. A, Phylogenetic relationship of wheat XIP-I and its homologous proteins in rice. Both wheat XIP-I and indica XIP-I are highlighted in red. OsXIP, riceXIP, RIXI, OsChib3a, and OsChib3b (Park et al., 2002) are proteins that have been identified and studied in rice. B, The small colored boxes represent the positions of the probes in the two probe sets of the Affymetrix GeneChip. The probes of OsAffx.27816.1.S1_at and OsAffx.27815.1.S1_s_at are shown in red and green, respectively. The insertion position in the gene is indicated with a black triangle. The transcription initiation site is also indicated. C and D, Plot and correlation of hybridization intensity between OsAffx.27816.1.S1_at and OsAffx.27815.1.S1_s_at in different samples from Nipponbare (C) or IR64 (D). The horizontal axis shows the intensity of the probe set OsAffx.27816.1.S1_at calculated from the hybridization intensities of its 11 probes, while the vertical axis shows the intensity of OsAffx.27815.1.S1_s_at. Pearson's correlation coefficient was used in the linear correlation analysis. The significance of the slope of the regression lines was determined from the t statistic.

Figure 8. Detection of the LTR insertion in the XIP-I gene using PCR. A, Small arrows indicate the locations of the primers used in PCR amplification. The expected sizes of PCR products in the different patterns (insertion or no insertion) are also shown. B, RT-PCR analysis of XIP-I gene expression in japonica Nipponbare and indica Guangluai 4. C, Detection of the insertion in the genomic DNA of 10 indica, 11 japonica, and three wild rice varieties.

Table I. Number of indels and TIPs between the two rice varieties. (a, insert regions of >100 bp; b, TE insertions of >100 bp; c, the proportion of the coverage of insert regions attributable to TE insertions.)

Table II. Comparison of polymorphic transposon insertions between the two rice varieties.

Table III. Summary of in silico analysis of RBIP patterns in three varieties. ND, not determined. (a, see type I in Figure 3; b, see type II in Figure 3; c, see type III in Figure 3.)

Table IV. TIPs found in the expressed gene region. (a, the number was obtained by multiplying the proportion by the total TIPs estimated; b, the proportion was derived from the two subsets of TIPs identified; c, this estimation relies on the 23-Mb collinear regions of chromosome 4, which have continuous BAC-based sequences.)

Table V. A partial list of identified genetic variations induced by TIPs (continued across pages).
2018-04-03T00:33:35.500Z
2008-07-23T00:00:00.000
{ "year": 2008, "sha1": "ed04e4fa68068c393411f4cc14bd5c768bedddb7", "oa_license": "CCBY", "oa_url": "https://academic.oup.com/plphys/article-pdf/148/1/25/37095158/plphys_v148_1_25.pdf", "oa_status": "HYBRID", "pdf_src": "ScienceParsePlus", "pdf_hash": "11c930f2fd5cbe3d8249b1ad2f7e51594278884c", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
244042705
pes2o/s2orc
v3-fos-license
Retroperitoneoscopic lumbar sympathectomy for the treatment of primary plantar hyperhidrosis

Background Primary plantar hyperhidrosis (PPH) is an idiopathic disease characterized by excessive sweating of the feet, which leads to significant disturbance in patients' private and professional daily lives. The aim of this study is to present the safety, efficacy, and procedures of retroperitoneoscopic lumbar sympathectomy (RLS) for the treatment of PPH. Methods RLS was performed 60 times in 30 patients (18 men, 12 women) with PPH in our institution from May 2019 to October 2020. All procedures were carried out by laparoscopy with a retroperitoneal approach. Clinical data including patient demographics and perioperative and postoperative outcomes were evaluated. Recurrence of symptoms and any adverse effects of surgery were evaluated after 7 to 30 days in the outpatient clinic, and thereafter every 6 months. Results Mean age of the patients was 33.6 (± standard deviation 10.8) years. Fourteen patients had previously been treated with medical therapy and fifteen with endoscopic thoracic sympathectomy (ETS). The mean preoperative quality of life (QoL) score was 91.8 ("very bad"), but the QoL score at 12 months after surgery decreased to 29.1 ("much better"). There was no serious postoperative complication. During the mean follow-up period of 22 months, no compensatory sweating was observed. Conclusions RLS can be a safe and effective surgical treatment for severe PPH, especially for patients with persistent plantar sweating even after conservative management and ETS. RLS can also be offered as a feasible surgical treatment for PPH by surgeons who are familiar with the anatomy of the retroperitoneal space.

Background Primary plantar hyperhidrosis (PPH) is characterized by excessive sweating of the feet. The etiology of the disease is not fully understood yet, but stimulation of sympathetic nerves appears to activate sweating from the eccrine sweat glands. Disease onset is usually at a young age. PPH is known to have a genetic component, and its estimated prevalence is 1-16% in the general population [1,2]. In many cases, excessive secretion of sweat from the hands and feet causes moderate to severe disturbance in daily social life. Patients also often present with bromhidrosis, a condition caused by the bacterial decomposition of sweat, which results in an unpleasant sweat odor. These problems can lead to significant impairment of quality of life (QoL). Conservative treatments used in the dermatology department consist of the application of anticholinergics, aluminum chloride solutions, iontophoresis, and subdermal injection of botulinum toxin [3][4][5]. However, these therapies have the disadvantages of short-term effects and high rates of recurrence [6,7]. For the hands, if medical therapies are not satisfactory, endoscopic thoracic sympathectomy (ETS) is available as a surgical treatment for primary hyperhidrosis [8,9]. Excessive sweating of the hands can be eliminated with this procedure in more than 90% of patients. However, compensatory sweating develops and plantar hyperhidrosis remains unchanged after ETS in up to more than 50% of patients [9,10]. In plantar hyperhidrosis, lumbar sympathectomy can be an effective surgical treatment for eliminating excessive sweating of the feet. However, there are not many reports regarding the safety and efficacy of lumbar sympathectomy for plantar hyperhidrosis in Asian countries.
Although thoracic sympathectomy was introduced as a surgical treatment for palmar hyperhidrosis long ago and has been reported extensively in systematic investigations, lumbar sympathectomy for plantar hyperhidrosis has come into clinical use only recently, owing to earlier concerns about postoperative complications such as compensatory sweating and retrograde ejaculation [10,11]. Although high postoperative patient satisfaction has been demonstrated by European general surgery groups [12][13][14], the outcomes of lumbar sympathectomy have not been reported much in Asian groups. The objective of this study is to present the safety, efficacy, and operative procedures of retroperitoneoscopic lumbar sympathectomy (RLS) in an Asian group.

Patient selection
A prospective study of all patients who underwent RLS in our department from May 2019 to October 2020 was performed. All patients were referred from the chest surgery department because of compensatory plantar sweating that persisted after medical treatment or endoscopic thoracic sympathectomy. Medical records including patients' demographics and previous medical history were evaluated. Preoperative and postoperative plantar sweating symptoms and quality of life were evaluated in the outpatient clinic using the standardized hyperhidrosis quality of life (QoL) questionnaire developed by de Campos et al. [15], at 1 month after surgery and every 6 months thereafter.

Postoperative outcomes evaluation
Preoperative and postoperative plantar skin temperatures were measured using an infrared thermal camera (FLIR C5™, FLIR® Systems, Inc.) to track changes in plantar skin temperature. Postoperative skin temperature was measured on the day after surgery, and every 6 months thereafter in the clinic.

Operation procedure
All operations were performed bilaterally under general endotracheal anesthesia. The patient is placed in the lateral position. After hyperflexion of the table, we marked the projection of the third lumbar vertebral level onto the flank skin of the patient under C-arm fluoroscopic view (Fig. 1A). After routine draping, a skin incision about 1 cm in length was made 2 cm medial and superior to the anterior superior iliac spine. After the skin incision, the external oblique muscle, internal oblique muscle, and transversus abdominis were dissected consecutively by splitting with a hemostat. The transversalis fascia was separated from the peritoneum. The retroperitoneal space was opened with blunt finger dissection. During this procedure, we made sure that there was no peritoneal opening. A round balloon device (AutoSuture™, COVIDIEN) was inserted through the incision site and inflated to 500 cc to dilate the retroperitoneal space. A 10 mm blunt-tip balloon trocar was inserted into the dilated space and its balloon inflated with 30 cc of air. CO2 gas was insufflated to create a pneumoretroperitoneum of up to 15 mmHg. The camera was inserted through the first trocar. We used two additional ports: the 2nd port (5 mm) was inserted at the midclavicular line between the iliac crest and the ribs, and the 3rd port (11 mm) was placed at the posterior axillary line at the same level (Fig. 1B). Initially, we confirmed the identification of the L3 level with the C-arm view after dissection of the L3 nerve. Later, with accumulated experience, we inserted the 2nd and 3rd trocars at the L3 level identified by the C-arm view and approached vertically to find the L3 nerve directly. After exposure of the retroperitoneal space, Gerota's fascia was dissected.
On the right side, the sympathetic chain lies posterior to the inferior vena cava, which was retracted anteriorly (Fig. 2A). On the left side, the sympathetic chain lies posterior and lateral to the infrarenal aorta. Pushing all structures anteriorly, the retroperitoneal fat tissue was dissected. The sympathetic nerve was identified at the medial aspect of the psoas muscle (Fig. 2B). The upper and lower 2 cm margins of the sympathetic nerve were clipped using titanium clips (Ligaforce), and the segment was resected with Metzenbaum scissors. After repositioning the patient to the contralateral side, the same procedure was performed. The muscle layers and subcutaneous tissue were closed layer by layer.

Fig. 1 The 1st camera port (balloon trocar) was inserted medial and superior to the anterior superior iliac spine (ASIS). The 2nd port was inserted at the midclavicular line between the iliac crest and the lowest rib. The 3rd port was placed at the posterior axillary line.

Fig. 2 A, Exposure of the L3 sympathetic nerve during RLS. The sympathetic nerve is identified at the medial aspect of the psoas muscle. On the right side, the sympathetic chain lies posterior to the inferior vena cava. B, On the left side, the sympathetic chain lies posterior and lateral to the infrarenal aorta.

Statistical analysis
Data were analyzed using IBM SPSS version 24.0 (IBM, Chicago, Illinois, USA). Values are presented as mean ± standard deviation. Continuous variables between groups were compared using paired t-tests. Multiple linear regression was used to determine the factors influencing postoperative patient satisfaction and QoL score; a sketch of this analysis is shown below. A p-value of less than 0.05 was considered statistically significant.
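To make the analysis above concrete, the following is a minimal Python sketch of the two tests named in the Statistical analysis section (a paired t-test on pre- versus postoperative QoL scores, and a multiple linear regression of postoperative QoL on age, sex, and BMI). The column names and six example rows are hypothetical placeholders, not study data, and the study itself used SPSS rather than this code.

```python
# Hypothetical sketch of the statistical analysis: paired t-test plus
# multiple linear regression. Values below are illustrative only.
import pandas as pd
from scipy import stats
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "qol_pre":  [92, 88, 95, 90, 93, 89],   # scale: 20 = excellent, 100 = very bad
    "qol_post": [28, 31, 27, 30, 29, 32],
    "age":      [25, 41, 33, 29, 52, 38],
    "sex":      ["M", "F", "M", "M", "F", "F"],
    "bmi":      [21.5, 23.0, 22.1, 24.3, 22.8, 21.9],
})

# Paired t-test: did QoL scores change significantly after surgery?
t_stat, p_value = stats.ttest_rel(df["qol_pre"], df["qol_post"])
print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")

# Multiple linear regression: which factors influence postoperative QoL?
model = smf.ols("qol_post ~ age + C(sex) + bmi", data=df).fit()
print(model.summary())
```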
Ethical considerations
This study was approved by the Institutional Review Board of Seoul St. Mary's Hospital, The Catholic University College of Medicine (ethics approval number: KC20RISI0086), in accordance with the recommendations of the Declaration of Helsinki. Informed consent was obtained from all participants enrolled in the study.

Patients' demographics
RLS was performed 60 times in a total of 30 patients (18 men, 12 women) in our institution from May 2019 to October 2020 by a single surgeon. Mean age was 33.6 years (range, 18-66), and twenty-two patients (73.3%) were less than 40 years old. Mean body mass index (BMI) was 22.9. Among them, 14 patients (43.7%) had received medical therapies before, while 15 patients (46.8%) had previously undergone ETS. Three patients (10%) had a history of percutaneous alcohol injection therapy, but plantar hyperhidrosis persisted. Patients' demographics and characteristics are summarized in Table 1. Among all enrolled patients, eleven underwent simultaneous RLS and ETS with the chest surgery department, due to compensatory sweating of the trunk and feet. Nine patients underwent RLS alone.

Perioperative outcomes
All procedures were carried out by laparoscopic surgery with a retroperitoneal approach, and there was no conversion to open surgery. Mean operation time was 65 min including position change, and operation time decreased with accumulating experience. Mean estimated blood loss was 18 ml, which was minimal. Twenty-five patients (83.3%) were discharged from the hospital within 24 h postoperatively. Seven patients (16.7%) were discharged the day after surgery because of moderate wound pain, but no other specific postoperative complication was observed.

Postoperative sweating symptom measurement
All patients experienced immediate improvement of plantar sweating postoperatively. Mean preoperative quality of life (QoL) score was 91.8 (on a scale from 20, excellent, to 100, very bad), but the late postoperative (12 months) QoL score decreased to 29.1 (on the improvement-of-QoL scale from 20, much better, to 100, much worse) (Table 2). One case of postoperative priapism occurred, but it resolved spontaneously. We also confirmed an increase in postoperative plantar skin temperature compared with the preoperative value, reflecting decreased plantar perspiration after sympathectomy. Patients' gender was a significant factor in postoperative QoL score, whereas age and BMI did not affect the QoL score. The female group showed higher QoL scores than the male group both preoperatively and postoperatively (Tables 2, 3).

Discussion
Primary focal hyperhidrosis is defined as excessive sweating from focal parts of the body; commonly involved sites are the face, soles, axillae, and inguinal region. Primary hyperhidrosis is not caused by other underlying conditions, whereas secondary hyperhidrosis may be focal or generalized and is caused by an underlying medical disease or medication use [1,2]. Disease onset is usually from childhood to adolescence. The disease is not life-threatening; however, bromhidrosis can cause a bad smell in the involved area, and excessive sweating from the hands and feet interferes with quality of life. The first-line treatment is conservative therapy, such as anticholinergics, topical aluminum chloride solutions, and iontophoresis. These can improve sweating symptoms for as long as they are used, but they require continuous use, which most patients do not maintain. Intradermal injection of botulinum toxin, which has been used successfully for axillary hyperhidrosis, can also be tried [3][4][5]. However, injections into the sole of the foot are very painful, and local anesthesia of the hands and feet is difficult to induce with anesthetic cream. In addition, the effect of botulinum toxin is limited to a few months [6]. Percutaneous alcohol injection is also applied to treat PPH. It shows good responses after treatment but has the disadvantages of moderate rates of recurrence and complications that include complex regional pain syndromes [7]. Permanent cure can be achieved by surgical treatment. Endoscopic thoracic sympathectomy (ETS) is available as a surgical treatment for primary palmar hyperhidrosis and is effective for treating palmar sweating in more than 90% of patients [8,9,16,17]. However, plantar hyperhidrosis and compensatory sweating remain even after ETS in up to 28-70% of patients [18,19]. For plantar hyperhidrosis that persists even after ETS, endoscopic lumbar sympathectomy (ELS) can be an effective surgical treatment. ELS has been increasingly accepted for patients with severe plantar hyperhidrosis over the last few years in European countries, as reported by Rieger et al. [24]. However, performance of ELS has not yet been reported from a urology department, especially in an Asian group. Compensatory sweating is the main reason for dissatisfaction after ETS. Severe compensatory sweating after ETS can occur in about 3-5% of patients [11]. After lumbar sympathectomy alone, however, severe compensatory sweating is rare [22]. In our study, 11 patients underwent simultaneous ETS and RLS, and plantar hyperhidrosis improved immediately after surgery. Potential sexual dysfunction is a worrisome postoperative complication of lumbar sympathectomy in male patients [20,25].
Erection is mainly controlled by parasympathetic nerves, whereas ejaculation is controlled by sympathetic nerve stimulation. The lowest thoracic (T10) and first lumbar (L1) ganglia play key roles in ejaculation [26]. In similar studies, Rieger et al. reported ejaculation disorders after lumbar sympathectomy in 6-54% of men, but these were usually observed when the upper lumbar ganglion was resected during sympathectomy, especially bilaterally [22,23]. The incidence of ejaculation disorders after RLS is low, which indicates that permanent sexual function disorders are not common after lumbar sympathectomy at the level of the third or fourth lumbar vertebral body with resection of the sympathetic trunk. Accordingly, male patients should be informed before surgery of possible adverse effects on sexual function, although these adverse effects are uncommon and self-limiting according to up-to-date postoperative reports. Theoretically, the secretory function of the eccrine glands is controlled by the sympathetic nervous system. The preganglionic neurons responsible for the innervation of the eccrine sweat glands of the feet originate from the lateral horn of the lower thoracic and upper lumbar spinal cord. The nerve fibers pass through the white rami communicantes to the sympathetic trunk, where they run caudally in interganglionic branches and are switched to postganglionic neurons at the level of the sympathetic trunk ganglia. Postganglionic neurons travel through the grey rami communicantes to spinal nerves L4 to S3, over which they reach the peripheral portion of the sweat glands of the feet. Therefore, resection of the sympathetic trunk from the upper segment of the third lumbar vertebral body to the lower edge of the fourth lumbar vertebral body blocks sympathetic conduction to the spinal nerves L4-S3, thereby eliminating sweat secretion of the feet [27,28]. Determining the appropriate extent of sympathetic ganglion resection remains an open question. Cheng et al. reported the recurrence rate of compensatory sweating after sympathectomy; the results varied according to the type of sympathectomy and the extent of nerve resection [29]. In previously published studies, persistent skin moisture was more common after interganglionic resection. We mainly resected ganglionic segments of the lumbar sympathetic nerve, and complete anhidrosis was not common. Most patients showed moderate improvement in sweating postoperatively with a satisfactory quality of life. The retroperitoneal approach facilitated efficient identification of the lumbar sympathetic chain for urologists, who are familiar with surgery of retroperitoneal organs such as the kidney, ureter, and bladder. The psoas muscle, an important landmark for finding the sympathetic nerve chain, is one of the structures composing the retroperitoneal space; adjacent structures, including the ureter, inferior vena cava, and Gerota's fascia, also compose the retroperitoneal anatomy. Another advantage of our approach is that port placement at the L3 level provided accurate identification of the sympathetic ganglion and a direct approach to it. This method also reduced unnecessary damage to adjacent tissue. In addition, the minimally invasive retroperitoneoscopic approach resulted in favorable perioperative and postoperative outcomes, including minimal blood loss, early mobilization, and early resumption of oral intake. All patients were discharged from the hospital within 24-48 h postoperatively.
Postoperative wound pain was minimal because the muscle layers were dissected with a hemostat rather than incised. None of our patients required intravenous analgesia postoperatively. Additionally, infrared thermography is based on the emission of infrared radiation from an object or region, projected through an infrared thermal camera [30]. This instrument makes it possible to detect the heat distribution on an object's surface and, at the same time, to measure peripheral temperature. Studies using infrared thermography have been reported in various medical fields, including the musculoskeletal and vascular specialties, obstetrics, gynecology, and nephrology [31][32][33][34]. We applied infrared thermography to measure the difference between preoperative and postoperative plantar temperature to strengthen our results. Infrared thermography can be a non-invasive and objective tool to evaluate the efficacy of sympathectomy, because decreased plantar perspiration leads to an increase in skin temperature. The optimal change in plantar temperature after sympathectomy has not been strictly established yet, but it is notable that most patients showed satisfactory symptom scores in the clinic, with an increase in plantar skin temperature, during the follow-up period. Several limitations are present in our study. First, the number of enrolled patients is small, and the postoperative follow-up period is intermediate. Second, the standard for the optimal change in plantar temperature after sympathectomy has not been fully established. Further studies with larger numbers of cases and long-term postoperative follow-up will be required. Although retroperitoneoscopic lumbar sympathectomy was performed in the urology department here, it could also be adopted in general surgery departments, because the retroperitoneal approach facilitates the operation for urologists and other surgeons who are familiar with retroperitoneal surgery.
2021-11-13T08:40:42.168Z
2021-11-12T00:00:00.000
{ "year": 2021, "sha1": "1f0406fd6873d36b4da53c28600329073dd5d64d", "oa_license": "CCBY", "oa_url": "https://bmcsurg.biomedcentral.com/track/pdf/10.1186/s12893-021-01393-y", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "1f0406fd6873d36b4da53c28600329073dd5d64d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
258955253
pes2o/s2orc
v3-fos-license
Nore1 inhibits age-associated myeloid lineage skewing and clonal hematopoiesis but facilitates termination of emergency (stress) granulopoiesis

Age-associated bone marrow changes include myeloid skewing and mutations that lead to clonal hematopoiesis. Molecular mechanisms for these events are ill defined, but decreased expression of Irf8/Icsbp (interferon regulatory factor 8/interferon consensus sequence binding protein) in aging hematopoietic stem cells may contribute. Irf8 functions as a leukemia suppressor for chronic myeloid leukemia, and young Irf8 −/− mice have neutrophilia with progression to acute myeloid leukemia (AML) with aging. Irf8 is also required to terminate emergency granulopoiesis during the innate immune response, suggesting this may be the physiologic counterpart to leukemia suppression by this transcription factor. Identifying Irf8 effectors may define mediators of both events and thus contributors to age-related bone marrow disorders. In this study, we identified RASSF5 (encoding Nore1) as an Irf8 target gene and investigated the role of Nore1 in hematopoiesis. We found Irf8 activates RASSF5 transcription and increases Nore1a expression during emergency granulopoiesis. Similar to Irf8 −/− mice, we found that young Rassf5 −/− mice had increased neutrophils and progressed to AML with aging. We identified enhanced DNA damage, excess clonal hematopoiesis, and a distinct mutation profile in hematopoietic stem cells from aging Rassf5 −/− mice compared with wildtype. We found sustained emergency granulopoiesis in Rassf5 −/− mice, with repeated episodes accelerating AML, also similar to Irf8 −/− mice.
Identifying Nore1a downstream from Irf8 defines a pathway involved in leukemia suppression and the innate immune response and suggests a novel molecular mechanism contributing to age-related clonal myeloid disorders. Myeloid skewing and hematopoietic stem cell (HSC) expansion are found in the bone marrow of some aging humans and in mice (1). In human subjects, this may be associated with mutations in various leukemia-associated genes, defining "clonal hematopoiesis of indeterminate potential" (CHIP) if mutations are present with a variant allelic frequency (VAF) of greater than 2% (1). Molecular mechanisms for such bone marrow changes are undefined, but recent transcriptome profiling of HSCs from aging humans or mice suggests possibilities (2)(3)(4)(5). For example, these studies defined an age-related decrease in expression of interferon regulatory factor 8 (Irf8) (also referred to as interferon consensus sequence binding protein [Icsbp]). This is of interest since the net effect of this transcription factor is to inhibit proliferation, enhance apoptosis, facilitate DNA repair, and modulate phagocyte effector functions during myeloid differentiation (6)(7)(8)(9)(10)(11)(12). Relatively decreased expression in the bone marrow of subjects with chronic myeloid leukemia (CML) compared with nonleukemic subjects suggests that Irf8 functions as a leukemia suppressor (13). In CML, expression of Irf8 increases with remission, decreases with relapse, and is lowest during progression to myeloid blast crisis (BC or acute myeloid leukemia [AML]) (13). Consistent with this, leukemogenesis is delayed in mice transplanted with bone marrow cotransduced with vectors to express both Bcr-abl and Irf8 compared with recipients of bone marrow-expressing Bcr-abl alone (14,15). Irf8 −/− mice phenocopy CML with neutrophilia at a young age and development of AML over time (16,17). We found Irf8 is also required to terminate emergency (stress) granulopoiesis; the episodic process for neutrophil production during the innate immune response (18,19). Emergency granulopoiesis is mechanistically distinct from steady-state granulopoiesis, involving different cytokines and regulatory pathways (20,21). During episodes of emergency granulopoiesis, we found exaggerated and sustained neutrophilia in Irf8 −/− mice compared with wildtype (18,22). Repeated challenges accelerated progression to AML in Irf8 −/− mice but had no adverse effects on wildtype mice (18). Emergency granulopoiesis was stimulated by activation of the Nlrp3 inflammasome in these studies. In mice with Bcr-abl-induced CML, we found similar dysregulation of neutrophil production during repeated episodes of emergency granulopoiesis with acceleration of chronic phase relapse in mice in tyrosine kinase inhibitor-induced remission and enhanced progression to BC (19). The RASSF5 gene produces Nore1a and b isoforms through use of different promoters, and we identified Irf8 binding to the A promoter (24)(25)(26). Nore1a is a ubiquitously expressed protein with an N-terminal protein kinase C-conserved region (C1), a Ras association (RA) domain, and a C-terminal SARAH domain (25)(26)(27)(28). Nore1b expression is restricted to activated T cells and contains RA and SARAH domains but lacks the N-terminal C1 domain (25)(26)(27)(28). Like other RA domain family (Rassf) proteins, Nore1 is a scaffold for multiprotein complexes, influencing cell function by facilitating protein-protein interactions (29)(30)(31)(32). 
For example, Nore1 directs the Mst1, a serine/threonine kinase, to the cell membrane where it autophosphorylates and enhances activation of caspases and/or stress-activated protein kinases during Fas or tumor necrosis factor alpha (TNFα)-induced apoptosis (29,30). This suggests that RASSF5 transcription may mediate some effects of Irf8 on apoptosis. Rassf5 −/− mice lack expression of both Nore1a and b isoforms but have no obvious phenotype. Hepatocytes from these mice are resistant to TNFα or TRAIL-induced apoptosis and fail to activate Mst1 in vivo (25). In some human solid tumors, RASSF5 deletion or silencing correlates with aggressive disease (33)(34)(35)(36)(37)(38)(39)(40)(41)(42)(43)(44)(45). However, hematopoiesis in Rassf5 −/− mice was not previously studied, and implications for leukemogenesis are unknown. In the current work, we investigate Nore1 as a mediator of leukemia suppression, emergency granulopoiesis termination, and age-associated clonal hematopoiesis. Events downstream from Irf8 and Nore1a may clarify the contribution of infectious episodes to constitutive activation of inflammatory pathways and clonal hematopoiesis during aging and suggest therapeutic targets to mitigate such effects. Irf8 activates RASSF5 promoter A and enhances Nore1a expression To identify target genes that mediate effects of Irf8 (referred to as Icsbp in prior studies), we hybridized chromatin that coimmunoprecipitated from U937 myeloid leukemia cells with Irf8 to a CpG island microarray (6). U937 cells undergoing cytokine-induced differentiation were studied to identify Irf8 target genes associated with the innate immune response. We previously reported functional characterization of several genes that were identified in these studies (6)(7)(8)(9)(10). Further examination of these data identified interaction between Irf8 and a CpG island in the RASSF5A promoter (Fig. 1A). To determine if Irf8 functionally impacted the RASSF5A promoter, U937 myeloid cells were transfected with series of reporter constructs with promoter A truncations (Fig. 1B) and vectors to overexpress Irf8 or express Irf8-specific shRNAs versus relevant control vectors. We found that activity of reporter constructs with 2 kb, 250 bp, or 180 bp of promoter A was increased by Irf8 overexpression (70% increase by Irf8 with each of these constructs, p < 0.01, n = 10), but a 110 bp construct was not (p = 0.6, n = 10) (Fig. 1C). In contrast, Irf8 knockdown decreased reporter activity of the constructs to below the activity seen without knockdown (65% decrease, p < 0.01, n = 5 for Irf8-specific shRNAs versus control). This is consistent with the location of a conserved Ets/Irf consensus sequence identified between −180 and −110 bp in the human promoter. In these experiments, neither overexpression nor knockdown of Irf8 influenced activity of the control reporter vector (without RASSF5A promoter sequences), and this was subtracted as background. We generated another reporter construct with three copies of this promoter A Ets/Irf-consensus sequence linked to a minimal promoter. We found Irf8 overexpression increased, but Irf8specific shRNAs decreased, activity of this construct relative to control vectors that did not either overexpress or knockdown Irf8 (p ≤ 0.004, n = at least 5) (Fig. 1D). Since binding of Irf8 to the RASSF5A promoter was identified in studies with differentiating U937 cells, we investigated the impact of interleukin 1β (IL1β) on this cis element. 
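As an aside on the reporter-gene arithmetic used throughout these experiments, the following is a minimal Python sketch of how dual-luciferase data of this kind are typically normalized: firefly signal is divided by the Renilla signal (the transfection-efficiency control), the empty-vector background is subtracted, and activity is expressed relative to the control-vector condition. All luminescence values and the helper function are hypothetical illustrations, not data or code from the study.

```python
# Hypothetical dual-luciferase normalization sketch (arbitrary units).
def normalized_activity(firefly, renilla, empty_firefly, empty_renilla):
    """Background-subtracted, Renilla-normalized reporter activity."""
    return firefly / renilla - empty_firefly / empty_renilla

# Illustrative raw readings for one reporter construct.
control = normalized_activity(12000, 4000, 1500, 4100)   # reporter + control vector
irf8_oe = normalized_activity(21000, 4200, 1500, 4100)   # reporter + Irf8 overexpression

fold_change = irf8_oe / control
print(f"fold activation by Irf8 overexpression: {fold_change:.2f}")
```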
IL1β is the major mediator of Nlrp3 inflammasome-induced emergency granulopoiesis (22,46). We found that cis element activity increased significantly with IL1β, with or without Irf8 overexpression or knockdown (60% increase under all conditions, p ≤ 0.01, n = at least 5 for comparison to control vectors without Irf8 overexpression or knockdown). Control experiments were performed with the minimal promoter/reporter vector (without the RASSF5A cis element). Inclusion of the putative cis element increased reporter expression above the control vector alone. Neither Irf8 overexpression nor Irf8 knockdown altered expression of the control minimal promoter/reporter construct, and this was subtracted as background. As a control, we also investigated the effect of Irf8 on reporter constructs with the proximal 2.0 kb or 580 bp of RASSF5 promoter B. Activity of promoter B constructs was significantly less than A, consistent with restriction of Nore1b expression to T lymphocytes (Fig. 1C). Compared with control expression vector, neither Irf8 overexpression (p = 0.3 and p = 0.5, n = 4 for the 2.0 kb and 580 bp constructs, respectively) nor knockdown of Irf8 (p = 0.6 and p = 0.4, n = 4 for the 2.0 kb and 580 bp construct) influenced activity of RASSF5B constructs. We next investigated the impact of Irf8 on Nore1 expression in vivo. Since Irf8 is required to terminate emergency granulopoiesis, we determined if it influenced expression of Nore1a during this process. For these studies, we injected Irf8 −/− or wildtype mice with alum to induce emergency granulopoiesis (via Nlrp3 inflammasome activation) or saline as a steady-state control (18,19,22). Bone marrow was harvested 2 weeks later, representing the peak abundance of neutrophils in the circulation and bone marrow in alum-injected wildtype mice. We quantified Nore1 mRNA in Lin − or Lin + bone marrow cells with primers specific to Nore1a or Nore1b. In both Irf8 −/− and wildtype bone marrow, we found Nore1a mRNA was more abundant in Lin − versus Lin + cells, with or without alum injection (Fig. 2A). At steady state, expression of Nore1a mRNA was equivalent in the two genotypes in Lin − or Lin + cells (p ≥ 0.1, n = 3 for all comparisons). In wildtype Lin − cells, Nore1a mRNA increased significantly 2 weeks after alum injection (2-fold, p = 0.04, n = 3), but this increase was not observed in Irf8 −/− Lin − cells (p = 0.2, n = 3) (Fig. 2A). Therefore, Nore1a expression was significantly impaired in Irf8 −/− Lin − cells relative to wildtype during emergency granulopoiesis (p = 0.005, n = 3).

Figure 1. A, a CpG island in RASSF5 promoter A was identified by a chromatin immunoprecipitation-based screen with an antibody to Irf8 (also referred to as Icsbp) (6). The RASSF5A promoter was analyzed for potential Irf8-binding sequences, and a conserved Ets/Irf consensus is indicated in red. Primers for chromatin immunoprecipitation studies are in green. Exon 1 is indicated in gray, and the ATG codon for translation start is underlined in red. B, cartoon depiction of RASSF5 promoter A sequences analyzed in reporter gene assays. Constructs are indicated by purple lines and the conserved Ets/Irf consensus by a purple X. C, Irf8 activates RASSF5 promoter A constructs with at least 180 bp of 5′ flank. U937 cells were transfected with RASSF5 promoter A/luciferase reporter constructs or RASSF5 promoter B/reporter constructs. Cells were cotransduced with vectors to overexpress or knockdown Irf8 (or with relevant control vectors).
Each construct was analyzed in at least five independent transfection experiments. Statistical significance is indicated by * (p = 0.002), ** (p = 0.003), *** (p = 0.0004), # (p = 0.0002), ## (p = 0.0005), or ### (p = 0.0003). Error bars represent SD, and open circles represent individual data points. The empty reporter vector was not influenced by overexpression or knockdown of Irf8, and this minimal activity was subtracted as background. D, Irf8 activates a conserved Ets/Irf consensus sequence between 180 bp and 110 bp in RASSF5 promoter A. U937 cells were transfected with a reporter construct with a minimal promoter and three copies of the Ets/Irf consensus sequence from the A promoter and vectors to overexpress or knockdown Irf8. Some cells were stimulated with interleukin 1β (IL1β) for 24 h prior to analysis. Each construct was analyzed in at least five independent experiments. Statistical significance for panels in this figure is indicated by * (p = 0.004), ** (p = 0.005), ***, # (p = 0.004), ## (p = 0.0004), or ### (p = 0.01). Error bars represent SD, and open circles represent individual data points. The empty reporter vector was not influenced by overexpression or knockdown of Irf8, or treatment with IL1β and this minimal activity was subtracted as background. Icsbp, interferon consensus sequence binding protein; Irf8, interferon regulatory factor 8. In control experiments, we determined that Nore1b was expressed at lower levels compared with Nore1a in Lin − and Lin + bone marrow cells. Nore1b was not altered by Irf8 knockout or alum injection, consistent with known T cellrestricted expression ( Fig. 2A). For these studies, relative expression of Nore1a versus Nore1b was determined by normalization to a known quantity of plasmid containing the Nore1a complementary DNA (cDNA). We also investigated Nore1 expression in murine bone marrow myeloid progenitor cells (Lin − ckit + ) after in vivo stimulation with IL1β. We found less Nore1a protein in Irf8 −/− cells compared with wildtype cells (by Western blot), consistent with mRNA expression. Isoforms were identified by size, and these cells did not express Nore1b (Fig. 2B). To verify interaction of Irf8 with RASSF5 promoter A in murine bone marrow, we performed chromatin immunoprecipitation with Lin − bone marrow cells from wildtype or Irf8 −/− mice. Since Irf8 binds to a composite Ets/Irf consensus sequences in many promoters, we quantified coprecipitating chromatin with primers flanking the conserved Ets/Irf composite in promoter A (consensus indicated in red and primers highlighted in green) (Fig. 1A). We found enrichment of this region in chromatin that immunoprecipitated with antibody to Irf8 (p = 0.001, n = 3 versus control antibody) or trimethyl K4histone 3 (p = 0.002, n = 3 versus control antibody), consistent with activation by Irf8 (Fig. 2C) (47). Irf8 −/− murine bone marrow cells were a negative control for this experiment. Decreased Irf8 expression in aging human and murine bone marrow HSCs was previously documented (2, 3). Therefore, we determined the impact of aging on expression of Irf8 and select target genes in Lin − bone marrow cells from young versus aging wildtype mice (<30 weeks versus >40 weeks) (Fig. 2D). We found decreased Irf8 in aging mice, associated with decreased expression of Irf8 activation target genes, Nore1a and Fancc (p < 0.02, n = 3 for all three genes) (46). 
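For readers unfamiliar with the "standard curve" quantification used in the qPCR experiments above, the sketch below shows the arithmetic under the assumption that serial dilutions of a plasmid standard (such as the Nore1a cDNA plasmid mentioned above) are run alongside the samples: a line is fit to log10(copies) versus Ct, unknowns are interpolated from that line, and the target is normalized to housekeeping controls. All Ct values and copy numbers are illustrative, not data from the study.

```python
# Hypothetical standard-curve qPCR quantification sketch.
import numpy as np

std_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])      # plasmid standard dilutions
std_ct     = np.array([14.8, 18.2, 21.5, 24.9, 28.3]) # their measured Ct values

# Fit log10(copies) as a linear function of Ct.
slope, intercept = np.polyfit(std_ct, np.log10(std_copies), 1)

def copies_from_ct(ct):
    """Interpolate absolute copy number from a Ct value."""
    return 10 ** (slope * ct + intercept)

nore1a = copies_from_ct(22.1)                      # hypothetical target Ct
actin, gapdh = copies_from_ct(16.4), copies_from_ct(17.0)
housekeeping = np.sqrt(actin * gapdh)              # geometric mean of the two controls

print(f"Nore1a relative expression: {nore1a / housekeeping:.3e}")
```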
We previously determined that Irf8-enhanced expression of Fanconi C during emergency granulopoiesis is essential to handle genotoxic stress of this process (46). To define a mechanism for these differences, we studied apoptosis in bone marrow populations from young versus aged Rassf5 −/− and wildtype mice, as defined previously. We found that LSK and Lin + Gr1 + cells from aged Rassf5 −/− mice were apoptosis resistant compared with cells from similarly aged wildtype mice (p = 0.015, n = 6 for LSK cells, p < 0.001, n = 6 for Lin + Gr1 + cells) (Fig. 4B). In young mice, differential apoptosis sensitivity of these bone marrow populations between the two genotypes was less marked (not shown).

Rassf5 −/− mice develop transplantable AML with aging
Work in our laboratory and others determined that 80% of Irf8 −/− mice develop AML (AML or BC) by 36 weeks of age (in a low pathogen environment; earlier in standard housing) (16,18). If impaired Nore1 expression contributes to this, Rassf5 −/− mice might also have a tendency for leukemogenesis. To study this, we monitored aging Rassf5 +/− and Rassf5 −/− mice for circulating myeloid blasts with wildtype littermates as controls. Mice with ≥20% circulating myeloid blasts were sacrificed for further analysis. For these studies, development of AML was defined as ≥20% myeloid blasts by histology and flow cytometry in peripheral blood and bone marrow. To further characterize the disorder that developed in aging Rassf5 −/− mice, we transplanted sublethally irradiated wildtype mice with bone marrow from Rassf5 −/− mice with AML (defined as above). Rassf5 −/− donor mice were all older than 36 weeks, and these mice had expansion of the bone marrow LSK population compared with wildtype control littermates of the same age (Fig. 6A). Consistent with transplantable AML, myeloid blasts rapidly appeared in the circulation, bone marrow, and spleen (Fig. 6, B and C) of all recipients. LSK cells were significantly expanded in bone marrow recipients compared with wildtype mice, also consistent with this conclusion (Fig. 6A).

Figure 5. Rassf5 −/− mice develop acute myeloid leukemia with aging. Aging Rassf5 −/−, Rassf5 +/−, and wildtype mice were compared for abundance of bone marrow myeloid blasts and spleen size. A, the incidence of acute myeloid leukemia (AML) is greater in Rassf5 −/− mice compared with Rassf5 +/− mice. Peripheral blood was analyzed for white blood cell counts, and mice with >20% blasts were considered to have AML. Statistical significance determined by log-rank analysis for survival curves. B, bone marrow blasts increase with age in Rassf5 −/− mice. Significantly more myeloid blasts, as identified by histologic examination of bone marrow biopsies, were present in young and aging Rassf5 −/− mice compared with Wt. Statistically significant differences * (p = 0.02, n = 5) or ** (p = 0.001, n = 5). C, myeloid blasts accumulate in the peripheral blood, bone marrow, and spleen of Rassf5 −/− mice with aging. Bone marrow myeloid blasts indicated by *. Immunohistochemistry of sternal bone marrow reveals CD34 + Rassf5 −/− myeloid blasts and megakaryocytes (stained brown). Aging wildtype mice were controls. D, Rassf5 −/− mice with AML develop splenomegaly. Spleen weight in Rassf5 −/− mice with AML versus wildtype was determined. Statistical significance indicated by * (p = 0.01, n = 8). Error bars represent SD.

Rassf5 −/− HSCs accumulate DNA damage with aging
Development of AML in Rassf5 −/− mice suggests that an age-associated increase in apoptosis resistance of LSK cells permits accumulation of DNA damage, eventually leading to transformation. To investigate this, we analyzed the bone marrow of 40- to 45-week-old Rassf5 −/− and wildtype mice. Although none of the Rassf5 −/− mice in this experiment had overt AML, the percent of bone marrow myeloid blasts was significantly greater than in comparably aged wildtype mice (12.4% ± 1.5% versus 3.6% ± 1.0%, p = 0.04, n = 4). We assessed total Lin − bone marrow cells, LK and LSK cells from the two genotypes by flow cytometry for γH2AX staining as a marker for double-stranded DNA damage (assessing fluorescent intensity as a function of cell number) (Fig. 7A). We found significantly more DNA damage in aging Rassf5 −/− mice compared with wildtype in each of these populations by this assay (expressed as area under the curve, p ≤ 0.003 for comparison of genotypes in the three tested populations) (Fig. 7B).

As another approach to exploring mutagenesis in aging Rassf5 −/− versus wildtype control littermates, we analyzed genomic DNA from total bone marrow mononuclear cells by whole exome sequencing. For these studies, we quantified the average number of single base pair mutations, deletions, or insertions per mouse in aged cohorts. Consistent with the flow cytometry data, we identified a significantly higher incidence of mutations in aging Rassf5 −/− mice compared with comparably aged wildtype mice (p = 0.004, n = 4) (Fig. 8A). We identified a common set of genes that were mutated in both aging Rassf5 −/− and wildtype littermate cohorts and may be characteristic of aging in this strain. However, we also identified a set of mutations unique to Rassf5 −/− mice. By Gene Ontology analysis, Rassf5 −/− specific mutations were in pathways related to nucleotide base excision repair, Notch signaling, ATP metabolism, and synthesis of glycogen and glycerolipids (Fig. 8B); pathways with potential to impact leukemogenesis or sustained inflammation. In human subjects, clonal hematopoiesis of indeterminate potential is defined by a VAF of >2% for a mutation (1). Therefore, we analyzed VAF for point mutations in aging Rassf5 −/− mice or wildtype control littermates that we identified by whole exome sequencing. To ensure these mutations represented cell populations with relative expansion, we focused on clones with VAF of >20%. We identified a significant excess of mutations meeting this definition in aging Rassf5 −/− mice compared with wildtype littermate controls (p = 0.0003, n = 4) (Fig. 8C). All genes with VAF >20% in aging wildtype mice were also mutated in Rassf5 −/− mice, but there was a set of genes with variant alleles more specific to aging Rassf5 −/− mice. This included Nup98 (present in three of four Rassf5 −/− mice), Fam43a, Usp47, and Fam181a.
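The following is a minimal Python sketch of the variant allele frequency (VAF) calculation underlying the analysis above, following the paper's definition (variant allele depth divided by total depth, with a minimum read depth of 30) and the >20% threshold used to call expanded clones. The variant records here are hypothetical stand-ins for parsed VCF entries, not data from the study.

```python
# Hypothetical VAF calculation and clonal-expansion filter.
from dataclasses import dataclass

@dataclass
class Variant:
    gene: str
    alt_depth: int    # reads supporting the variant allele (AD)
    total_depth: int  # total reads covering the site (DP)

    @property
    def vaf(self) -> float:
        return self.alt_depth / self.total_depth

variants = [
    Variant("Nup98",  28, 80),
    Variant("Fam43a", 25, 90),
    Variant("Usp47",   6, 75),
]

MIN_DEPTH, CLONAL_VAF = 30, 0.20
expanded = [v for v in variants
            if v.total_depth >= MIN_DEPTH and v.vaf > CLONAL_VAF]
for v in expanded:
    print(f"{v.gene}: VAF = {v.vaf:.1%}")
```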
We analyzed mutational signatures in aging Rassf5 −/− and wildtype littermate control mice using the Catalogue of Somatic Mutations in Cancer (COSMIC) database (Fig. 8D) (49). In Rassf5 −/− mice, signatures were enriched for pathways associated with defective homologous recombination repair (SBS26 and SBS44), defective DNA mismatch repair (SBS3), and an age-related clock-like signature (SBS5). These patterns were consistent with the pathway analysis of mutations specific to aging Rassf5 −/− murine bone marrow, described above.

Nore1 is required to terminate emergency granulopoiesis
We previously demonstrated enhanced and sustained alum-induced emergency granulopoiesis in Irf8 −/− mice (18). Repetition every 4 weeks resulted in AML in 50% of these mice after two alum injections and shortened survival of Irf8 −/− mice compared with steady state (18). If Nore1 contributes to termination of emergency granulopoiesis, we anticipated a similar phenotype in Rassf5 −/− mice. Therefore, we injected Rassf5 −/− mice every 4 weeks with alum to induce emergency granulopoiesis or saline as a steady-state control. Wildtype littermates and Irf8 −/− mice were controls in these studies. Peripheral blood counts were analyzed every 2 weeks. In alum-injected wildtype mice, peripheral blood neutrophilia was maximal at 2 weeks, and steady state resumed by 4 weeks, as in our prior studies (Fig. 9A) (18,19). In contrast, neutrophils did not return to baseline abundance in Rassf5 −/− mice by 4 weeks post alum injection (p = 0.0007, n = 7 versus steady state) (Fig. 9A). Peak neutrophilia increased with each alum injection in Rassf5 −/− mice and was significantly greater than in wildtype mice by the third injection (p = 0.001, n = 7). However, starting with the first alum injection, aberrant neutrophilia was less profound in Rassf5 −/− mice compared with Irf8 −/− mice (p = 0.0004, n = 7 for the last injection). Repeated alum injection induced comparable and mild anemia in all three genotypes (p ≤ 0.02, n = 7) (Fig. 9B). Circulating monocytes were not significantly different in Rassf5 −/− mice compared with wildtype during emergency granulopoiesis, and platelet counts did not vary significantly during this experiment in any of the three genotypes (not shown).

Nore1a contributes to apoptosis resistance of Irf8 −/− bone marrow cells
In prior studies, we determined that Irf8 enhances Fas-dependent and -independent apoptosis (6,9,10). To investigate the contribution of Nore1a, we transduced Irf8 −/− murine bone marrow cells with a vector to express Nore1a or empty control vector (Fig. 11A). Since Rassf5 −/− LSK cells were relatively apoptosis resistant, we quantified apoptosis in the transduced cells.

Figure 9. Nore1 is required for termination of emergency granulopoiesis. Rassf5 −/−, Irf8 −/−, and wildtype mice were injected with alum to stimulate Nlrp3-induced emergency granulopoiesis or saline as a steady-state granulopoiesis control (injection weeks indicated in red). Peripheral blood counts were determined every 2 weeks. A, Rassf5 −/− mice exhibit progressive neutrophilia with repeated emergency granulopoiesis episodes. The increase in circulating neutrophils is enhanced and sustained in Rassf5 −/− and Irf8 −/− mice after alum injections compared with wildtype mice. The differences are more exaggerated in Irf8 −/− mice. Statistically significant differences indicated by * (p = 0.01) or ** (p = 0.0004). B, mild anemia develops in the three genotypes during repeated alum-induced emergency granulopoiesis episodes. Statistically significant differences indicated by * (p = 0.0009) or ** (p = 0.02). At least six mice per group were analyzed for the panels in this figure; error bars represent SD and open circles represent individual data points. Irf8, interferon regulatory factor 8.

Other investigators found that Mst1 activation by Nore1 facilitated Fas-induced apoptosis in hepatocytes (29,30).
Consistent with this, we found transduction of Irf8 −/− LSK cells with a Nore1a expression vector increased phospho (activated)-Mst1 but not total Mst1 (Fig. 11C). Treatment with Fas agonist slightly decreased phospho-Mst1 in Nore1a-transduced Irf8 −/− cells; however, these cells were undergoing apoptosis at a higher rate than untreated cells.

Discussion
Although Nore1a was not previously known to play a role in hematopoiesis, our current study implicated it in leukemia suppression, termination of emergency granulopoiesis, and aspects of bone marrow aging such as myeloid skewing and clonal hematopoiesis. In this work, we found that Irf8 interacted with, and activated, an Ets/Irf-binding consensus sequence in the proximal RASSF5A promoter. We also found expansion, apoptosis resistance, and mutagenesis in LSK cells from aging Rassf5 −/− murine bone marrow compared with wildtype controls. This was associated with myeloid skewing, clonal hematopoiesis, and predisposition to AML. During episodes of Nlrp3 inflammasome-induced emergency granulopoiesis, we found that Nore1a increased in an Irf8-dependent manner in hematopoietic stem and progenitor cells.

Figure 10. Episodes of emergency granulopoiesis accelerate development of acute myeloid leukemia (AML) in Rassf5 −/− mice. Rassf5 −/− and wildtype mice were injected every 4 weeks with alum to stimulate emergency granulopoiesis or saline as a steady-state control. Peripheral blood and bone marrow were analyzed for AML/BC (myeloid blasts >20%). At least six mice per group were analyzed. A, repeated episodes of emergency granulopoiesis significantly increase the number of Rassf5 −/− mice developing AML compared with steady state. Statistical significance was determined by log-rank analysis. Nonsignificant values indicated by "ns." B, circulating myeloid blasts increase at earlier time points in alum-injected Rassf5 −/− mice compared with Rassf5 −/− mice at steady state. Mice were studied at weeks 8 and 12. Statistical significance indicated by * (p = 0.03). Error bars represent SD, and open circles represent individual data points. C, Lin − Sca1 + ckit + cells in the bone marrow of alum-injected Rassf5 −/− mice were relatively expanded (p = 0.02) and apoptosis resistant (p = 0.03) compared with wildtype cells. Gr1 + bone marrow cells are not significantly different between the two genotypes (p = 0.45). Mice were analyzed at 8 weeks. BC, blast crisis.

In contrast, the RASSF5B promoter did not bind Irf8, and Nore1b did not exhibit Irf8-regulated expression in myeloid cells, consistent with the previously described lymphoid restriction of this isoform. Both Nore1a and b isoforms are disrupted in Rassf5 −/− mice, and these mice did not exhibit the age-associated decrease in circulating lymphocytes observed in wildtype mice. The latter is a topic of interest for future studies. We found that loss of either Irf8 or Nore1 enhanced neutrophil production and predisposed to accumulation of mutations with aging that favor differentiation block and AML (16). This has implications for understanding mechanisms involved in age-related myeloid skewing and CHIP in human subjects (1). It is not clear how some commonly identified CHIP-associated mutations facilitate leukemogenesis, and mechanisms for variable development of AML in subjects with the same mutation are also undefined. We hypothesize that cooperation of such leukemia-associated mutations with an age-associated decrease in Irf8, and thereby Nore1a, may define such mechanisms.
For example, we found a higher incidence of Nup98 point mutations in aging Rassf5 −/− mice compared with wildtype littermate controls. These mutations were present at a VAF that suggested clonal expansion of mutant cells. Nup98 is involved in gene transcription and plays a key role in mRNA transport from the nucleus (50). Chromosomal translocations involving NUP98 are reported in AML. Compared with Irf8 −/− mice, AML occurred in Rassf5 −/− mice after a longer lag time and with a lower incidence, even during episodes of emergency granulopoiesis. This is not surprising, since Irf8 also influences target genes involved in cytokine-induced proliferation, DNA repair, and proapoptotic mechanisms that are not attributable to Nore1. However, consistent with a contribution of decreased Nore1a to apoptosis resistance of Irf8 −/− cells, Nore1a re-expression facilitated Fas-induced and intrinsic apoptosis in HSC and myeloid progenitor cells. Apoptosis was most rapid in Irf8 −/− cells with the highest Nore1a expression level, leading to an underestimation of the effect. However, transducing Irf8 −/− cells with a Nore1a expression vector activated Mst1, a known downstream target. We found that relative apoptosis resistance of LSK cells from aging Rassf5 −/− mice was associated with enhanced accumulation of DNA damage compared with aging wildtype mice. By whole exome sequencing, we defined a mutation profile unique to aging Rassf5 −/− bone marrow versus aging wildtype littermate controls. This included impaired base excision repair and DNA mismatch repair; deficiencies anticipated to permit accumulation of DNA damage in apoptosis-resistant Rassf5 −/− cells. An increase in bone marrow neutrophils in aging Rassf5 −/− mice may contribute excess reactive oxygen species to the microenvironment via activation of the phagocyte NADPH oxidase. The combination of increased reactive oxygen species, decreased apoptosis, and impaired DNA repair would favor mutagenesis leading to clonal hematopoiesis and/or AML in aging Rassf5 −/− mice. Nore1a may also contribute to cell cycle arrest and regulate Mdm2 (32,(51)(52)(53)(54). The contribution of Nore1 to Mdm2/Tp53-induced cell cycle pause and/or apoptosis during the DNA-damage response is of interest for future study. We found that Rassf5 −/− mice were unable to efficiently terminate an emergency granulopoiesis response, similar to Irf8 −/− mice and a murine model of CML (18,19). This suggests that Nore1a enhances apoptosis in LSK cells to contribute to emergency granulopoiesis termination. Apoptosis of Rassf5 −/− Lin + Gr1 + cells was not impaired, suggesting differentiation-stage specificity in regulation of the innate immune response by Nore1a. We hypothesize that enhanced genotoxic stress during emergency granulopoiesis, coupled with apoptosis resistance of LSK cells, accelerated accumulation of mutations in Rassf5 −/− mice. This has implications for understanding the contribution of infectious challenges to clonal hematopoiesis in aging human bone marrow with decreased Irf8 expression.

Plasmid vectors and myeloid cell line transfections
Irf8 cDNA (also referred to as Icsbp) was obtained from Dr Ben Zion-Levi (Technion) and subcloned into the mammalian expression vector pcDNAamp (Stratagene). Irf8-specific shRNA and scrambled control sequences were designed using the Promega website (Promega). Prior published work documented Irf8 overexpression or knockdown in U937 cells by these vectors (6,9,10).
The Nore1a cDNA was subcloned into a pMSCVneo retroviral vector (Stratagene). Sequences from the RASSF5A or B promoters were generated by PCR from the U937 myeloid cell line. PCR products were sequenced and compared with genomic databases. RASSF5 promoter A/reporter constructs were generated by subcloning 2 kb, 250 bp, 180 bp, and 110 bp of 5 0 flank into the pGL3enhancer reporter vector (Promega). RASSF5B constructs with 2 kb and 580 bp of 5 0 flank were also generated. A minimal promoter-reporter construct was generated by Figure 12. Schematic representation of interactions between Nore1a, upstream regulators, and downstream effectors of apoptosis. Irf8 modulates apoptosis by enhancing Nore1a expression but repressing expression of Fap1 (a Fas antagonist) and Gas2 (a calpain inhibitor). In the context of apoptotic signals such as Fas and TNF-α, Nore1a interacts with downstream targets to mediate apoptosis. Irf8, interferon regulatory factor 8; TNFα, tumor necrosis factor alpha. subcloning a double-stranded oligonucleotide with three copies of −180 bp to −157 of Rassf5 promoter A into pGL3promoter reporter vector (Promega) (TTCTCTTGGGTCGT CCTTCCGCC, Ets/Irf consensus underlined). The oligonucleotide was custom synthesized by Integrated DNA Technologies. U937 myeloid leukemia cells (obtained from Dr Andrew Kraft, University of Arizona) were maintained in Dulbecco's modified Eagle's medium, 10% fetal bovine serum (FBS), and 1% penicillin-streptomycin. Cells (30 × 10 6 /ml) were transfected by electroporation with vectors to overexpress or knockdown Irf8 (or relevant control vectors), RASSF5 promoter reporter constructs (or empty reporter vector control), and a CMV/renilla-luciferase reporter (to control for transfection efficiency). Dual luciferase assays were performed per manufacturer's instructions (Promega). Reporter activity for the empty control vectors was subtracted from the activity from vectors with RASSF5 sequences as background. Chromatin immunoprecipitation U937 cells were incubated briefly in formaldehydesupplemented media followed by sonication to generate chromatin fragments of 0.5 kb. Lysates were immunoprecipitated with rabbit anti-Irf8 serum or preimmune serum (Covance, Inc), and immunoprecipitated chromatin was hybridized to a CpG island microarray, as described (6). Significant CpG island precipitation was determined by nearest neighbor normalization, as described, and was at least threefold (6). Experiments were performed in triplicate, and only genes identified by in all three independent experiments were further investigated. Chromatin immunoprecipitation was also performed with murine bone marrow cells and antibodies to Irf8, Tri-Methyl K4 Histone H3 (Cell Signaling Technology, Inc), or irrelevant antibody (immunoglobulin G isotype control). Specific precipitation of the Rassf5 promoter A sequence was determined by quantitative PCR. Primers designed to flank an Ets/ Irf-binding consensus sequence in the promoter were A forward (5 0 -ACTCCTAAATCCACGCGGC-3 0 ) and A reverse (5 0 -ATCTAGGAGCAGCGGGTAGG-3 0 ). Primers were also designed to flank a sequence from the first exon of Rassf5 and the CpG island in RASSF5 promoter B as negative controls. The experiment was performed in triplicate in three independent precipitations. Quantitative real-time PCR and Western blotting RNA was isolated using the TRIzol reagent (Invitrogen) and tested for integrity by denaturing gel electrophoresis. 
Quantitative real-time PCR and Western blotting

RNA was isolated using the TRIzol reagent (Invitrogen) and tested for integrity by denaturing gel electrophoresis. Primers were designed with Applied Biosystems software, and PCR was performed using SYBR Green according to the "standard curve" method. Results were normalized to actin and GAPDH to control for mRNA abundance. For Western blots, cells were lysed in SDS sample buffer, separated by SDS-PAGE, transferred to nitrocellulose, and serially probed with antibodies to Nore1 (Thermo Fisher Scientific), total Mst1 (Cell Signaling Technology), phospho-Mst (Cell Signaling Technology), or GAPDH (loading control). Each experiment was repeated with at least three different lysates, and a representative blot is shown.

To induce activation of the Nlrp3 inflammasome and emergency granulopoiesis, mice were injected intraperitoneally with ovalbumin/aluminum hydroxide-magnesium hydroxide (i.e., alum) or saline (as a steady-state control) every 4 weeks for 12 weeks, as described (18,22). Multiple alum injections were performed to mimic repeated infectious challenges (equivalent to several episodes of infection per year in humans). Peripheral blood counts were determined every 2 weeks using tail vein phlebotomy. Mice were randomly assigned to cohorts (seven per group) by coin flip. Average blood counts for cohorts in each genotype were not significantly different at the start of the experiment. All mice in each cohort were analyzed.

Analysis of murine peripheral blood and tissues

Peripheral blood was obtained by tail vein phlebotomy for complete blood counts by automated cell counter. Circulating myeloid blasts were identified by examination of May-Grünwald-Giemsa-stained peripheral blood smears by light microscopy (100× magnification; Zeiss Axioskop microscope, Zeiss Group). Slides from decalcified sternal bone marrow and paraffin-embedded spleen tissue were stained with hematoxylin and eosin by the Pathology Core Facility of the Lurie Cancer Center. Light microscopy was performed, and digital images were captured (100× magnification). Sternal myeloid blast counts were verified by hand counting at least 200 cells/high-power field in 2 separate high-power fields.

Murine bone marrow separation and retroviral transduction

Bone marrow mononuclear cells were obtained from the femurs of Irf8−/−, Rassf5−/−, or wildtype mice for use in these studies (18,19). Lin− cells were isolated from Lin+ cells by antibody-based magnetic bead affinity chromatography using a lineage depletion cocktail that includes antibodies to CD5 (T cell), B220 (B cell), CD11b (monocytes and neutrophils), Gr-1 (neutrophils and some monocytes), 7-4 (neutrophils), and Ter-119 (erythroid progenitors) (Miltenyi Biotec). In some experiments, Lin− cells were further selected by affinity chromatography with antibodies to Sca1 and c-kit. For other experiments, Lin+ cells were recovered and selected for the Gr1+ subpopulation. For retroviral production, 293T cells were transfected by electroporation with Nore1a/MSCVneo plasmid or control MSCVneo, supernatants were collected 48 h post transfection, and virus was titrated in NIH3T3 cells, as described (10). Cell lines were obtained from American Type Culture Collection, and all lines were validated annually by genomic fingerprinting (ATCC Whatman FTA Human STR Kit).

Whole exome sequencing and data analysis

Total bone marrow mononuclear cells were isolated from the femurs of Rassf5−/− mice or Rassf5+/+ littermates, and DNA libraries were prepared with the Illumina TruSeq Exome Library Prep Kit. After validation with Qubit and an Agilent Bioanalyzer, DNA libraries with unique barcoding indexes were pooled and hybridized to exome oligo probes to capture the exonic regions of the genome. Libraries were validated by Qubit quantification and a Bioanalyzer quality check using a high-sensitivity DNA chip. Library sequencing was conducted on an Illumina NovaSeq NGS system, generating paired-end 150 bp reads. Read quality, in FASTQ format, was evaluated using FastQC (Illumina, version 0.11.7). Trimmed reads were aligned to the mouse genome (mm10) using the Burrows-Wheeler Aligner (version 0.7.12) (54). Resulting alignment files were cleaned, sorted, and marked for duplicates using the Picard Tools (version 1.85) CleanSam, SortSam, and MarkDuplicates utilities, respectively. Files were further filtered using the TruSeq_Exome_TargetedRegions_v1.2.bed file. Variants were detected from the processed files using bcftools mpileup with minimum base quality and maximum depth parameters set to 20 and 1000, respectively. Files were filtered for quality and depth of 20 and 10, respectively (55,56). The resulting variant files in Variant Call Format were annotated using SnpEff (an open source tool), version 4.3. VAF was calculated as the ratio of allele depth (variant)/median depth (minimum read depth of 30); a minimal sketch of this calculation follows below. The mutational patterns analysis was done using the MutationalPatterns package in R (49).
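The VAF computation just described can be sketched in a few lines. This is a hypothetical illustration, not the study's pipeline code: the variant records are invented, and the "median depth" denominator in the text is approximated here as the total read depth at the site.

```python
# Hypothetical variant records: (variant_id, ref_allele_depth, alt_allele_depth).
records = [
    ("chr4:Nup98_p.X123Y", 41, 19),   # invented Nup98 point mutation
    ("chr7:geneA_p.A5V",   12, 9),    # fails the minimum-depth filter
    ("chr11:geneB_p.R77Q", 88, 11),
]

def vaf(ref_depth: int, alt_depth: int, min_depth: int = 30):
    """Variant allele fraction = alt depth / total depth at the site;
    sites covered by fewer than min_depth reads are discarded,
    mirroring the depth-of-30 floor stated in the text."""
    total = ref_depth + alt_depth
    if total < min_depth:
        return None  # insufficient coverage
    return alt_depth / total

for variant_id, ref_d, alt_d in records:
    v = vaf(ref_d, alt_d)
    print(variant_id, "filtered (depth < 30)" if v is None else f"VAF = {v:.2f}")
```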
Statistics

Statistical significance was determined by Student's t test (for comparison of two conditions) or ANOVA (for comparison of more than two conditions) using GraphPad Prism software (GraphPad Software, Inc) or SigmaPlot software (Systat Software Inc). Results are reported as the mean ± SD, with p < 0.05 considered significant. Survival curves were compared by the log-rank test.

Study approval

Animal studies were performed according to protocols approved by the Animal Care and Use Committees of Northwestern University and Jesse Brown VA Medical Center.

Data availability

Data are available from E. Eklund (e-eklund@northwestern.edu).

Conflict of interest: The authors declare that they have no conflicts of interest with the contents of this article.
2023-05-29T15:03:36.397Z
2023-05-01T00:00:00.000
{ "year": 2023, "sha1": "da1ff4778f0a68abfcf39b7b9254868e4dd030b3", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/article/S0021925823018951/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "742875c22624432ee00dce6010cb23bfbcb4191d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
253172075
pes2o/s2orc
v3-fos-license
Diabetes Self-Management and Health-Related Quality of Life among Primary Care Patients with Diabetes in Qatar: A Cross-Sectional Study

Diabetes self-management (DSM) practices are an important determinant of health-related outcomes, including health-related quality of life (HRQOL). The purpose of this study is to explore DSM practices and their relationship with the HRQOL of patients with type 2 diabetes in primary health care centers (PHCCs) in Qatar. In this cross-sectional study, data were collected from PHCC patients with diabetes via interview-administered questionnaires by utilizing two instruments: the DSM questionnaire (DSMQ) and the HRQOL Short Form (SF-12). Frequencies were calculated for categorical variables and medians were calculated for continuous variables that were not normally distributed. A statistical comparison between groups was conducted using chi-square for categorical data. Binary logistic regression was utilized to examine the relationship between the significant independent factors and the dependent variables. A total of 105 patients completed the questionnaire, 51.4% of whom were male. Approximately half of the participants (48.6%) reported poor overall DSM practices, and 50.5% reported poor physical health quality of life (PC) and mental health quality of life (MC). Female participants showed significantly higher odds of reporting poor DSM than male participants (OR, 4.77; 95% CI, 1.92–11.86; p = 0.001). Participants with a secondary education (OR, 0.18; 95% CI, 0.04–0.81; p = 0.025) and university education (OR, 0.18; 95% CI, 0.04–0.84; p = 0.029) showed significantly lower odds of reporting poor DSM than participants with no/primary education. Older participants showed higher odds of reporting poor PC than younger participants (OR, 11.04; 95% CI, 1.47–82.76 and OR, 8.32; 95% CI, 1.10–62.86, respectively). Females also had higher odds for poor PC than males (OR, 7.08; 95% CI, 2.21–22.67), while participants with a secondary (OR, 0.13; 95% CI, 0.03–0.62; p = 0.010) and university education (OR, 0.11; 95% CI, 0.02–0.57; p = 0.008) showed significantly lower odds of reporting poor MC. In conclusion, patients with diabetes reported poor overall DSM practices and poor HRQOL. Our findings suggest intensifying efforts to deliver culturally appropriate DSM education to patients and to empower patients to take charge of their health.

Introduction

Diabetes mellitus continues to be one of the most debilitating diseases. According to the International Diabetes Federation, the prevalence of diabetes in the Middle East and North African region is 12.8%, the highest in the world [1]. Alarmingly, in Qatar, the prevalence of diabetes among adults is 15.5%, which is higher than the global prevalence [2]. It is projected that the prevalence of type 2 diabetes in Qatar will increase to at least 24% by 2050 [3]. It is well-documented in the literature that individuals with diabetes report a deteriorated quality of life compared to individuals with no chronic conditions. This could be due to certain clinical characteristics of diabetes, such as the required comprehensive daily-care activities; the presence of comorbidities; and diabetes complications, the last of which is the single most important determinant of how persons with diabetes perceive their physical and mental quality of life [4,5].
Health-related quality of life (HRQOL) is a multidimensional concept that has been defined as the subjective evaluation of an individual's physical and mental health from the individual's perspective [6]. An improved or sustained quality of life in general is the primary goal outcome of diabetes early diagnosis, self-management, and treatment [7,8]. In recent years, there has been an increased interest in evaluating the quality of life of individuals with diabetes and other chronic conditions. While diabetes, due to its demanding and progressive nature, can impair an individual's perception of their quality of life, a deteriorated perception of quality of life can lead to poor self-care activities related to diabetes self-management [7]. The term self-management refers to the day-to-day activities that individuals undertake to minimize the negative outcomes and to prevent further complications of their chronic condition over the course of their illness [9]. Self-management behaviors for participants with diabetes mellitus represent a collection of actions that include undergoing treatment, self-monitoring glucose, exercising regularly, and managing diet, in addition to problem solving and reducing risks [10]. It is well-established in the literature that participants who are actively engaged in diabetes self-management have improved health outcomes [11,12]. Additionally, a multitude of studies have pointed to the importance of DSM practices as a way to improve quality of life, making HRQOL a significantly important outcome for individuals with diabetes [13,14]. However, the relationship between diabetes self-management and quality of life among people with diabetes has not been explored in Qatar. The purpose of this study was to investigate DSM practices among participants with type 2 diabetes attending primary health care centers (PHCCs) in Qatar and to explore the relationship between participants' DSM practices and their perceived quality of life.

Study Design

This is a multicenter, cross-sectional study conducted among participants with type 2 diabetes mellitus from seven primary health care centers in Qatar that offered clinical services for individuals with diabetes. Since the PHCC locations that offer diabetes clinical services are limited, we were bound by PHCC recommendations as to which clinics to approach.

Data Collection and Instruments

Data were collected utilizing interview-administered questionnaires from a convenience sample in patient waiting areas at the seven diabetes clinics. Three of the researchers interviewed patients in the waiting area after asking for their consent to participate in the study. The inclusion criteria included adult participants over the age of 18 with a type 2 diabetes diagnosis, who were fluent in English or Arabic, and who consented to participate in the study.

2. The Diabetes Self-Management Questionnaire (DSMQ) was utilized to assess patients' self-management activities related to their diabetes. The tool is composed of 16 items, which are scored on a 4-point scale from 0 (does not apply to me) to 3 (applies to me very much). The DSMQ has 4 sub-scales, namely, Glucose Management (items 1, 4, 6, 10, 12), Dietary Control (items 2, 5, 9, 13), Physical Activity (items 8, 11, 15), and Health-Care Use (items 3, 7, 14). From the 16 items, 9 items that are worded negatively require reverse scoring. Item 16 requests an "overall rating" of self-care, and it is included in the sum scale. The total possible score can range between 0 and 48, with a higher score indicating more effective self-management behavior. Scale scores were calculated as sums of item scores and then transformed to a scale ranging from 0 to 10 for ease of interpretation [15].
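As an illustration of the scoring just described, the sketch below sums the 16 items with reverse scoring and rescales the 0-48 raw sum to 0-10. It is a hypothetical sketch: the DSMQ publication defines which 9 items are reverse-scored, and the set used here is an illustrative placeholder, as are the responses.

```python
# Illustrative placeholder for the 9 negatively worded, reverse-scored items.
REVERSE_ITEMS = {2, 3, 5, 7, 9, 11, 13, 15, 16}

def dsmq_total(responses: dict[int, int]) -> float:
    """Sum 16 items (each scored 0-3, reverse-scoring negative items),
    then transform the 0-48 raw sum to a 0-10 scale."""
    raw = 0
    for item, value in responses.items():
        if not 0 <= value <= 3:
            raise ValueError(f"item {item}: response must be 0-3")
        raw += (3 - value) if item in REVERSE_ITEMS else value
    return 10 * raw / 48

# One hypothetical respondent answering all 16 items
answers = {i: 2 for i in range(1, 17)}
print(f"DSMQ sum-scale score: {dsmq_total(answers):.2f} / 10")
```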
The DSMQ is a reliable and valid tool that provides a measurement of diabetes self-management behaviors in relation to glycemic control [15,16]. The DSMQ was translated to Arabic using the standardized forward and backward translation method in order to validate the quality of the translated tool, and it was culturally adapted to the local context. Additionally, the Arabic versions of the questionnaires were reviewed by 10 bilingual adults and by two experts in diabetes.

3. The 12-item Health-Related Quality of Life Questionnaire Short Form (SF-12) was used to evaluate HRQOL. The SF-12 tool assesses patients' perceived health status and comprises two components: a physical health component (PC) and a mental health component (MC) [17,18]. Each component was calculated by applying an algorithm to generate a score from 0 to 100, with a higher score reflecting a better quality of life. We utilized the Arabic version of the tool, which has been determined to be a valid and reliable tool in previous studies [19-21].

Data collection occurred between 14 April and 24 April 2019.

Data Analysis

Data were analyzed using the Statistical Package for the Social Sciences (SPSS, version 24; IBM, Armonk, NY, USA). For the descriptive analysis, frequencies were calculated for categorical variables, and medians were calculated for continuous variables that were not normally distributed. A statistical comparison between groups was conducted using chi-square for categorical data. Binary logistic regression was utilized to examine the relationship between the significant independent factors and the dependent variables. The alpha level was set at 5% for statistical significance. The PC, MC, and DSMQ variables were each further split into two groups based on the median scores. Participants who scored below the median were classified as "poor" and were given a score of "0", while those who scored more than or equal to the median were categorized as "good" and were given a score of "1".
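The dichotomization and regression steps just described can be sketched as follows. This is a hypothetical illustration using simulated data and Python's statsmodels rather than SPSS; the variable names and simulated distributions are assumptions, not the study dataset, so the printed odds ratios will not match the published ones.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 105  # same sample size as the study, data simulated
df = pd.DataFrame({
    "dsmq": rng.uniform(0, 10, n),       # DSMQ sum-scale score (0-10)
    "female": rng.integers(0, 2, n),     # 1 = female
    "education": rng.choice(["none_primary", "secondary", "university"], n),
})

# Median split: 0 = "poor" (below median), 1 = "good" (at or above median)
df["dsm_good"] = (df["dsmq"] >= df["dsmq"].median()).astype(int)

# Binary logistic regression of DSM status on sex and education
# (reference category: no/primary education, dropped by drop_first)
X = pd.get_dummies(df[["female", "education"]], drop_first=True, dtype=float)
X = sm.add_constant(X)
model = sm.Logit(df["dsm_good"], X).fit(disp=False)

# Odds ratios with 95% confidence intervals, the form reported in the text
or_table = np.exp(model.conf_int())
or_table["OR"] = np.exp(model.params)
print(or_table)
```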
Results

The study included a total of 105 participants, 51.4% of whom were male. The majority of the participants had either a secondary (41.9%) or a university education (40.0%). A total of 66.7% reported having diabetes for more than two years. A total of 47.6% of the participants reported having at least one comorbidity. Almost half of the participants reported having poor DSM (48.6%), poor PC (50.5%), and poor MC (50.5%). The mean HbA1c level was 7.6 ± 1.9 and was reported by less than half of the participants (45.7%) (Table 1). The associations between the baseline characteristics of the participants with diabetes and their DSM status are presented in Table 2. The majority of females reported poor DSM (69.0% versus 30.0%; p < 0.001). There were no significant associations between age categories, nationality, diabetes duration, comorbidities, and HbA1C level with DSM status. A further analysis of the DSM subscales revealed that participants with a poor physical activity status were more likely to be male (p = 0.001) and to have had diabetes for over 2 years (p = 0.024). Similarly, participants with diabetes for over 2 years had poorer health care use compared to those with a shorter diabetes duration (79.0% versus 59.0%; p = 0.035). Additionally, females reported poorer glucose management than males (61% versus 39%, p = 0.048), as observed in Table 3. The associations between the participants' baseline characteristics and PC and MC QOL statuses are presented in Table 4. Females were more likely to report poor PC compared with males (p < 0.001). Participants with good DSM reported good PC (p < 0.001). The median HbA1C level was higher for participants with poor MC than for participants with good MC (p = 0.011). No association was found between DSM and the HRQOL components in the regression analysis.

Discussion

This was a cross-sectional study that revealed important findings related to diabetes self-management practices, health-related quality of life, and associated factors among a sample of patients with diabetes of the primary health care system in Qatar. Our study found that approximately half of the participants reported poor overall DSM practices, as revealed by the DSM median score. This result is comparable to that of similar studies conducted in Kuwait and Saudi Arabia, which reported similar DSM median scores of 6.5 and 5.04, respectively, indicating poor self-care habits among their participants [22,23]. Barriers to DSM have been reported extensively in the literature. Locally, in Qatar, a qualitative study shed light on the barriers reported by 29 participants with diabetes [24]. The authors found barriers similar to those reported in other studies, such as work-related stress, the cost of testing strips, and long working hours, all external barriers. However, with Qatar being a country that hosts 94 other nationalities, culture also plays a major role in how individuals with diabetes perceive their DSM [24]. Furthermore, a related striking result was that less than half of our participants knew their most recent HbA1c value. These findings reflect the low knowledge levels of our participants. A similar study that surveyed adults with type 2 diabetes on their knowledge, attitudes, and practices (KAP) in Qatar found that participants reported poor knowledge related to diabetes [25]. Another KAP study, also from Qatar, that surveyed 2400 people from the general public found that the knowledge component had the lowest score. As a matter of fact, 69% of their participants scored low on knowledge related to normal fasting glucose levels [26]. Other studies from the region have confirmed low levels of knowledge related to diabetes [27,28]. Despite the extensive evidence that diabetes self-management education (DSME) is an effective and available resource in improving knowledge related to diabetes, DSM activities, and quality of life, in addition to other diabetes-related clinical outcomes [11,12], attendance remains poor, and, therefore, DSME remains an underutilized resource [29,30]. Education level was found to be a significant predictor of reporting better DSM and better MC in our study. Findings from other studies examining the relationship between education and DSM have been inconsistent. A similar study from the region in which participants with diabetes were surveyed in a primary health care setting about DSM found that participants with formal education reported better DSM [31]. On the contrary, a systematic review found that education was not a significant factor in reporting better DSM [32]. Education level or attainment, a long-standing indicator of socioeconomic status, is a complex and multidimensional factor that should not be measured the traditional way.
A limitation of our study is that participants' previous knowledge and training related to DSM were not taken into account, which may have influenced our results [33]. Furthermore, our study revealed important gender differences in diabetes self-management, as females reported worse DSM habits than males overall, which remained significant in the regression analysis. When we further analyzed the DSMQ, we found that females also reported poorer glucose management than males. Glucose management is an essential cornerstone of DSM that, if not managed properly, could lead to further complications. Sex differences in diabetes self-management and disease outcomes exist, as seen in other studies. Sex differences have been attributed to biological factors, such as differences in hormonal pathophysiology [34,35]. Gender differences, however, are related to complex psychosocial processes that shape human behavior and in turn influence the clinical outcomes of diabetes [34]. For example, women in general have poorer glycemic control and are less likely to reach their A1c targets [36,37], which our study results are in line with. Considering the local psychosocial and cultural contexts when comparing our results to other studies conducted in the Gulf Cooperation Council (GCC) region, we found that our study results are in line with another study from Kuwait in which men scored significantly better on the DSMQ than women [22]. Gender differences in DSM are clearly reported in the literature. Females tend to assume a responsible caregiving role in diabetes self-management toward family members, especially toward their spouses. The literature shows that the support females with diabetes receive from family members, especially from spouses, is less compared to the support that males receive from their female spouses [38,39]. This might be linked to our finding that women reported poorer PC than men. In most studies that examined gender differences in quality of life among individuals with diabetes, it was found that women tended to report worse outcomes than men [5,40]. A study in Qatar that examined the quality of life predictors of individuals with diabetes also found that females tend to report a lower quality of life than males [41]. In addition to gender, we found that older participants reported poorer PC than younger participants. The link between diabetes and the impairment of HRQOL has long been studied and documented. A longitudinal study that included data from 26,344 participants found that participants with a type 2 diabetes mellitus diagnosis had a fivefold increase in the odds of reporting a significantly poorer quality of life [42]. Another longitudinal study that assessed patients with diabetes at a five-year follow-up point also found a deterioration in patients' reported HRQOL over time [43]. Although our study did not find a direct link at the multivariate level between DSM and HRQOL, it sheds light on other factors associated with DSM and HRQOL. This study had limitations. A main limitation is that it is a cross-sectional study, in which causal relationships cannot be established; in addition, the small sample size of patients limits the generalizability of our results.

Conclusions

Our study results highlight the importance of providing patients with diabetes with diabetes self-management education to enhance patients' health literacy, knowledge, and skills needed to successfully self-manage diabetes and to prevent its complications, and ultimately to improve quality of life.
Furthermore, these programs should be culturally adapted to suit local needs and to address gender gaps.

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available on request from the corresponding author.

Conflicts of Interest: The authors declare no conflict of interest.
2022-10-28T15:20:31.189Z
2022-10-25T00:00:00.000
{ "year": 2022, "sha1": "f8cc9820ce55f76335dde91012507d0477b35db7", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2227-9032/10/11/2124/pdf?version=1667818563", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "83290c505435ec7e890d34ce42f3b2ab4dfcc550", "s2fieldsofstudy": [ "Medicine", "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
250357672
pes2o/s2orc
v3-fos-license
COVID-19 and plasma cells: Is there long-lived protection?

Summary

Infection with SARS-CoV-2, the etiology of the ongoing COVID-19 pandemic, has resulted in over 450 million cases with more than 6 million deaths worldwide, causing global disruptions since early 2020. Memory B cells and durable antibody protection from long-lived plasma cells (LLPC) are the mainstay of most effective vaccines. However, ending the pandemic has been hampered by the lack of long-lived immunity after infection or vaccination. Although immunizations offer protection from severe disease and hospitalization, breakthrough infections still occur, most likely due to new mutant viruses and the overall decline of neutralizing antibodies after 6 months. Here, we review the current knowledge of B cells, from extrafollicular to memory populations, with a focus on distinct plasma cell subsets, such as early-minted blood antibody-secreting cells and the bone marrow LLPC, and how these humoral compartments contribute to protection after SARS-CoV-2 infection and immunization.

Post-acute sequelae of SARS-CoV-2 infection occur after both mild and severe disease at frequencies as high as 10-30%, making the picture even more puzzling. Therefore, understanding immune mediators of protection from infection and severe disease, as well as the immune mechanisms of the sequelae, is critical to overcoming this pandemic. Viral neutralizing antibodies (nAbs) secreted by LLPC provide durable protection after infection. Prior to COVID-19, the best-known pandemic was the 1918 H1N1 influenza virus, which offered life-long serologic protection after primary infection. 3 However, reinfections could occur from new re-assorted influenza viral mutants and not necessarily from the previously circulating strains. But in COVID-19, unlike influenza virus infections, antibody responses after SARS-CoV-2 infection, whether mild or severe, appear to persist for only 18-20 months. 4,5 Thus, antibody protection after SARS-CoV-2 infection may not necessarily be long lasting and may be a cause of breakthrough infections. Additionally, similar to influenza viruses, the evolution of new viral variants of SARS-CoV-2 for which there is little cross-protection may be another cause of repeat coronavirus infections with the recent Delta 6 and Omicron 7 mutants despite history of previous infection. 8-10 In the United States and then globally, vaccines to SARS-CoV-2 were introduced within a year after the start of the pandemic, which was an incredible scientific achievement. These vaccines provided robust protection, especially with high titers of nAbs, and afforded safeguards against severe disease. However, the primary vaccine series were effective only short-term and exhibited waning efficacy within months. 11-14 Thus, the CDC guidance now recommends a booster dose 6 months after the initial primary two-dose immunization. Despite shielding from hospitalizations, waning vaccine titers were not necessarily effective against new viral variants, causing many breakthrough infections (BTI), even though most were mild. In all, with emerging viral mutants, understanding the mechanisms of durable humoral protection from infection and vaccination is vital in the fight against this pandemic. The pathways by which activated B cells persist as MBC, or further differentiate into ASC, have been studied in mouse models 15 but are not well described in human studies 16 (Figure 1).
| B CELLS AND LONG-LIVED PLASMA CELLS IN VIRAL INFECTION

At steady state, healthy humans have a low ASC frequency in the circulation (i.e., <1% of the total B cells) 17-19 but during acute viral infections, ASC rapidly burst into the bloodstream with a rise in protective pathogen-specific antibody levels. 18 While the neutralizing capability of these populations has been confirmed, the impact of EF-biased responses on memory formation, plasma cell development, and bone marrow engraftment is less clear.

Figure 1. Heavy arrows: dominant pathway; light arrows: secondary pathway; dotted arrows: unconfirmed pathway. GC, germinal center; aNav, activated naive B cells; DN2, double negative (i.e., IgD−CD27− B cells that also lack expression of CXCR5 and are involved in the EF response that is outside the GC but can still have T cell help); ASC, antibody-secreting cell; SLPC, short-lived plasma cell; LLPC, long-lived plasma cell.

Whether these differences reflect primary versus secondary exposures or result from the different types of viruses is not entirely clear. Typical serum titer responses reveal an early GC-independent phase with the appearance of low-affinity, primarily IgM, 21,25 followed by high-affinity, class-switched, pathogen-specific durable IgG and IgA. After the initial robust rise in Ab titers, the decay kinetics of the antigen-specific levels occur in two phases, as shown in nonhuman primate models. The first is a rapid fall-off due to apoptosis of short-lived ASC, followed by a slower decline of "memory" Ab after months, likely from LLPC generation and maintenance. 26 The main source of serum "memory" Abs arises from circulating GC-derived MBC, which differentiate into ASC to mature into tissue-resident LLPC, both of which are extremely rare and produce highly diverse and affinity-matured Abs. 27-30 In mice, LLPC have been identified in the spleen, the gut, and the bone marrow (BM), and are found weeks following the initial induction. 31 The mechanisms of how human LLPC are generated and maintained are not entirely clear. However, they are known to include GC and MBC responses and the migration of ASC to long-lived tissue sites such as the BM niches. In humans, LLPC were found in the BM CD19−CD38hiCD138+ compartment from natural viral infections that occurred over 40 years ago. However, exposures to repeat viral infections and vaccination were localized in other BM compartments such as CD19+CD38hiCD138+ subsets and the LLPC subsets. 32 After the initial burst into the blood, most early-minted ASC or plasmablasts undergo apoptosis, triggering the rapid primary decline of Ab titers. Only some ASC eventually enter long-lived tissue sites such as the BM to undergo further development and maturation through factors provided by the specialized microniche. 33 These cells are likely responsible for the second, slower Ab decay. Histology shows that LLPC have a unique morphology compared with nascent ASC, such as an increased cytoplasm/nucleus ratio and a higher number of mitochondria. Although LLPC are derived from early-minted ASC, they are transcriptionally and epigenetically different, illustrating ongoing maturation in the BM sites. These special molecular and epigenetic pathways enhance longevity, minimize energetic needs, and upregulate programs to acquire resistance to apoptosis in order to maintain antibody secretion for a lifetime. 34
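The biphasic decline described above is commonly summarized as a biexponential decay. The formula below is a generic sketch of that idea rather than a model given in this review; A and B are illustrative parameters denoting the titer contributions of short-lived ASC and LLPC, respectively.

```latex
\[
\mathrm{Ab}(t) \;\approx\; A\,e^{-\lambda_{\mathrm{SLPC}}\,t} \;+\; B\,e^{-\lambda_{\mathrm{LLPC}}\,t},
\qquad \lambda = \frac{\ln 2}{t_{1/2}},
\qquad \lambda_{\mathrm{SLPC}} \gg \lambda_{\mathrm{LLPC}} .
\]
```

Early after antigen exposure the first term dominates and titers fall quickly; once it has decayed, the measured half-life approaches that of the slower, LLPC-derived component.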
In this review, we will investigate whether ASC after SARS-CoV-2 infection and new COVID-19 mRNA vaccines follow the canonical B cell and LLPC maturation programs or if these humoral responses are fundamentally altered.

| Primary infection: virus-specific antibodies are highly diverse, peak early, and decline

The majority of primary SARS-CoV-2 infections elicit a robust systemic viral-specific Ab response initially within 1-2 months, 35-37 although the Ab magnitude among infected individuals is heterogeneous, with peak levels varying over 200-fold. 37,38 By and large, Ab levels were reduced by 5-fold to 10-fold compared to the peak at 5 months, 35-37,39 with some studies showing that they remain detectable for 5-12 months, 37,39-45 13-14 months, 46,47 and some suggesting 18-20 months, 4,5 in the absence of vaccination and reinfection. However, the pandemic started only 2 years ago, and so longer durability data are just not available. After an early peak within 2-5 weeks, Abs decline in a fashion that varies by isotype, viral antigen-specificity, and age. 37,48,49 While IgM and IgA often wane rapidly and become undetectable after 2-3 months, 50,51 IgG decays at a slower rate. Additionally, different viral antigens such as nucleocapsid (N), receptor-binding domain (RBD), and spike (S) also give rise to variable kinetics. For example, serum N-Abs decay more rapidly compared to RBD- or S-Abs. The estimated average half-life in most infections of S-specific IgG, IgM, or IgA1 is 14-33, 8, or 6 weeks, respectively. 37,52 On average, the fastest waning Abs were N-specific IgG, with two-thirds of the levels remaining at 4-9 months and undetectable levels in 33% of the patients. By 1 year, almost all patients had no measurable N-specific IgG. 53-57 S-specific IgG decays slowest, waning to less than one-third of the peak levels at 8-10 months. However, nearly all patients (90-97%) have detectable S-IgG titers at 12-13 months. 53-55 Finally, not all SARS-CoV-2-infected patients developed demonstrable serum Abs, with some studies reporting that 5% to 33% of PCR-positive patients, particularly young adults, did not seroconvert. 58-60

Antibodies that functionally neutralize correlate with total virus-specific Abs and RBD-specific Abs. 61 Both total virus-specific Abs and nAbs usually peaked between 3 and 5 weeks after infection, but also rapidly decayed with an average half-life of 8-13 weeks. 37,41,50,52,62-65 However, it appears that in mild to moderate infections, nAbs could last for at least 5-7 months. 14,39,66-70 Both total viral-specific Abs and nAbs rapidly wane initially, but then decline at a much slower rate to remain relatively stable with time. 37,39,51,71-73 In all, infection-induced serum S- and RBD-specific IgG were positively correlated with nAbs, and these antibodies peaked within a few months, waned rapidly at first, and then decayed more slowly over the first year. 37,52,53,56,74-76 Whether this slower decay will ultimately plateau, as seen in other infections, to provide LLPC and life-long protection remains an open question. Increased Ab responses were associated with older age, male sex, and hospitalization. 38 However, disease severity seemed to have the greatest effect on the magnitude of infection-induced Abs. 38,73,77,78 In general, severe infections were associated with both a more rapid rise and a higher peak in both binding and neutralizing Abs. 40,51,79,80
These Abs rocketed rapidly within days of symptom onset, 40,81 especially in hospitalized or critically ill patients compared to mild (outpatient or asymptomatic) subjects. 40,50,51,54,56,73,79 Moreover, unlike conventional responses, the majority of these responses did not generate an early IgM response followed by the conventional class-switched IgG and IgA. 50,77 Instead, a class-switched IgG with neutralization was detected early in these critically ill patients. Later monoclonal antibody studies showed low or germline mutation frequencies in severe infections, implicating unique nonconventional B cell origins. 81-83

| Germinal centers are disrupted in severe COVID-19 infection

Unlike typical viral infections, early studies showed that in severe SARS-CoV-2 infections, the GC are impaired 84,85 and are associated with large plasmablast expansions and enhanced Ab levels compared to mild disease 37,40,77,86 (Figure 1). The decreased numbers of Tfh in the draining lymph nodes (LN) and spleen provided evidence that functional GC fail to form during critical illness. 84,85 Furthermore, in these severely ill patients, a robust EF B cell response dominates, with higher ASC expansion that correlates with nAb levels. 81,86 Corroborating this model, multiple potent nAbs were isolated from severe patients exhibiting only a few mutations, suggesting that EF responses can give rise to effective nAbs. 83

| SARS-CoV-2-specific ASC responses

The rapid and transient expansion in the circulation of ASC is generally a hallmark of early B cell responses during acute viral infections. 21 Initial infections with SARS-CoV-2 give rise to an early Ab peak within the 2nd week post-induction that wanes substantially and rapidly over time (declining by 5-fold to 10-fold within 3-4 months or to <7% of the peak at 5-6 months) (Figure 2). 19,35-37,51,71-73,77,93-95 This fast decay most probably reflects apoptosis of many circulating short-lived IgG and IgA ASC, known to appear within a few days after initial antigen exposure. 20,22-24,32,96,97 In severe infections, circulating ASC, defined as CD19+CD27hiCD38hi and including CD138+ subsets, were expanded, although their frequency was not associated with virus-specific IgM. 81,86 A similar pattern was seen in Dengue infections, where higher ASC expansions were associated with more severe illness. 86,98,99 Hence, the rapid antibody decay is a manifestation of apoptosis of the nascent blood ASC. Early ASC may serve as a biomarker of disease severity, 40,81 which, at the same time, raises concerns about a potential pathogenic role of ASC. 86,98,99 One study showed that the expansion of ASC in the circulation in hospitalized patients with COVID-19 infection was associated with decreased 28-day mortality, although the differences were small, suggesting ASC might actually also serve as a marker of disease resolution. 100 Whether ASC expansions are pathogenic or bystander effects from certain proinflammatory cytokines supporting ASC survival, such as IL-6 and TNFα, which are coincidentally elevated in severe COVID-19, 81,86,101-103 is not entirely clear. A meaningful ASC response depends not only on quantity but also on quality, such as nAb and different isotypes. Different isotypes (IgM, IgA, and IgG) were notable in the serum and/or mucosal sites 1-2 weeks post-symptom onset. 40,77,104 In COVID-19, although IgA is normally responsive at mucosal sites, virus-specific IgA ASC were also expanded in the circulation. 95,105
Additionally, SARS-CoV-2 neutralization was correlated more closely with IgA than IgM or IgG in the first weeks after symptom onset. 95 Despite this result, the IgA responses were not associated with disease severity, and serum IgA concentrations decreased by 1 month. However, mucosal neutralizing IgA remained detectable in the saliva for more than 3 months, suggesting locally differentiated IgA ASC may have a longer half-life than systemic IgA ASC and confer protection from reinfection. 95 In all, IgA ASC can be found in the blood and mucosal sites during an acute infection. However, it is not clear if mucosal IgA ASC differentiate locally or systemically and then migrate to the mucosal sites in acute illness. Another study showed that RBD-specific ASC are released into the blood transiently during acute COVID-19 with high IgM and low IgG ASC frequencies. 106 However, these results may have been skewed by antigen-labeled flow cytometry, which only selects for ASC that retain surface BCR expression. During B cell to ASC differentiation, surface Ig receptors are often downregulated. 32,97 Interestingly, only IgM ASC preferentially express surface BCR compared to IgG ASC. 82 Hence, antigen-specific surface flow cytometry of ASC may neglect the majority of blood ASC in this infection.

| Memory B cell evolution and cross-variant reactive antibodies in COVID-19 infection

Understanding MBC specificity and kinetics is key to predicting durability of protection from reinfection. After infection, it is well-established that a strong MBC response is elicited. While most Ab response metrics decrease within 4-6 months, the frequency of circulating MBC remains relatively stable for 6-9 months after infection (including mild and asymptomatic), 36,42,52,107-109 and may even increase before plateauing during convalescence. 37,41,52,78 It appears that even after viral clearance, the MBC response continues to mature. Perhaps more importantly, infection-induced MBC continue to accumulate somatic mutations over 12 months, comparable to those acquired in other acute viral infections. 85,110 This maturation results in the emergence of unique clones and the production of memory Abs with increased affinity. 36 In severe illness, MBC responses are dominated by EF-derived clones with low mutation frequencies, 81 whereas in mild illness, they are characterized by clonally diverse and mutated MBC. 112 Evolution of MBC was observed over 12 months after infection, which, together with persistence of GC, intimated antigen persistence. 36,42,113 Interestingly enough, some asymptomatic individuals 4 months after the onset of COVID-19 infection showed persistence of SARS-CoV-2 nucleic acids in intestinal biopsies, demonstrating antigenic persistence. 36 With each new emerging mutant, whether MBC in the LN continue to rapidly evolve to generate higher affinity clones that could provide a stronger and more cross-reactive protection will require further study.

| Lack of bona fide LLPC in response to COVID-19 infection

High-affinity "memory" nAbs in the serum are the effector molecules of long-term protection. While ASC provide a robust Ab response during the acute infection, tissue-resident LLPC in the BM are the cellular origins of such persistent "memory" Abs. LLPC secrete Ab continuously in the absence of antigen. 114 After mild SARS-CoV-2 infections, plasma cells specific for SARS-CoV-2 have been identified in the BM 7-11 months after infection. 45
However, the BM niche is known to contain both LLPC and other shorter-lived subsets 32 (Table 1), and this study 45 did not demonstrate whether these viral-specific ASC were residents of the BM LLPC subset 32 (i.e., PopD; Table 1). Furthermore, the serologic data after acute infection 37,39,51,71-73,77 may not be consistent with the presence of LLPC, and thus, whether this infection generates bona fide LLPC still remains unknown (Figure 1).

| Transcriptional profiles of ASC in COVID-19 infection

ASC single-cell profiling from COVID-19-infected patients is often sorted from total peripheral blood mononuclear cell (PBMC) samples. 115,116 Despite acute and recovered time points and known expansions, these cells are relatively rare in the blood.

Figure 2. ASC kinetics and Ab effector functions during responses to infection with and vaccination against SARS-CoV-2. Initial infection induces ASC that produce virus-specific, low-affinity serum Abs. In general, mild infection, priming vaccination, or tertiary vaccination generates a GC response, by which the derived MBC undergo continued clonal evolution over 6-12 mo, leading to the production of more potent and broader nAbs. The frequency of ASC generally correlates with the magnitude of the serum Ab levels (total binding Ab pool size). Dose 1 vaccine induces a robust GC response resulting in the generation of virus-specific ASC (and MBC), including in infection-naive subjects, which is substantially enhanced either by Dose 2 (in infection-naive subjects) or in previously infected (recovered) subjects, and further enhanced by boosters (in infection-naive subjects). The highest total binding Ab production is observed in recovered, tertiary vaccinees. Dose 1 ignites potent nAbs (in about half the subjects) that are enhanced by Dose 2 and further enhanced by boosters, against the wildtype but less potent against variants (decreasing cross-variant nAb potency). S-specific and nAbs wane over 4-6 mo following infection, although total binding Abs could be detected 18-20 mo post-infection. The nAb waning period in COVID-19-naive vaccinees is also usually 4-6 mo; it may last longer in previously infected subjects (i.e., 10-12 mo). Ab, antibody; nAb, neutralizing antibody; ASC, antibody-secreting cell; EF, extrafollicular; S, spike.

Therefore, single-cell studies using PBMC can at best enumerate the ASC, B cells, and other lymphocytes but have major limitations in understanding the transcriptional profiles of ASC due to the small number of ASC recovered from PBMC isolations. Using PBMCs, one study explored the transcriptional profile of ASC from COVID-19 during acute infection, comparing those who shed virus <7 days versus <14 days and healthy adults. Patients with COVID-19 had a higher percentage of ASC, with significantly reduced naive B cell frequencies, as compared to healthy controls. As expected, they could only observe that B cell activation-related genes and ASC differentiation programs were upregulated in the COVID-19 patients. 115 Another PBMC study showed that ASC from a severe cohort had interferon-responsive genes such as FOS, IFI6, and MX1, 117 suggesting the potential of EF B cell origins found in autoimmunity and recently described in severe COVID-19. 81,118 However, the ASC numbers analyzed were small. Qi et al. 119 re-analyzed data from three published PBMC single-cell datasets from mild and severe COVID-19 and showed that metabolic genes regulating oxidative phosphorylation were expressed at the highest level in ASC of severe COVID-19.
Although interesting, the progressive upregulation of this pathway had been previously appreciated in B cell to ASC differentiation. 120,121 The novel single-cell technologies have proven to be extremely powerful in deeply characterizing the transcriptional profiles and the VDJ sequences of plasma cells. However, the rarity of ASC, despite their large expansions, together with their propensity for apoptosis, are the major technical limitations of further enriching this population for single-cell studies. Hence, using total PBMC isolations to study ASC on a single-cell level has many limitations. To properly analyze the heterogeneity of ASC subsets and their possible role in severe and mild COVID-19 infection, strategies for better enrichment will be needed to provide insights into the ASC metabolic, homing, survival, and maturation pathways to become a LLPC.

| Neutralizing versus non-neutralizing antibodies in COVID-19 infection

Neutralization is thought to be the main mechanism of immune protection to most infections, including SARS-CoV-2. This mechanism is achieved by blocking the engagement of the SARS-CoV-2 S protein to its cognate receptor ACE2. As expected, many nAbs target the RBD. 122,123 During severe COVID-19 illness, patients have higher levels of Abs and exhibit an oligoclonal ASC expansion. 86 Higher nAb titers are also seen in severe disease. 124 Additionally, in SARS-CoV-1, Fc-mediated Ab function can skew macrophage activation to a more inflammatory state in the lung, leading to tissue injury. 144 Furthermore, Abs against SARS-CoV-2 could facilitate viral entry into myeloid cells through Fc receptors in vitro. 145,146 Although studies have shown viral genetic and protein content inside macrophages, 147-150 there is still debate whether this cell type is permissive to productive SARS-CoV-2 viral replication. 151 To our knowledge, there is no evidence of clinically significant antibody-dependent enhancement (ADE) with SARS-CoV-2 infection or vaccination. In vivo animal models of SARS-CoV-2 infection have revealed that Fc-mediated Ab function improves disease outcomes and reduces viral replication. Consistent results have been seen in mice, 145,152-155 hamsters, 154 and macaques. 145,156,157 In humans, Fc patterns differentially correlate with disease outcomes. Patients with clinically more severe COVID-19 disease exhibited a more proinflammatory pattern of Ig Fc glycosylation than those with mild disease. 158 Fc effector functions also include antibody-dependent cellular phagocytosis (ADCP), antibody-dependent complement deposition (ADCD), and antibody-dependent cellular cytotoxicity (ADCC). 160,161 ADCP was associated with lower inflammation and clinically milder COVID-19 than ADCD. 162 Interestingly, adults after mRNA vaccination have a distinct pattern not seen with infection. 159,163 This finding demonstrates how different immunity to vaccination and infection can be. In another study, Zohar et al. 164 showed that in severe SARS-CoV-2 infection, Fcγ receptor binding and Fc effector activity were compromised and associated with COVID-19 non-survivors. Other protective mechanisms of non-nAbs have also been proposed.

Similar to SARS-CoV-2 infection, mRNA vaccination induces early and robust production of S-specific IgM, IgA, and IgG in the circulation 179-183 (Figures 2 and 3). The GC disruptions present in severe COVID-19 patients 81,84 are not observed after mRNA vaccination, and active SARS-CoV-2-specific GC responses can be detected for several months. 183
However, with vaccination, the expansion of ASC is often less robust compared to acute infections (i.e., an average of 2-6% and mostly <20% of the total circulating B cells). 18,22-24 In contrast with infection, which exposes the infected patient to epitopes across the entire viral proteome, vaccines only include S epitopes. 166 Therefore, as expected and unlike infection, vaccination incites a largely homogeneous S-specific response among vaccinees. 184-186 After receiving the first dose of mRNA vaccine, only about half of the recipients produce nAbs, which, in most vaccinees, increase after the second dose. 187-189 In comparison with the two-dose mRNA vaccination strategy, the single-dose adenovirus vaccine used in the United States elicits lower S-specific Abs. 175,190,191 However, it sufficiently primes the immune system and provokes a durable humoral and cellular immunity lasting up to 8 months. 192 As with infection, serum binding S-specific IgG elicited by vaccination (both mRNA-based and adenovirus-vectored) positively correlates with nAbs 40,74-76,193-195 and is associated with vaccine efficacy (VE). 74,193-196

Consistent with epidemiological data on VE, S-specific binding and neutralizing Abs induced by vaccination exhibit a time-dependent reduction. 12,13,74,174,197,198 Moreover, most ASC undergo apoptosis rapidly after their peaks in peripheral blood (i.e., 5-7 days post-induction), resulting in a sharp fall of total Abs. 20,23,24,32,114 Ab waning often occurs within 4-6 months, yet it starts to become evident at 3-10 weeks after the second dose. 199 The Ab decrease is more profound in immunosuppressed patients 13,200 and exhibits a more intense decline in older individuals. 13 Breakthrough infections in vaccinated individuals are generally mild 209,222 and are associated with substantially lower risk of developing long COVID symptoms than infections in unvaccinated individuals. 223 The fact that most BTI are associated with lower disease severity 8,209,222,224 suggests that nAbs elicited by wildtype antigens remain protective against severe infection with SARS-CoV-2 variants. Nonetheless, such protection from severe disease in BTI may be attributed to vaccine-induced S-specific T cell responses, as variants can evade Abs but not T cell immunity. 225-230 Of note, in addition to nAb evasion and waning immunity, the lack of protective mucosal IgA in the setting of mRNA-based (i.e., intramuscular) vaccination 177,178 might also be a contributing factor in BTI (Figure 2). The antigenic variants that emerge and become the predominant strain are mostly those that escape pre-existing immunity. Compared with the wildtype, the Alpha, Beta, Gamma, and Delta variants exhibit a several-fold drop in vaccination-induced nAbs 231-233 (Table 2) that further decreases over time. 53 Moreover, VE against variants is predicted to lose more than half of its power at 12 months, 231 which may explain why BTI and reinfections with variants are increasingly occurring. 70,206,234,235 All this was further complicated by the emergence of Omicron in the fall of 2021. Once identified, this highly transmissible Omicron variant spread rapidly worldwide, and by mid-winter it accounted for nearly 100% of new US infections. 7 Compared to the ancestral strain, Omicron has 56-60 mutations throughout its genome (Table 2). Of these mutations, 31-37 are in S, with 15-16 of those in RBD. 236,237 While RBD accounts for only 15% of S, it is the target of over 90% of serum nAbs. 123
| Booster vaccines increase nAbs and reduce infection with nAb-escaping variants

The rapid waning of VE (and, correlatively, of nAbs) and the ever-changing antigenic landscape of circulating variants have motivated booster doses.

| Memory B cell responses in COVID-19 vaccination

Like infection, primary vaccination against SARS-CoV-2 also provokes a strong MBC response, and booster vaccines elicit expansion of MBC that rapidly enhances production of cross-variant nAbs. 183,247 Similarly, the frequency of circulating MBC remains relatively stable for 6-9 months post-vaccine. 181 In contrast to infection, where class-switched MBC continuously evolve over time, 36,42,113 vaccine-generated primary MBC show little or no change in the blood or secondary lymphoid organs weeks after the second dose. 111,113,254

| Lack of bona fide LLPC in response to COVID-19 vaccination

To be long-term effective, a vaccine must generate LLPC, which deliver durable recall protection through constitutively secreting circulating "memory" Abs as a rapid primary response. In reality, not all vaccines generate and maintain LLPC. For example, while tetanus, smallpox, or MMR vaccines offer long-lasting protection (i.e., long-lived vaccines: LLV), pneumococcal 23-valent (PSV23), hepatitis B, or influenza vaccines confer short-lived efficacy (i.e., short-lived vaccines: SLV). 27,255-258 Although the mechanisms for LLV and the generation and maintenance of LLPC remain poorly understood, 34,114 infections with a whole, replicative virus often induce a long-lasting response, whereas viral Ag component-based vaccines usually lead to short-lived immunity. 114 Concerns have been raised that COVID-19 vaccines more likely belong to the SLV group. 259-261 The rapid waning of acquired humoral immunity within 4-6 months after completing the two-dose series and after boosters (i.e., VE decreases to 66% and 78% within 4 months 262) (Figure 2) is inconsistent with LLPC being generated and maintained. Indeed, mRNA vaccination, instead of consistently provoking a primary LLPC response, may just trigger a secondary recall. 113,260,263 The nature of such a recall response might be that of immunity conferred by pre-existing cross-reactive MBC and cross-reactive memory T cells, 263 which were previously elicited by prior vaccination 113 or previous eCoV infection, and which may be mostly non-neutralizing and non-protective against the newest virus. In a single study, mRNA vaccines are reported to induce persistent GC reactions that last for months. 183

Note (Table 2): Natural infection with wildtype SARS-CoV-2 virus or receipt of a homologous vaccine booster after completion of the two-dose wildtype S-based vaccine series induces nAb potency (and hence, protection) of the highest level against the wildtype virus, which decreases gradually against variants of concern. In general, the most or the least reduced protection is observed in the variant bearing the most numerous or the least numerous mutations, respectively, in the RBD/RBM sequence (i.e., most epitopic changes or more conserved epitopes, respectively). It is the generated MBC, which increase in number and continue to clonally evolve for at least 6-12 mo after (mild) infection or upon boosting, that give rise to Abs with higher potency and broader breadth in neutralizing activities against the evolving virus. See text for detail. Sequence mutation is retrieved from https://covdb.stanford.edu/page/mutation-viewer/.
| Transcriptional profiles of ASC after vaccination

| Local vs systemic protection: Abs at the viral entry site

While infection incites a specific mucosal response dominated by potent neutralizing IgA early on at the site of viral entry, 95 intramuscular mRNA vaccination elicits little protective mucosal IgA. 177,178

| Protection conferred by vaccination versus infection: equivalent, superior, or inferior?

To control reinfection at both population and individual levels, it is essential to understand whether protection elicited by vaccination might be more durable than by infection. 271 Whether this is true also for Omicron will require further study. The differences between infection- versus vaccine-induced protective durability may be influenced by several immunological factors. These different aspects may drive MBC evolution and selection toward distinctive Abs. For example, infection-induced MBC appear to undergo greater affinity maturation than those induced by vaccination, possibly generating more robust and durable immunity. 36,42,107,111,113 Antigen persistence in infection is weeks, while for mRNA vaccines, it is days. 36 The route of Ag delivery probably plays a role, with mucosal routes in infection and intramuscular delivery with the current mRNA vaccines. Infection, with its sundry proteins in the intact virus, compared with the adynamic pre-fusion S in the vaccines, likely also manifests different immune responses. 284 In sum, while infection-induced immunity may be generally superior to vaccine-elicited immunity, 114 the virtues of immunity provided by vaccination in addition to protection from infection can be appreciated.

| Hybrid immunity confers better protection than vaccine or infection alone

Hybrid immunity, which is induced by prior infection in combination with vaccination, may drive stronger and longer-lasting protection against reinfection and severe disease compared to either immunity from infection or vaccination alone during 3-8 months from induction. During the Delta-virus surge in the summer of 2021, previously infected persons who received the vaccine were protected against reinfection and severe disease better than adults who received just two doses of the vaccine. 14,70,274,275,285,286 Although all immunity wanes, vaccination after infection induced a rapid rise in nAb titers with higher cross-variant neutralizing activity compared to healthy adults after just two vaccine doses 42,107,187-189,197,272,281,287-290 (Figure 2). Also, BTI significantly enhanced Ab responses to elevate IgA production (possibly owing to the intranasal route of Ag exposure) and broaden cross-nAb potency against variants. 291 Importantly, hybrid immunity appears the most protective against Omicron, which is the most mutated and most immune-evasive variant to date. 5,251-253,292,293 Overall, hybrid immunity appears to offer an immune response that is more robust and more durable, with better cross-variant neutralization, than immunity from vaccine or infection alone.

| Protection is not a single outcome but correlated with nAbs that wane

The long-term control of the COVID-19 pandemic depends on understanding durability of protection, which is based on memory induced by infection and/or vaccination. Protection against symptomatic reinfection and severe illness is normally assessed epidemiologically, since a single immune outcome is not available.
Immune protection is likely attributable to multiple aspects of memory responses, involving dynamic interplays of viral replication and pathogenesis with key humoral and cellular components that include Abs, B cells, and T cells (and their secreted products). 195,225,229,294,295 The current lack of standardized or consensus quantitative Ab (particularly nAb) assays across studies further complicates this assessment. 296 Although Ab responsiveness represents only a partial picture of the overall immune response, the magnitude of serum nAbs in most viral infections and vaccinations is highly predictive of protection against reinfection. 74,126,297 Immunologically, this effect has been observed in many adults. 263,305-308 Durability of the response to eCoV varies significantly and is also strain-dependent. 309 Most infections with eCoV, such as OC43, NL63, 229E, or HKU1, as well as SARS-CoV-1 and MERS-CoV, led to Ab responses that lasted for only several months, 310,311 although some wane within 12-18 months. 306-308 Thus, they were thought to be short-lived. However, one report showed persistence for up to 3 years while another suggested longer durability, though both were modeling studies. 68,312,313 SARS-CoV-1 nAbs appear 5-10 days post-symptom onset 314 but may wane even more rapidly than total Ag-specific Abs, raising questions about whether there is humoral durability after infection with any coronavirus. 312 During the initial SARS-CoV-1 outbreaks, nAbs were detected for 16-24 months. 306-308,312,315 Using linear mixed models, Ab levels associated with protection against reinfection were estimated to last 1.5-2 years. 74,316 In sum, it is not clear that life-long protection is maintained after infection with any coronavirus.

Notably, children develop robust and stable cross-reactive Abs beyond 12 months, which may be linked to their often milder or asymptomatic COVID-19. 327 Of note, recent in vitro studies using human FcγR-expressing cells suggest that these cross-reactive Abs may be worse than non-protective, since they can induce ADE of infection with SARS-CoV-2 virus in these cells. 325,328 Whether this in vitro phenomenon is relevant to patient disease is not known.

| Autoreactive features of COVID-19-induced ASC
Despite early speculation that disease severity might correlate with a failure of B cell development and antibody production, these concerns proved unfounded with early reporting of nAb titers across a spectrum of disease severity in the acute and recovery phases of COVID-19. 335 These serologically based studies were rapidly bolstered by cellular analyses identifying ASC expansion as a critical feature of patients with severe disease, 333 with expanded clonotypes confirmed as SARS-CoV-2 specific. 82 However, despite this specificity, these cells are also prone to self-reactivity, with clonotypes capable of binding nuclear antigens, naive B cells, and even glomerular basement membrane, a target often associated with pathology of the kidney and lung. Interestingly, these features appeared independently controlled. Individual clonotypes could display viral binding alone, self-reactivity alone, or even both. 82 Thus, these findings are consistent with a general reduction in negative selection thresholds and suggest that the documented emergence of autoreactivity in these patients is more likely a function of altered tolerance than the result of molecular mimicry or non-specific clonal activation.
In patients with mild illness, a lack of these low-mutation ASC clonotypes, together with lower levels of identified autoreactivity, suggests that these features of ASC selection are highly responsive to the local developmental microenvironment. In this model, the EF response pathway could be envisioned as an emergency response mechanism. Under highly inflammatory conditions (reflecting severe viral illness), the slow process of GC-based B cell selection would be suppressed or even suspended 84 in favor of EF activation for the purpose of rapid antibody production and infection control. Previous work in mice suggests that, even in these EF responses, positive selection is likely to guide clonotype inclusion, thereby ensuring that the overall ASC mobilization will be generally virus-specific. However, autoreactive clonotypes that have escaped central tolerance would also have the opportunity to respond under these circumstances, ultimately resulting in a mix of self-reactive and viral-reactive ASC pools. These mixed antibody responses, while actively and effectively participating in viral clearance, may also contribute to the overall inflammatory environment through innate activation and self-targeting, creating a feed-forward loop of EF response bias (Figure 4). Ultimately, this bias may result in mounting tissue damage. Perhaps more interesting is how engagement of multiple antigens due to poor negative selection might combine to drive low-affinity clones toward response inclusion, although extensive molecular and cellular study would be required to confirm this phenomenon.

Targets of these auto-Abs have been documented to include self-Ag commonly seen in autoimmune conditions. 342-345 Some investigators 352 have even hypothesized that anti-idiotype auto-Abs against SARS-CoV-2-specific Abs could structurally resemble SARS-CoV-2 S epitopes, with the potential to cause cellular dysfunction by engaging the cognate receptor ACE2. Notably, anti-ACE2 auto-Abs have been reported in COVID-19. 346 These proposed anti-idiotype Abs would also be able to induce ADCC if the appropriate Fc functionality is present. It is still unknown whether the generation of auto-Abs during acute infection correlates with PASC, but evidence is starting to emerge that patients with PASC harbor auto-Abs for longer than the acute infection process, 342,344,347 and that overall immune perturbations last longer than the acute period. 353 Interestingly, SARS-CoV-2 mRNA vaccination does not appear to trigger auto-Ab responses. 354 Forecasting who will eventually develop PASC could be helpful in anticipating complications and possibly directing treatment.

| Post-acute sequelae of SARS-CoV-2 infection (PASC) and the role of auto-Ab responses
Prediction models based on self-reported symptoms and immune parameters have been suggested. 355 One showed particular IgM and IgG3 subclass signatures, 356 and another utilized a complex multi-omics analysis to show that, during acute illness, auto-Abs and Th1-like responses, along with type 2 diabetes, SARS-CoV-2 viremia, and Epstein-Barr virus viremia, may anticipate PASC. 344 Interestingly, the study also showed a signature of atypical memory B cells, likely the T-bet-driven DN2 cells previously described in SLE and severe COVID-19. 81,118 Ultimately, a better understanding of the longevity of autoreactive ASC following EF-biased responses in acute COVID-19 may provide insight into one immune mechanism of PASC.
DATA AVAILABILITY STATEMENT Data sharing is not applicable to this article as no new data were created or analyzed in this study.
2022-07-09T06:17:26.717Z
2022-07-08T00:00:00.000
{ "year": 2022, "sha1": "4346560186b7dfe5c1e45a8230e8b6d4d6f4ef0f", "oa_license": null, "oa_url": null, "oa_status": "CLOSED", "pdf_src": "Wiley", "pdf_hash": "f6d26e1d5360a35c05d21f19608cf213e7e6c8e2", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
3101315
pes2o/s2orc
v3-fos-license
Notable epigenetic role of hyperhomocysteinemia in atherogenesis Atherosclerosis is associated with multiple genetic and modifiable risk factors. There is an increasing body of evidence indicating that epigenetic mechanisms also play an essential role in atherogenesis by influencing gene expression. Homocysteine is a sulfur-containing amino acid formed during methionine metabolism. An elevated plasma level of homocysteine is generally termed hyperhomocysteinemia. As a potential risk factor for cardiovascular diseases, hyperhomocysteinemia may initiate or promote atherogenesis through modification of DNA methylation. The underlying epigenetic mechanism remains unclear, with controversial findings. This review focuses on the epigenetic involvement and mechanisms of hyperhomocysteinemia in atherogenesis. Considering the potential beneficial effects of anti-homocysteinemia treatments in preventing atherosclerosis, further studies on the role of hyperhomocysteinemia in atherogenesis are warranted.

Introduction Epigenetics is defined as changes in phenotype and gene expression that occur without alterations of the DNA sequence [1]. By means of gene-environment interactions, epigenetic modifications can be acquired and/or inherited throughout the lifespan. There are three major epigenetic types: (1) DNA methylation, (2) histone modification, and (3) noncoding RNA regulation. DNA methylation, which occurs at cytosine residues of CpG dinucleotides, is mediated by DNA methyltransferases (DNMTs). During evolution, CpG dinucleotides have been progressively eliminated from the genome and are present at only 5% to 10% of their predicted frequency. Cytosine methylation appears to play a major role in this process because of the high susceptibility of 5-methylcytosine to undergo spontaneous deamination to yield thymine [2]. DNA methylation is the most well-known epigenetic mechanism and plays a critical role in the regulation of global and gene-specific expression [3]. Intriguingly, recent evidence has identified allele-specific DNA methylation (ASM) [4-6] and methylation quantitative trait loci (meQTLs) [7]. These novel concepts, for the first time, associate genetic variations with epigenetic changes. The interaction between genetic variants and DNA methylation also emphasizes the need for integrated study [8].

Atherosclerosis is a chronic inflammatory disease of large or intermediate arteries. It is pathologically characterized by infiltration of lipid particles, endothelial activation, macrophage infiltration, and foam cell formation. Foam cell formation, known as the "fatty streak", followed by smooth muscle migration and proliferation and extracellular matrix deposition, usually results in the formation of an atherosclerotic plaque, which may eventually rupture and cause a cardiovascular event such as stroke or myocardial infarction. The epigenetic impacts on cardiovascular diseases (CVD) have garnered considerable research interest since the initial suggestion of epigenetics in 1999 [9]. Atherogenesis has been proposed to result, at least partly, from diet-induced DNA methylation. Although genome-wide association studies (GWAS) have identified a number of single nucleotide polymorphisms (SNPs) associated with CVD, most of these SNPs had not been previously implicated in the pathogenesis of atherosclerosis and have modest biological plausibility [10]. It seems that GWAS-identified genetic variants account for only a small fraction of the heritability of atherosclerosis.
Hence, epigenetics is emerging in the "post-GWAS" era as the next clue in probing the mechanisms of atherogenesis. It is expected to provide the previously missing link among genes, environment, and disease. Hyperhomocysteinemia (HHcy) is an established risk factor for atherosclerosis [11-14]. HHcy can increase oxidative stress, activate inflammation, and promote vascular smooth muscle cell (VSMC) proliferation, all of which may contribute to the initiation of atherosclerosis [15,16]. Since homocysteine (Hcy) is a key component of the methionine recycling system, plasma Hcy levels may be associated with DNA methylation and other epigenetic modifications. Thus, a better understanding of the role of Hcy metabolism as a part of one-carbon metabolism is essential and may provide useful information in establishing efficacious strategies for preventing and treating atherosclerotic diseases.

Homocysteine Homocysteine (Hcy) is a sulfur-containing amino acid derived from methionine after demethylation via two intermediate compounds, S-adenosylmethionine (SAM) and S-adenosylhomocysteine (SAH) [17]. Methionine is an essential amino acid acquired mostly from the methionine recycling system and partly from the diet (Figure 1). It can combine with adenosine triphosphate to yield SAM, the most important methyl group donor in the human body. With the transfer of a methyl group, SAM is converted to SAH, and the SAM/SAH ratio may serve as an indicator of intracellular methylation capacity [18-20]. Most SAM-dependent methyltransferases, including DNA methyltransferases (DNMTs), can be inhibited by SAH, which has a higher affinity for methyltransferases than SAM [21]. SAH can be further hydrolyzed to Hcy and adenosine. This reaction is reversible, with a thermodynamic equilibrium that strongly favors SAH synthesis rather than hydrolysis [15]. Hcy is metabolized in vivo via two pathways: remethylation or transsulfuration. In the remethylation pathway, Hcy is transformed to methionine by the addition of a methyl group from 5-methyltetrahydrofolate or betaine. 5-Methyltetrahydrofolate is produced by the conversion of folic acid to 5,10-methylenetetrahydrofolate, which is finally metabolized to 5-methyltetrahydrofolate by the enzyme 5,10-methylenetetrahydrofolate reductase (MTHFR). In almost all tissue types, the cofactor vitamin B12 participates in remethylation with 5-methyltetrahydrofolate, whereas the reaction with betaine is restricted to the liver and is independent of vitamin B12. In the transsulfuration pathway, Hcy is converted to cystathionine by cystathionine β-synthase (CBS) and finally to cysteine, with vitamin B6 as a cofactor [22].

Moderate HHcy (15-30 μmol/L) usually reflects an impaired remethylation pathway. Possible causes include deficiency of folic acid or vitamin B12, or dysfunction of MTHFR. A point mutation at nucleotide 677 (677 C → T) of the MTHFR gene causes an alanine-to-valine substitution and is associated with reduced MTHFR enzyme activity. This is the commonest form of genetic HHcy [25]. Severe HHcy (>100 μmol/L) may be caused by homozygous CBS deficiency, homozygous thermolabile MTHFR, or defects in enzymes catalyzing vitamin B12 metabolism. An abnormal increase of plasma Hcy (>15 μmol/L) after a methionine load (100 mg/kg) may reflect impaired Hcy transsulfuration due to heterozygous CBS deficiency or vitamin B6 deficiency [22].
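The plasma thresholds above lend themselves to a simple classification rule. Below is a minimal sketch in Python, assuming the cut-offs quoted in this section (normal at or below 15 μmol/L, moderate 15-30 μmol/L, severe above 100 μmol/L); the "intermediate" label for the 30-100 μmol/L band is an assumption of this sketch, since that range is not named in the text.

```python
def classify_hhcy(fasting_hcy_umol_per_l: float) -> str:
    """Classify hyperhomocysteinemia (HHcy) from fasting plasma Hcy,
    using the thresholds quoted in the text above."""
    if fasting_hcy_umol_per_l <= 15:
        return "normal"             # <=15 umol/L
    if fasting_hcy_umol_per_l <= 30:
        return "moderate HHcy"      # 15-30 umol/L: usually impaired remethylation
    if fasting_hcy_umol_per_l <= 100:
        return "intermediate HHcy"  # label assumed; this band is not named in the text
    return "severe HHcy"            # >100 umol/L: e.g., homozygous CBS deficiency

print(classify_hhcy(22))   # -> "moderate HHcy"
print(classify_hhcy(120))  # -> "severe HHcy"
```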
HHcy is observed in approximately 5% of the general population and is associated with increased risk of CVD, autoimmune disorders, birth defects, diabetes mellitus, renal diseases, osteoporosis, neuropsychiatric disorders, and cancer [26]. Several studies have identified moderate HHcy as an independent risk factor for atherosclerotic diseases [27].

Hyperhomocysteinemia and DNA methylation In the methionine recycling system, SAH hydrolyzes to Hcy and adenosine. This reaction is reversible; hence, an elevated Hcy level may induce SAH synthesis. The increased SAH can, via negative feedback, inhibit SAM-dependent methyltransferases such as DNMTs. DNMTs mediate DNA methylation by transferring methyl groups from SAM to cytosine residues in a CpG dinucleotide context. Thus, dysfunction of Hcy metabolic pathways may result in DNA hypomethylation. There is increasing evidence that HHcy may be associated with DNA methylation levels in vivo. The pioneering work of Yi and colleagues [28] in 2000 showed that plasma total Hcy levels in healthy subjects were associated with plasma SAH, lymphocyte SAH, and lymphocyte DNA hypomethylation levels. In cardiovascular patients with concomitant HHcy, simultaneous elevation of plasma SAH [29] and disturbance of DNA methylation [30] were observed. This association was confirmed in animal studies [31,32]. In human somatic cells, methylated cytosine accounts for about 1% of total DNA bases and affects 70-80% of all CpG dinucleotides in the genome [33]. Unmethylated CpGs are grouped in clusters called "CpG islands" that are present in the 5′ regulatory regions of many human genes [34]. DNA methylation may influence the transcription of genes in two ways. The presence of a methyl group at a specific CpG dinucleotide site may directly prevent transcription factors from recognizing and binding to DNA [35]. Alternatively, methylated DNA may be bound by proteins known as methyl-CpG-binding domain proteins (MBDs). These MBDs can directly repress transcription, prevent the binding of activating transcription factors, or recruit enzymes that catalyze histone post-translational modifications and chromatin-remodeling complexes that alter the structure of chromatin and actively promote transcriptional repression [36]. In general, DNA methylation is associated with low gene activity. Global or gene-specific DNA methylation may contribute to altered gene expression and may lead to vascular damage.

Hyperhomocysteinemia, DNA methylation and atherogenesis Atherosclerosis is a dynamic process involving several cell types such as monocytes, endothelial cells, and smooth muscle cells (SMCs). A chronic inflammatory response with infiltration of macrophages and T cells, along with endothelial dysfunction, is also prominent in the pathogenesis of plaque formation. In response to inflammation or injury, production of ROS is enhanced in vascular cells. These changes all contribute to the initiation and progression of atherosclerosis. A variety of evidence indicates that epigenetic changes play an important role in atherogenesis besides genetic and environmental factors [37-41]. SMCs play a unique role in the development of atherosclerosis. Hypomethylation has been observed in proliferating VSMCs from advanced human atherosclerotic plaques and from atherosclerotic lesions in mice and rabbits [31,42,43]. Hypomethylation is correlated with increased transcriptional activity that may affect cellular proliferation and gene expression.
Using VSMCs in culture, Yideng et al. [44] observed hypomethylation of LINE-1 and Alu elements in medium with a high Hcy concentration. Their results indicated that HHcy may increase SAH and decrease SAM concentrations, change SAH hydrolase expression at the RNA and protein levels, and enhance the activity of DNA methyltransferase [45]. The researchers concluded that the dissimilar detrimental effects of Hcy at various concentrations may operate through different mechanisms. Mild or moderate HHcy may influence gene expression mainly through interference with methyl-group transfer metabolism, whereas severe HHcy may exert more injurious effects by increasing oxidative stress and promoting apoptosis and inflammation. HHcy-induced SAH elevation can promote VSMC proliferation and migration through an oxidative stress-dependent activation of the ERK1/2 pathway, which in turn can facilitate atherogenesis in apolipoprotein E (ApoE)-deficient mice [46]. Estrogen receptors (ERs) are expressed in SMCs and endothelial cells of the coronary artery and may play an important role in preventing atherosclerosis [47]. The protective effects of estrogens against oxidative stress may be mediated by ERα. A decreased ERα level can exacerbate atherosclerosis in men [48]. According to a study with VSMCs from human umbilical vein, Hcy can induce de novo methylation in the promoter region of the ERα gene and subsequently down-regulate the expression of ERα mRNA [49]. Hypermethylation of CpG islands located in the promoter region of the ERα gene is positively correlated with the plasma Hcy level and facilitates the initiation and development of atherosclerosis.

Jamaluddin et al. [50] revealed that HHcy may exert highly specific inhibitory effects on cyclin A transcription and endothelial cell (EC) growth through a hypomethylation-related mechanism that blocks cell cycle progression and endothelium regeneration. Cyclin A suppression has been proposed as a possible mechanism for inhibiting EC growth and, therefore, may increase the risk of CVD. Furthermore, HHcy-mediated dysfunction of the endothelial nitric oxide (NO) system is an important mechanism in atherosclerotic pathogenesis [51]. Dimethylarginine dimethylaminohydrolase (DDAH) is the key enzyme for degrading asymmetric dimethylarginine (ADMA), an endogenous inhibitor of endothelial nitric oxide synthase (eNOS). Using human umbilical vein endothelial cells (HUVECs), Zhang and colleagues observed that mildly increased Hcy concentrations (10 and 30 μmol/L) may induce hypomethylation, while higher Hcy concentrations (100 and 300 μmol/L) may induce hypermethylation in the promoter CpG island of the DDAH2 gene [52]. Correspondingly, the mRNA expression of DDAH2 increased at mildly elevated Hcy concentrations and decreased at higher concentrations. The inhibition of DDAH2 activity, the increase in ADMA concentration, the reduction of eNOS activity, and the decrease in NO production were all consistently related to the alteration of Hcy concentration. HHcy may influence the methylation of the DDAH2 gene and thereby indirectly influence the function of the NO system. This process may be an important pathway for the development of atherosclerosis involving the NO system. Moreover, a recent study suggested that hypermethylation of DDAH2 contributes to Hcy-induced apoptosis of ECs [53]. The DNA methylation inhibitor 5-azacytidine could attenuate the effect of Hcy on ECs. In mutant mice deficient in MTHFR, global DNA hypomethylation was shown in both heterozygous and homozygous knockouts [54].
Abnormal lipid deposition was observed in the proximal aorta of older heterozygotes and homozygotes, suggesting an atherogenic effect of HHcy. The ApoE gene has been associated with atherosclerosis. Researchers found that a clinically relevant Hcy level (100 μmol/L) may increase total cholesterol (TC), free cholesterol (FC), and cholesteryl ester (CE) levels and decrease ApoE mRNA and protein expression levels in cultured human monocytes. All these effects may be caused by increased DNA methylation of ApoE [55]. Peroxisome proliferator-activated receptors α and γ (PPARα and PPARγ), which act as lipid sensors and bind anti-atherosclerotic ligands with high affinity, also showed Hcy-induced promoter hypermethylation in monocytes [56]. Recently, Wang et al. [57] confirmed that DNA hypomethylation in the promoter region of the monocyte chemoattractant protein-1 (MCP-1) gene, acting through NF-κB/DNMT1, may play a key role in the formation of atherosclerosis under HHcy conditions in ApoE-deficient mice.

Cholesterol-loaded foam cells usually form the core of atherosclerotic lesions. ATP-binding cassette transporter A1 (ABCA1), which mediates the efflux of cellular cholesterol and phospholipids, is the rate-limiting step in lipid metabolism. Acyl-coenzyme A:cholesterol acyltransferase-1 (ACAT1) promotes accumulation of CE in macrophages, thereby resulting in foam cell formation, a hallmark of the early atherosclerotic plaque. In the study by Liang et al. [58], cultured monocyte-derived foam cells were incubated with clinically relevant concentrations of Hcy for 24 h. The number of foam cells and the cholesterol level increased, the mRNA and protein expression of ABCA1 decreased, and ACAT1 expression increased in the presence of Hcy. The DNA methylation level of the ABCA1 gene increased whereas ACAT1 DNA methylation decreased as Hcy concentrations were changed. Moreover, the results showed that DNMT activity and DNMT1 mRNA expression were increased by Hcy, indicating that DNA methylation regulates the expression of ABCA1 and ACAT1 via DNMT. These results suggest that Hcy-induced DNA methylation of ABCA1 and ACAT1 may play a role in their expression and in the accumulation of cholesterol in foam cells.

DNA methylation may reflect altered immune or inflammatory responses during atherosclerosis among cell types [59]. Given the established roles of inflammation and leukocytes in atherosclerosis, peripheral blood leukocytes represent a biologically relevant cell type for cardiovascular studies. Castro et al. demonstrated that patients with vascular diseases have a disturbed global DNA methylation status, which was associated with plasma Hcy levels [30]. High blood Hcy levels correlate with DNA hypomethylation and atherosclerosis and can lead to a 35% reduction in the DNA methylation status of peripheral blood lymphocytes. In contrast to these findings, Sharma and coworkers observed a significant positive correlation of global DNA methylation with plasma Hcy levels in patients with coronary artery disease [60]. They concluded that alterations in genomic DNA methylation and the association with CVD appear to be further accentuated by higher Hcy levels. After reviewing the literature on 135 genes either modulating or modulated by Hcy, Sharma et al.
concluded that elevated plasma Hcy may lead to atherosclerosis either by directly affecting lipid metabolism and transport, or by inducing oxidative stress and endoplasmic reticulum stress, decreasing the bioavailability of NO, and modulating the levels of other metabolites, including SAM and SAH [61]. In conclusion, aberrant global DNA methylation is only an index of the potential for epigenetic dysregulation. An increasing number of factors that can modify DNA methylation patterns have been identified. These include the rate of cell growth and DNA replication, chromatin accessibility, local availability of SAM, nutritional factors, the duration and degree of the hyperhomocysteinemic state, inflammation, dyslipidemias, oxidative stress, and aging [62]. The relationship between HHcy and global DNA hypomethylation may be masked in the clinical setting owing to the presence of these confounders, thereby possibly explaining some contradictory and counterintuitive findings reported to date. Another important aspect to consider is that DNA methylation is unequally distributed throughout the chromosomes of differentiated cells [63]. Thus, hypermethylated and hypomethylated regions can coexist in the genome. The global DNA methylation status need not correspond to the methylation status of specific genomic regions. In the presence of HHcy, more promoter regions of pro-atherogenic genes might be hypomethylated while those of anti-atherogenic genes are hypermethylated. Thus, pro-atherogenic genes gain activity while anti-atherogenic genes lose their protective function, ultimately accelerating the process of atherogenesis.

Hyperhomocysteinemia and histone modification Nucleosomes are the basic units of chromatin and are composed of DNA wrapped around a protein octamer containing two molecules of each canonical histone (H2A, H2B, H3, and H4). Nucleosomes may be irregularly packed and fold into higher-order structures that occur in diverse regions of the genome during cell-fate specification or in distinct stages of the cell cycle. The arrangement of nucleosomes can be altered by covalent modification of histones, including acetylation, methylation, phosphorylation, ubiquitination, and sumoylation [64,65]. Post-translational modifications of histones are facilitated by different enzymes. Different histone modifications remodel the conformation of the chromatin, affecting the accessibility of a gene to transcription factors and thereby regulating gene expression in a specific manner. Lysine residue acetylation and methylation are the most studied modifications. Histone acetylation of lysine residues in H3 and H4 tails, catalyzed by histone acetyltransferases (HATs), has been consistently associated with active transcription in several studies [66,67]. Deacetylation of histones by histone deacetylases (HDACs) correlates with DNA methylation and the inactive state of chromatin [39]. Histone methylation is also a major dynamic covalent epigenetic modification, with more complex patterns. A lysine residue can be mono-, di-, or tri-methylated. Depending on the position in the histone chain, methylated lysines are associated with transcriptional activation or suppression. For example, the H3K9 methylation state is strongly indicative of transcriptional repression and gene silencing, while the H3K4 tri-methylation state is associated with gene activation [64,68].
Histone methyltransferases (HMTs) catalyze the transfer of a methyl group from SAM to a lysine residue on either H3 or H4, while histone demethylases eliminate methyl groups. A number of studies have demonstrated that histone modifications play a role in atherosclerosis [39,69-71], but limited evidence is available on the involvement of HHcy in atherogenesis via histone modification. Since HHcy can inhibit SAM-dependent methyltransferases through elevated SAH, histone methylation might be influenced by inhibited HMTs. In a recent study in rats, diet-induced HHcy was found to disturb global protein arginine methylation in a tissue-specific manner and to affect H3 arginine 8 methylation in the brain, along with reduced ADMA [72]. Consistently, in CBS-deficient mice, protein arginine hypomethylation was present in the liver and brain, and asymmetric dimethylation of arginine 3 on H4 was markedly decreased in the liver [73]. Moreover, in research to elucidate the role of extracellular superoxide dismutase (EC-SOD) in the development of foam cells, accelerated DNA methylation of EC-SOD was induced by HHcy, along with increased binding of acetylated H3 and H4 in monocytes [74]. Hcy-induced histone hyperacetylation was also observed in astrocytes [75].

Therapy of hyperhomocysteinemia The reduction in Hcy and the increased availability of methyl compounds provided by vitamin supplementation, such as folic acid, may not be sufficient to reverse epigenetic changes induced by HHcy [76]. It is possible that individuals with HHcy have an "Hcy memory effect" due to epigenetic alterations, which continue to promote the progression of cardiovascular complications even after Hcy levels are lowered. The deleterious effects of prior, extended exposure to elevated Hcy concentrations might be long-lasting in target organs and genes, leading to underestimation of the benefit of Hcy-lowering therapies in CVD patients. Therapies targeting the epigenetic machinery as well as lowering circulating Hcy concentrations may be more efficacious in reducing the incidence of cardiovascular complications.

Conclusion HHcy may be regarded as a global DNA hypomethylation effector via SAH accumulation. While it is clear that epigenetic regulation is involved in atherogenesis, the relative importance of global versus gene-specific methylation is unclear, as is how Hcy participates in epigenetic modification. Global DNA hypomethylation may serve as a candidate mechanistic link between HHcy and atherosclerosis. Further studies are warranted to unravel the mechanisms that select specific genes for epigenetic regulation in the presence of HHcy during the atherogenic process.
2017-06-24T11:28:17.993Z
2014-08-21T00:00:00.000
{ "year": 2014, "sha1": "d815dc231d186f803823ff7995427e4f4c27d595", "oa_license": "CCBY", "oa_url": "https://lipidworld.biomedcentral.com/track/pdf/10.1186/1476-511X-13-134", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "d815dc231d186f803823ff7995427e4f4c27d595", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
41371679
pes2o/s2orc
v3-fos-license
Longitudinal Change in the Relationship between Fundamental Motor Skills and Perceived Competence: Kindergarten to Grade 2 As children transition from early to middle childhood, the relationship between motor skill proficiency and perceptions of physical competence should strengthen as skills improve and inflated early childhood perceptions decrease. This study examined change in motor skills and perceptions of physical competence, and the relationship between those variables, from kindergarten to grade 2. Participants were 250 boys and girls (mean age = 5 years 8 months in kindergarten). Motor skills were assessed using the Test of Gross Motor Development-2 and perceptions were assessed using a pictorial scale of perceived competence. Mixed-design analyses of variance revealed a significant increase in object control skills and perceptions from kindergarten to grade 2, but no change in locomotor skills. In kindergarten, linear regression showed that locomotor skills and object control skills explained 10% and 9% of the variance, respectively, in perceived competence for girls, and 7% and 11%, respectively, for boys. In grade 2, locomotor skills predicted 11% and object control skills predicted 19% of the variance in perceptions of physical competence, but only among the boys. Furthermore, the relationship between motor skills and perceptions of physical competence strengthened from early to middle childhood for boys only. However, it seems that forces other than motor skill proficiency influenced girls' perceptions of their abilities in grade 2.

Introduction Lower perceptions of physical or sport competence are associated with dropout from organized sport among children and youth [1] and avoidance of physical education [2], whereas higher perceptions of physical competence are consistently associated with greater participation in physical activity among children and youth [3-6]. With low levels of physical activity worldwide, it is important to understand the development of self-perceptions and motor skills in children and youth so participation can be enhanced. As children transition from early to middle childhood, hypothetically, two processes should strengthen the relationship between motor skill proficiency and perceptions of physical competence. First, motor skills generally improve during childhood [7-11], and second, perceptions of physical competence generally decrease as children develop cognitively [5,12-14]. Further, developmental theorists note that as children age and become more exposed to additional factors that influence their perceptions, they rely less on feedback from significant others (e.g., parents and caretakers) and more on that from other sources (e.g., peers) [15]. Overall, motor skills are less developed in early childhood than in middle childhood [9]; however, children's physical competence beliefs tend to be higher and less accurate in early childhood [13,15]. To date, the relationship between fundamental motor skill proficiency and perceived physical competence in early childhood remains unclear. Five studies have examined this relationship using identical tools [1,16-19], specifically, the Test of Gross Motor Development-2 (TGMD-2; Ulrich, 2000) to assess motor proficiency and the Pictorial Scale of Perceived Competence and Acceptance for Young Children [20] to assess perceptions of physical competence.
Three of the studies found significant positive relationships between locomotor skills and perceptions or between object control skills and perceptions, but Crane et al. [2] found that the relationship was significant only for boys. In contrast, two other studies did not find significant relationships between motor skill proficiency and perceptions of physical competence [16,19]. It is possible that the relationship in the Goodway and Rudisill [16] study differed because the children had extremely low motor skill scores. In slightly older children (8 years of age), Yu and colleagues [21] examined this relationship among children with developmental coordination disorder (DCD) and typically developing children. These authors reported that children with DCD had lower perceptions of their physical abilities and displayed lower motor proficiency levels than their typically developing peers. In addition, they found that physical coordination was a predictor of object control skills [21].

As they age, children's expanding cognitive abilities enable greater awareness of their own competence and performances [22] and allow them to compare their performances to their peers' performances, analyze the reasons for their successes and failures, and internalize feedback [13,23,24], resulting in less inflated perceptions. As children mature, those with less proficient skills will more likely have less favourable physical competence perceptions, and those with well-developed motor skills more favourable ones [14], which can significantly influence physical activity and sport participation patterns. Although no studies have tracked the relationship between fundamental motor skills and perceived physical competence from early to middle childhood longitudinally, Spessato and colleagues [19] assessed this relationship in a cross-section of children at different ages. As might be expected developmentally, Spessato et al. found that the relationship was not significant among the 4- and 5-year-old children, but it was significant among the 6-year-old children.

Sex-based differences in fundamental motor skill proficiency, as well as in the relationship between motor skills and perceptions, have also been identified. Results of a recent systematic review and meta-analysis of the correlates of gross motor competence revealed that sex was a correlate of gross motor competence in more than 40 studies worldwide [25]. Boys have consistently shown higher object control skills and a stronger relationship between object control skills and perceptions of competence compared to girls [10,12,17,18,26,27]. The evidence is less clear for locomotor skills. Several studies demonstrated that girls have better locomotor skill proficiency [10,12,18], while other studies found no differences [28]. For perceptions of physical competence, LeGear and colleagues [18] found that 5-year-old girls and boys had high perceptions of their physical competence, but that girls' perceptions were significantly higher than boys'. On the other hand, Robinson [17] found that 4-year-old boys had higher levels than girls, and Goodway et al. [16] found no sex-based differences among 3-4-year-olds. However, current models such as that presented by Stodden et al. [15] have not included sex differences. Part of the impetus for this study was to test aspects of Stodden and colleagues' [15] developmental model published in Quest. Stodden et al.
hypothesized that in both early and middle childhood, perceptions of competence would predict motor skill proficiency, but for slightly different reasons. In early childhood, positive perceptions may help in the development of motor skills because children do not really differentiate between the effort they put into their activity and the outcomes (i.e., success or failure), while in middle childhood, improvements in motor skill proficiency coupled with positive perceptions of their ability should encourage children to continue to practice and refine their skills, which in turn leads to more positive perceptions. However, Stodden et al. [15] did not include gender in their model. There is a need to further our understanding of the developmental trajectory of perceptions of physical competence and the relationship between perceptions and motor proficiency, especially from early to mid-childhood and between boys and girls. A longitudinal design was used to examine the relationship between fundamental motor skill proficiency and perceptions of physical competence from early childhood to the beginning of middle childhood. Focusing on children's transition from kindergarten to grade 2, four hypotheses were tested: (1) that motor skill proficiency would increase, (2) that perceptions of physical competence would decrease, (3) that sex-based differences would be evident for motor skills and perceptions of physical competence, and (4) that the relationship between motor competence and perceived competence would strengthen.

Participants Children were eligible to participate if they were attending one of eight consenting elementary schools in one school district in British Columbia, Canada. Census data from Statistics Canada indicate that, as of 2014, British Columbia families had a median income approximately 10% higher than the national median [18]. In the school district used in the present study, rates of vulnerability (as measured by the Early Development Index) are lower than, or equivalent to, those of other British Columbia school districts [13,19]. Two cohorts of children were examined. In kindergarten, participants for cohort one were recruited during the 2010-2011 school year (wave 1) and cohort two during the 2011-2012 school year (wave 2). These kindergarten cohorts were tracked to grade 2 using data collected between October and May of the 2012-2013 and 2013-2014 school years. Children were included in the longitudinal sample if they had complete motor skills and perceptions of competence data in both kindergarten and grade 2. The University of Victoria Human Research Ethics Board and the School District approved this study. Parents provided consent and children provided assent.

Materials Fundamental motor skills (six locomotor skills: run, jump, hop, slide, gallop, and leap; and six object control skills: throw, roll, kick, strike, catch, and dribble) were assessed using the TGMD-2 [9], a criterion- and norm-referenced test with established test-retest reliability and evidence of content, construct, and criterion validity [10]. Additionally, body mass index (BMI) was measured as a potential confounder. The Pictorial Scale of Perceived Competence and Acceptance for Young Children-preschool and kindergarten [29] and The Pictorial Scale of Perceived Competence and Acceptance for Young Children-first and second grade [30] were used to assess perceptions of physical competence.
Each scale consists of 24 items subdivided into four subscales (six statements each): Cognitive Competence, Physical Competence, Peer Acceptance, and Maternal Acceptance. Only the perceptions of physical competence subscale was used for this study. Both versions of the test contain the same number of questions, as well as the same subscales and script to be read; however, two skills were changed to be more age-appropriate. For the physical domain, two questions (bouncing a ball and jumping rope) were added by Harter and Pike [20] to the grade 1 and 2 version of the survey, while tying one's shoes and hopping on one foot were omitted. The surveys have acceptable reliability and validity for use with kindergarten and grade 2 children [20].

Procedures A team of 10 trained research assistants collected these data. The TGMD-2 was administered during scheduled physical education classes in accordance with the testing procedure outlined in the Examiner's Manual [9]. Each class was divided into four small groups prior to entering the gymnasium, with each group consisting of 3-5 children. Each child with consent was digitally recorded performing the skills at their station twice before moving on to the next station. After all skills were recorded, trained research assistants measured the height and weight of each participant to determine BMI. Due to scheduling and time constraints, data were collected over multiple visits to each school. The Pictorial Scale of Perceived Competence and Acceptance for Young Children [20] was administered by the research assistants individually with each child in a quiet area.

Data Treatment and Analyses The principal investigator scored the behavioural components of each of the 12 skills dichotomously using digital video. The number of components completed correctly for each subtest (locomotor and object control skills) was summed to provide a raw score (range 0-48). The items of both versions of The Pictorial Scale of Perceived Competence and Social Acceptance for Young Children assessing physical competence were scored on a scale of 1-4 for each item. Scores from the physical competence subscale questions were summed (six items total) to provide a raw score out of 24 (range 6-24). Descriptive statistics were then computed for locomotor skills, object control skills, and perceptions of physical competence in kindergarten and grade 2. Specifically, means and standard deviations, as well as percent of maximum possible score (POMP) [31], were calculated. POMP was calculated using the following equation: POMP = ((observed score − minimum possible score) / (maximum possible score − minimum possible score)) × 100. To test whether motor skills improved (Hypothesis 1) and perceptions decreased (Hypothesis 2), we performed a mixed-design analysis of variance to examine the change in perceived competence, locomotor skills, and object control skills over time, using sex as the between-subjects factor and grade level as the within-subjects factor. Further, paired-sample t-tests were conducted to examine change or stability in each of the 12 skills. Pearson product-moment correlation coefficients were computed to test the relationships between locomotor skills and perceptions of physical competence and between object control skills and perceptions of physical competence for boys and girls in kindergarten and in grade 2 (Hypotheses 3 and 4).
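To make the scoring concrete, here is a minimal sketch in Python of the raw-score and POMP calculations described above; the item scores are hypothetical illustrations, not data from the study.

```python
def raw_score(item_scores):
    """Sum per-item scores into a subscale raw score."""
    return sum(item_scores)

def pomp(observed, minimum, maximum):
    """Percent of maximum possible score (POMP), per Cohen et al. [31]:
    POMP = (observed - min) / (max - min) * 100."""
    return (observed - minimum) / (maximum - minimum) * 100

# Perceived physical competence: six items scored 1-4, so the raw range is 6-24.
perception_raw = raw_score([4, 3, 4, 4, 3, 4])  # hypothetical child -> 22
print(round(pomp(perception_raw, 6, 24), 1))    # -> 88.9

# TGMD-2 subtest raw scores range 0-48.
print(pomp(27, 0, 48))                          # -> 56.25, cf. the ~57-67% by grade 2
```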
Further, a series of linear regression analyses were conducted to predict perceptions of physical competence (as the outcome variable) from locomotor and object control skills (predictor variables) in kindergarten and grade 2 for boys and girls separately. All statistical analyses were conducted using IBM SPSS (Version 23.0, IBM Corp., Armonk, NY, USA) for Windows [32], and the alpha value for rejecting the null hypothesis was set at <0.05 [33].

Results Two hundred and fifty children (of 780 measured) had complete motor skills and perceptions of competence data in both kindergarten and grade 2 and were included in the longitudinal analysis. The mean age for children in kindergarten and grade 2 was 5.8 ± 0.3 years and 7.7 ± 0.4 years, respectively. Descriptive statistics grouped by sex for locomotor skills, object control skills, and perceptions of physical competence are reported in Table 1 (note: POMP = percent of maximum possible score, range 0-100). The raw and POMP motor skill scores indicated that the children's skills in kindergarten were in the middle of the range of possible scores, but had increased to approximately 57% to 67% of the maximum possible by grade 2. Perceived physical competence scores were high in both grades. The mixed analyses of variance revealed a significant improvement in object control skill raw scores from kindergarten to grade 2, Wilks' lambda = 0.873, F(1, 248) = 36.129, p < 0.001, ηp² = 0.13, as well as a significant effect for sex, F(1, 248) = 29.992, p < 0.001, ηp² = 0.11. Boys had significantly higher object control skills compared to girls in both kindergarten and grade 2 (see Table 1). There was no overall improvement in locomotor scores from kindergarten to grade 2, Wilks' lambda = 0.994, F(1, 248) = 1.611, p = 0.206, ηp² = 0.006. However, there was a significant effect for sex, revealing that girls had significantly higher locomotor skills compared to boys in both kindergarten and grade 2, F(1, 248) = 8.806, p = 0.003, ηp² = 0.04. There was also a significant overall increase in perceived physical competence from kindergarten to grade 2, as evidenced by a Wilks' lambda of 0.983, F(1, 248) = 4.257, p = 0.040, ηp² = 0.02. In addition, there was a significant effect of sex on perceived physical competence, F(1, 248) = 11.369, p = 0.001, ηp² = 0.044, revealing that girls had higher perceptions of physical competence than boys in both kindergarten and grade 2.

Change or stability of each of the 12 TGMD-2 skills is presented in Table 2. Across both the boys and the girls, there was significant improvement in seven of the 12 skills (run, hop, leap, slide, strike, dribble, and catch) from kindergarten to grade 2. Sex-based differences were evident, with catching and rolling improving among boys and not girls, and the gallop, leap, and jump improving among girls and not boys. In kindergarten, perceived physical competence was significantly related to both locomotor skills and object control skills for both girls and boys (see Table 3). In grade 2, significant relationships between perceived physical competence and locomotor skills were found for boys only, whereas significant correlations between perceived physical competence and object control skills were found for both. Because there were significant differences between the boys' and girls' motor skills and perceptions of physical competence, separate linear regression analyses were calculated to predict perceptions of physical competence.
After controlling for age in months and BMI, the regression models showed that locomotor skills and object control skills explained 10% and 9% of the variance, respectively, in perceived competence for girls in kindergarten, and 7% and 11%, respectively, for boys; in grade 2, locomotor skills predicted 11% and object control skills predicted 19% of the variance in perceptions of physical competence, but only among the boys. To examine the generalizability of the findings of this study, we tested whether the 250 children in the longitudinal sample were representative of children not included in the longitudinal sample in both kindergarten (n = 137) and grade 2 (n = 143) from the same schools. Using independent t-tests, children's locomotor skills, object control skills, and perceptions of physical competence scores were compared. No significant differences were found between the two groups for locomotor raw scores, t(386) = −0.417, p = 0.677, object control raw scores, t(386) = 0.405, p = 0.685, or perceptions of physical competence, t(386) = −0.613, p = 0.541, in kindergarten. Similarly, no significant differences were found between the two groups for locomotor raw scores, t(391) = 1.587, p = 0.113, object control raw scores, t(391) = 1.650, p = 0.653, or perceptions of physical competence, t(391) = 0.457, p = 0.459, in grade 2. These findings suggest that the longitudinal sample was representative of the larger cross-sectional samples.

Table 3. The relationship between motor skills and perceived physical competence.

Discussion Examining the relationship between perceived and actual motor skill proficiency as children transition from early to middle childhood provides important developmental information about how perceptions are forming and when they begin to become more accurate. We tested aspects of a developmental model conceptualized by Stodden and colleagues [15]: that the relationship between motor skill proficiency and perceptions of physical competence would strengthen as children transitioned from early to middle childhood. Further, we extended this thought by including gender as a factor affecting these relationships. Consistent with the principle that motor development is cumulative [34,35], the children's locomotor and object control skills improved from kindergarten to grade 2. The POMP scores showed that locomotor skills improved 7% for boys and 10% for girls, while object control skills improved by 15% and 14% for boys and girls, respectively. More detailed analyses of individual skills revealed that girls improved in all six locomotor skills and most object control skills, with the roll and throw being the exceptions (Table 2). The boys significantly improved in all six object control skills and demonstrated significant improvements in three of the six locomotor skills; the boys' gallop, leap, and jump did not change (Table 2). These findings are consistent with previous research showing that boys perform better with object control skills [10,12,17,18,26,27]; they also support studies showing that girls have more mature locomotor skills than boys of the same age [10,12,18]. The locomotor skills (gallop, leap, and jump) and object control skills (roll and throw) that did not improve in our study may reflect the types of activities children are engaging in and practicing. For example, both the boys and girls significantly improved their kicking and running, two predominant skills associated with soccer. In Canada, soccer is the most popular team sport for children and youth [36]. One in four respondents in a nationwide survey reported having at least one child living in the household playing soccer on a regular basis [37], and more than 750,000 children in Canada are part of an organized soccer program [36].
The impact that experience and/or practice can have on motor skill proficiency is evident in Williams and colleagues' [38] study on the throwing patterns of boys and girls. These authors found that young boys and girls did not differ in throwing velocity when using their non-dominant hand. However, boys threw significantly harder than girls when using their dominant hand, which demonstrated that the large differences in throwing when children use their dominant hand could be attributed to practice [38]. Opportunities to engage in activities and play often conform to stereotypical gender norms, reflecting sociocultural influences and beliefs about how boys and girls should act [34]. There is some evidence to suggest that girls in the school district in this study play fewer team sports and participate in more dance than boys [39]; there is also evidence from the broader literature indicating that girls may play more of a spectator role in territory-invasion activities such as soccer [40]. These participation patterns likely have an effect on the specific motor skills that boys and girls develop, and as Stodden et al. [15] suggested, these motor skills are the tools that enable participation.

Perceptions of physical competence raw scores were high in kindergarten and in grade 2, which supports previous findings reporting higher levels of perceived physical competence in early childhood [1,16,18,19] and middle childhood [19]. Our second hypothesis was that perceptions of physical competence would decrease from kindergarten to grade 2. Although the change in perceived physical competence was statistically significant from kindergarten to grade 2, the effect size (ηp² = 0.02) was small [41], equating to an increase in perceived physical competence raw score of approximately one on a scale ranging from 6 to 24. Although the change was small, it was surprising that perceptions of physical competence increased for both boys and girls, which is not consistent with developmental theory [13-15]. Similarly, the children's fundamental motor skills significantly increased from kindergarten to grade 2. This is one of the very few studies to examine children's perceptions of physical competence during this transitional period of development. Spessato et al. [19] found a modest significant relationship between perceptions and motor skill proficiency among 6-year-old children, and a relationship of similar strength that was not significant among 7-year-old children. However, their sample was not stratified by sex. Our findings show that the relationships between fundamental motor skills and perceived physical competence were significant for both boys and girls in kindergarten, and became stronger for boys by grade 2. The strengthened relationship for boys may reflect a positive internalization of their increased proficiency. For girls, however, the relationship weakened as they aged. The girls' perceptions of physical competence increased, while the relationship with motor competence decreased. This suggests that factors other than motor skills may be influencing the formation of girls' perceptions of physical competence. It is possible that the TGMD-2 did not capture a specific range of motor skills that influence girls' perceptions of competence.
Previous studies have suggested that slightly older girls may discount their abilities and therefore have lower perceptions of physical competence [2,42]. However, this was not the case in this study, where girls' perceptions of physical competence stayed high in grade 2 and were significantly higher than the boys'. Data previously collected from grade 2 students in the same eight schools showed that girls participated in significantly more swimming, gymnastics, and formal and informal dance compared to boys [43]. The skills used in these types of activities are not well captured in the TGMD-2; therefore, the potential to see the relationship between motor skill proficiency and perceived physical competence is likely diminished.

A limitation of longitudinal research and the current study is the loss to follow-up through study withdrawal or other factors (e.g., switching schools). However, our comparisons of the cross-sectional and longitudinal samples did not reveal significant differences in any of the variables, which suggests that the longitudinal sample in this study was representative of the larger sample of kindergarten and grade 2 children. Another limitation of the study is that the survey used to assess perceived physical competence had few questions pertaining to specific object control skills (no questions in kindergarten and one question in grade 2). As a result, stronger relationships may have been established if the tool used in this study more closely reflected the motor skills tested. In the future, researchers may wish to also measure children's perceptions of their abilities in the activities they participate in. Finally, the physical activity levels of children were not measured in this study, which may have impacted the relationship between motor competence and perceived physical competence, and this should be taken into consideration when interpreting the findings.

Conclusions Children's perceptions of their physical competence are important in the development of the self and ultimately can encourage or discourage children and youth's participation in physical activities and sport [14,44]. Our findings suggest that the change in the relationship between fundamental motor skill proficiency and perceived physical competence as children transition from early to the beginning of middle childhood is statistically different for boys and girls. Moreover, factors other than motor skills may be influencing girls' perceptions. Explanatory models may need to be altered to represent the influence of sex. Efforts need to be made to better understand what factors influence girls' perceived physical competence and what that means for girls' cognitive, social, and physical development.
A Study of Association of Premature Graying of Hair and Osteopenia in North Indian Population
Context: Hair graying is one of the signs of human aging and is caused by a progressive loss of pigmentation from growing hair shafts. Studies have shown a correlation of early hair graying with osteopenia, indicating that premature graying could serve as an early marker of osteopenia. Aim: To compare the degree of osteopenia in individuals with premature graying of hair (PGH) and in individuals without PGH. Settings and Design: We conducted an observational, case-control study among 132 healthy individuals between 18 and 30 years of age. Subjects and Methods: A detailed history and examination of PGH were taken. Bone mineral density (BMD) was assessed using a Furuno CM-200 ultrasound bone densitometer. Statistical Analysis: SPSS 21 software was used, and the data were summarized in the form of mean ± standard deviation for quantitative values and percentages for qualitative values. The Chi-square test, Student's t-test, analysis of variance, and other appropriate tests were applied for comparison, and P < 0.05 was considered statistically significant. Results: PGH was present in 82 (62.1%) cases, whereas osteopenia was present in 56 (42.4%) participants. The mean age of onset of graying of hair among the cases was 20.62 ± 3.74 years. A higher age group of 25-30 years (P = 0.016) and family history of PGH (P < 0.001) were significant risk factors for PGH. The mean BMD of the case group was 0.76 ± 1.00 and that of the control group was 0.68 ± 1.11, but the difference was not statistically significant (P = 0.649). Conclusion: The study concluded that there is no significant association between osteopenia and PGH.
INTRODUCTION
Hair graying, one of the prototypical signs of human aging, is caused by a progressive loss of pigmentation from growing hair shafts. In normal aging, the onset of hair graying occurs at 34 ± 9.6 years of age in Caucasians and 43.9 ± 10.3 years in African Americans. Hair graying represents an impaired ability of melanocytes to maintain normal homeostasis and replenish melanin, the pigment for newly growing hair. [1] Hair is said to be gray prematurely only if graying occurs before the age of 20 years in Caucasians, before 25 years in Asians, and before 30 years in Africans. [2] A gradual loss of bone mass occurs with aging, leading to osteopenia and osteoporosis. The diagnostic difference between osteopenia and osteoporosis is based on the measure of bone mineral density (BMD). Osteopenia and osteoporosis markedly increase the risk of skeletal fractures. [3] Whether hair graying, early or otherwise, is a risk factor/predictor for osteopenia is controversial. Previous studies have shown a correlation of early hair graying with osteopenia, indicating that premature graying could serve as an early marker of osteopenia. [4,5] However, some studies have shown that there is no correlation between the two. [6,7] We conducted a case-control study to compare the degree of osteopenia in healthy individuals with premature graying of hair (PGH) and in those without PGH.
SUBJECTS AND METHODS
We conducted an observational case-control study (from February 2018 to April 2019), and a total of 132 normal individuals, who accompanied the patients attending the outpatient department of the Department of Dermatology, Era's Lucknow Medical College and Hospital, were enrolled. We included adults of either sex in the age group of 18-30 years who were willing to participate in the study. The participants were divided into two groups: cases (n = 82) and controls (n = 50). Individuals who had graying of hair were included in the case group. The control group included age-, sex-, and body mass index (BMI)-matched individuals without graying of hair. The exclusion criteria were: recent or old fracture, chronic debilitating disease, malabsorption syndrome, gross malnutrition, arthritis, hormonal disorders, neurologic disorders, and long-term treatment with systemic corticosteroids, chloroquine, hormone replacement therapy >6 months, calcitonin, bisphosphonates, antiepileptics, or psychiatric drugs. A history of the age of onset of PGH, intake of vegetarian and nonvegetarian food, amount of milk per day, smoking, and back pain was taken from all individuals. Family history of PGH was also taken. All the individuals were examined for PGH, and the hair whitening score (HWS) was calculated (1: pure black; 2: black > white; 3: black = white; 4: white > black; and 5: pure white). [8] BMD was assessed using the Furuno CM-200 ultrasound bone densitometer [Figure 1]. This machine uses ultrasound (500 kHz) to measure the speed of sound in the heel (calcaneus). The machine's %CV (coefficient of variation) is 0.5%, and it provides accurate results by compensating for the heel temperature. A T-score of BMD between -1.0 and -2.5 was considered as osteopenia. [9] The data were analyzed using the Statistical Package for the Social Sciences, Version 21.0 (IBM Corporation, Armonk, New York, USA). The Chi-square test and Student's t-test were used to compare the data. Logistic regression analysis was performed to see the simultaneous effect of the various explanatory variables. The confidence level of the study was kept at 95%; hence, P < 0.05 indicated a statistically significant association. Ethical clearance was taken from the institutional ethical committee for the purpose of the study.
RESULTS
The mean BMD of the case group was 0.76 ± 1.00 and that of the control group was 0.68 ± 1.11, but the difference was not statistically significant (P = 0.649) [Table 3]. The proportion of cases having low BMD was slightly higher than that in the control group (odds ratio = 1.54), though the risk was not found to be significant [Figure 3]. Among the biodemographic factors, a higher age group of 25-30 years (P = 0.016) and family history of PGH (P < 0.001) were found to be significant risk factors for PGH. However, the intake of vegetarian/nonvegetarian food (P = 0.016), the amount of milk intake per day (P = 0.008), history of back pain (P = 0.031), and smoking (P = 0.401) did not alter the risk of PGH.
DISCUSSION
Hair graying, scientifically termed canities, is a physiological phenomenon that occurs with chronological aging, regardless of gender or race. When graying begins before the usual age of onset, it is termed PGH or premature canities. [10] PGH has been proposed as a clinical marker of osteopenia in various studies, but the association of PGH with osteopenia has not been validated in large studies.
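As a rough illustration of the classification and risk arithmetic used in the Results above, the following minimal Python sketch classifies a BMD T-score according to the WHO-style cutoffs and computes an odds ratio from a 2x2 table. The helper names are ours, and the 2x2 split is hypothetical (chosen only to be consistent with the reported marginals of 82 cases, 50 controls, 56 participants with osteopenia, and an odds ratio of about 1.54); this is not the authors' analysis code.

# Minimal sketch (not the authors' SPSS analysis): T-score classification
# and odds ratio from a 2x2 exposure/outcome table.
def classify_t_score(t_score):
    # WHO-style cutoffs: osteopenia lies between -1.0 and -2.5.
    if t_score >= -1.0:
        return "normal"
    if t_score > -2.5:
        return "osteopenia"
    return "osteoporosis"

def odds_ratio(a, b, c, d):
    # a = cases with low BMD, b = cases without,
    # c = controls with low BMD, d = controls without.
    return (a * d) / (b * c)

print(classify_t_score(-1.8))                # -> osteopenia
print(round(odds_ratio(38, 44, 18, 32), 2))  # -> 1.54 (hypothetical split)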
In our study of 132 participants in the age group of 18 to 30 years, graying of hair was present in the 82 (62.12%) cases and absent in the 50 (37.88%) controls. The mean age of onset of graying of hair was 20.62 ± 3.74 years. A similar age distribution of PGH was observed by Shin et al. [11] (20.2 ± 1.3 years). However, Daulatabad et al. [12] observed PGH in an age group as low as 11.6 ± 3.6 years. Out of the 132 individuals, osteopenia was present in 56 (42.4%) participants and absent in 76 (57.6%) participants. The proportion of osteopenia in cases with PGH was slightly higher than in the individuals without PGH; however, the risk was not found to be significant. Our result was similar to that of Chakrabarty et al., [13] who reported that there was no significant association between PGH and serum calcium concentration. Other studies reported a significant association between low calcium levels and PGH. [14,15] In our study, the mean BMD difference between cases and controls was not statistically significant (P = 0.649), which was similar to the observations of Orr-Walker et al. [16] However, Rosen et al. [4] had reported that individuals with PGH were 4 times more likely to have osteopenia than individuals without graying. The possibility that the association found between osteopenia and PGH in previous studies was purely coincidental cannot be denied. It is proposed that the etiopathogenetic mechanisms of the two conditions follow different paths and are not related to each other.
We observed that family history is a significant risk factor for PGH. Similar observations have been recorded in previous studies. [11,12,17] It is proven that PGH has genetic roots. It follows an autosomal dominant inheritance [18] but has a multifactorial etiology. The various causes include stress, autoimmune conditions, inflammation, environmental factors, nutritional deficiencies, and premature aging syndromes. It has been postulated that PGH could be due to exhaustion of the melanocytes' capability to produce pigment for the hair after a defined age.
CONCLUSION
To conclude, we did not find any significant association between osteopenia and PGH. Also, a higher age group (25-30 years) and family history of PGH were found to be significant risk factors for PGH. The limitations of our study include the small sample size and the lack of long-term follow-up. The number of controls was smaller due to logistics (limited availability of controls within the stipulated time frame of the study). This could have caused some reduction in the power of the study; despite this, the power of the study remained above the cutoff value of 80%, and the findings are not altered markedly. Since premature canities has a strong bearing on patients' sociocultural acceptance and self-esteem, prospective studies on a large scale are warranted for a better understanding of the etiopathogenesis of this condition.
Financial support and sponsorship: Nil.
Conflicts of interest: There are no conflicts of interest.
On Distributional Solutions of a Singular Differential Equation of 2-order in the Space K'
The main purpose of this work is to describe all the zero-centered solutions of a second-order linear singular differential equation with the Dirac delta function (or one of its derivatives) on the right-hand side, in the space K'. In the considered equation, the coefficients and the exponents of the polynomials multiplying the unknown function and its derivatives up to second order are real numbers and natural numbers, respectively. We conduct the investigation for both the Euler case and the left Euler case of this equation, under certain conditions on the relationships between the parameters A, B, C, m, n and r. In each of these cases, we look for the zero-centered solutions by substituting the form of the particular solution into the equation. We then determine the unknown coefficients and formulate the related theorems describing all the solutions in the cases investigated.
Introduction
The importance of differential equations is well known, as these equations describe many physical phenomena in our daily life. One also understands that it is not easy, in some specific cases, to solve certain kinds of differential equations, even those of first order. Solving differential equations in spaces of generalized functions such as K' and S' is always challenging, and various scientific works have been devoted to this topic. We recall that the theory of distributions was established by the eminent French mathematician Laurent Schwartz in 1945, building on ideas already in germ in the works of the great Russian mathematician Sobolev in the 1930s. It is also well known that describing quantities extended in space by functions of several variables, and expressing the laws of physics in terms of partial differential equations, was a great advance in the study of these phenomena. Turning to some specific notions, sometimes even the concept of weak derivatives is not sufficient, and the need arises to define derivatives that are not functions but more general objects; measures and derivatives of measures then enter the picture. We underline the fact that, for the purpose of setting up the rules for a general theory of differentiation where classical differentiability fails, Schwartz brought forward around 1950 the concept of distributions: a class of objects containing the locally integrable functions and allowing differentiation of any order. In many scientific works it is mentioned that distributional solutions, specifically in the form of a series of the Dirac delta function and its derivatives, have been used in several areas of applied mathematics, such as the theory of partial differential equations, operational calculus, and functional analysis, and in physics, such as quantum electrodynamics. We advise readers to see the papers [15,16] for more details. In our previous works, we have already used the Fourier transform and its inverse, applied to singular linear differential equations of certain forms, to describe and obtain all the generalized-function solutions of the considered equations; see [6,11,13]. In the same direction, we can quote, among many others, Nonlaopon et al. [17], who used the Laplace transform technique to study the generalized solutions of certain differential equations. Here we consider the following singular linear differential equation.
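For orientation, a plausible form of equation (1), reconstructed here from the abstract's description (constant real coefficients A, B, C, natural exponents m, n, r, and the Dirac delta function or one of its derivatives on the right-hand side) and offered as an assumption rather than a quotation, is

\[
A\,x^{m}\,y''(x) + B\,x^{n}\,y'(x) + C\,x^{r}\,y(x) = \delta^{(s)}(x),
\qquad A, B, C \in \mathbb{R}, \quad m, n, r, s \in \mathbb{N}.
\]

Under this reading, the Euler case would correspond to the exponent relations m = r + 2 and n = r + 1, for which all three terms on the left-hand side scale identically under the dilation x -> λx.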
This is the first step we undertake to see in which way we can generalize, step by step, the results obtained in our previous research for a linear singular differential equation of first order; see [6]. We structure this paper as follows: in Section 2 we recall some fundamental, well-known concepts of distributions (generalized functions). Section 3, presenting the main results of the paper, first describes the Euler case and, second, is devoted to the investigation of the solvability (existence of zero-centered solutions) of the considered homogeneous equation in the situation called the left Euler case. We conclude the paper in Section 4.
Preliminaries
Before we proceed to our main results, the following definitions and concepts, well known from the theory of generalized functions, are required. We also recall the notions of the Fourier transform and its inverse, applied when looking for solutions of differential equations in our previous research; for more details, see [6]. We briefly review these important notions of the Fourier transform, its properties, and generalized functions centered at a given point (for a detailed study, we refer to [2,6,9,14]). We recall that K denotes the space of test functions, that is, of infinitely differentiable functions on R with compact support, and K' the space of generalized functions on K. For a test function in K, its Fourier transform is defined by the usual formula. For the Fourier transform of a generalized function, many properties analogous to those of the Fourier transform of test functions are preserved, in particular the formulas relating differentiability and decay. Next, we need the following assertions, which can be found with their proofs in books on the theory of generalized functions; see, for example, [2,3,9]. Lemma 2.1 concerns the product of a smooth function with a derivative of the delta function, which expands into a finite combination of lower-order delta derivatives whose coefficients are constants; then the corresponding formula takes place. As a consequence of Lemma 2.1, in the case of a monomial N(x) = x^p, we obtain the following assertion. The proof of this lemma can be found in special mathematical books related to the theory of distributions; see also [2,3,9]. Sometimes we also use in our investigations the following definition: a solution of the considered equation is a generalized function in K' that satisfies the corresponding equality in the space K'. For the proof of this lemma it is sufficient to apply Definition 2.2, take the Fourier transform of both sides of equation (8), and then, applying the inverse Fourier transform, reach the needed result. Next, we consider a more complicated case, when at least one of the two conditions (12) is violated. First of all, let us consider the case in which only one of the two Euler relations between the exponents holds, and we name this situation in the following way.
B. The left Euler case. In this section we consider the case in which the Euler relation for the first-derivative term holds while the remaining exponent condition fails, and we call this situation of equation (1) the left Euler case. Next, let us calculate directly: for that value of the leading coefficient, one term in the summation disappears. Therefore we obtain the recurrence relations. Note immediately that the right-hand sides in (30)-(31) are similar; therefore, from these recurrence relations it is easy to obtain the general form of the coefficients. This allows us to write the result defined by formula (17). The theorem is proved. The proof in case c) is deduced from Theorem 3.3; in case b) it follows from the vanishing of the corresponding Fourier image, by the realization (35) and the negativity of the associated exponent; and in case a) it is obvious.
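To make the expansion invoked around Lemma 2.1 and the substitution step concrete, the following is a minimal worked identity, stated under the assumption that Lemma 2.1 is the standard product rule for a smooth function against a derivative of the delta function (the form usual in this literature):

\[
N(x)\,\delta^{(m)}(x) = \sum_{k=0}^{m} (-1)^{k} \binom{m}{k}\, N^{(k)}(0)\, \delta^{(m-k)}(x),
\]

which, for the monomial N(x) = x^{p}, reduces to

\[
x^{p}\,\delta^{(m)}(x) =
\begin{cases}
(-1)^{p}\,\dfrac{m!}{(m-p)!}\,\delta^{(m-p)}(x), & p \le m, \\
0, & p > m.
\end{cases}
\]

Substituting a zero-centered ansatz y(x) = \sum_{k} c_{k}\,\delta^{(k)}(x) into the equation therefore converts each polynomial-coefficient term into a finite combination of lower-order delta derivatives; equating the coefficients of each \delta^{(j)} yields recurrence relations of the kind referred to in (30)-(31).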
Conclusion
In this paper we have completely investigated the existence of the zero-centered solutions of equation (1) in both of the cases called the Euler case and the left Euler case. We looked for the desired solutions by first substituting a particular solution, expressed with unknown coefficients, into the initial equation (1); in Theorem 3.1 we thereby obtain the necessary and sufficient conditions for the existence of zero-centered solutions of the equation in the Euler case. Next, all the solutions in this case are described in Theorem 3.2, in connection with the two possibilities mentioned. Investigating the left Euler case, we bring out the existence of non-trivial solutions in the form of the Dirac delta function and its derivatives up to an order determined by the exponents. The main results concerning this case, depending on the relationships between the parameters, are formulated in Theorems 3.3 and 3.4. From the obtained results, it is clear that it remains challenging to generalize such investigations to similar types of differential equations in the general situations called the left Euler case or the right Euler case, when the corresponding summation conditions on the parameters are realized.
Computational Quantum Chemical Study, Drug-Likeness and In Silico Cytotoxicity Evaluation of Some Steroidal Anti-Inflammatory Drugs
This paper presents a theoretical study of ten anti-inflammatory steroids (AIS) aimed at understanding the relationship between the structure and activity of the drugs, the pharmacokinetic parameters responsible for bioavailability and bioactivity, and, finally, the toxicity evaluation. DFT calculations at the B3LYP/6-31G(d,p) level have been used to analyze the electronic and geometric characteristics deduced for the stable structures of the compounds. Moreover, using the frontier molecular orbital (FMO) energies, MEP surface visualizations, and the density-based descriptors such as chemical potential (μ), electronegativity (χ), hardness (η), and softness (σ), the chemical stability was determined. Furthermore, in silico studies showed that the Lipinski rules are satisfied, which means that these AIS are expected to have a high probability of good oral bioavailability. On the other side, the bioinformatic Osiris/Molinspiration analyses of the relative cytotoxicity of these derivatives are reported in comparison to cortisol. In fact, it is shown that almost all of these compounds are non-toxic, except for mometasone, which presents a great risk of tumorigenicity and of effects on reproduction, with a slightly mutagenic structure due to its two chlorine atoms. From all the results obtained, we can conclude that fluticasone has the best physicochemical properties, which explains its high efficiency.
INTRODUCTION
Corticosteroids are a group of structurally related molecules that include natural hormones secreted by the hypothalamic-pituitary-adrenal axis and synthetic drugs [1]. The endogenous corticosteroids have the function of regulating the physiological immune and metabolic mechanisms, in particular glucido-protein and phosphocalcic metabolism [2]. For therapeutic purposes, corticosteroids are thus commonly used for their anti-inflammatory properties, but also for their cytostatic effects, which explain their effectiveness in inflammatory, immunoallergic, and hematological malignant diseases [3]. Furthermore, corticosteroids have the originality of exerting their actions through essentially genomic effects, by acting on the transcription of DNA into RNA and on the post-transcriptional regulation of messenger RNA. Although less well known, corticosteroids can also have non-genomic effects, especially when used in high doses [4-6]. However, corticosteroids have side effects that should not be underestimated, especially during long-term treatments: first, metabolic disorders, such as sodium retention, hypokalemia, a diabetogenic effect, and an increase in protein catabolism; second, endocrine disorders in the event of an incorrectly administered dose of cortisone, such as suppression of the corticoadrenal axis; and finally, an increased risk of infection and possible digestive disorders (lower than with NSAIDs) [7]. The discovery of new molecules that could be more effective with fewer unwanted side effects is a constant concern of the pharmaceutical industry, which is therefore moving towards new research methods that consist in predicting the properties and activities of molecules even before they are synthesized [8]. The significant development of computer science, as well as of theoretical studies in quantum chemistry, allows researchers to obtain more precise physicochemical and quantum parameters of compounds in a shorter time.
The field is moving towards the simultaneous synthesis of very large numbers of molecules and the testing of their actions on therapeutic targets. This is the main objective of quantitative structure-property relationship (QSPR) studies. These studies are essentially based on the search for similarities between molecules in large databases of existing molecules whose activities or properties are known [9]. The relationships between the structures of molecules and their activities or properties are generally established using molecular modeling methods and statistical methods. The usual techniques are based on the characterization of molecules by a set of descriptors: real numbers measured or calculated from the molecular structures. It is then possible to establish a relationship between these descriptors and the modeled quantity [10]. This study aims to understand the relationship between the structure and the activity of ten chosen glucocorticoids, illustrated in Fig 1, using density functional theory (DFT), in order to clarify the properties responsible for the drugs' efficiency. In addition, the toxicity risks and physicochemical properties of all the compounds were calculated by the methodology developed by Osiris and Molinspiration.
COMPUTATIONAL STUDIES
The molecules under investigation have been analyzed with density functional theory (DFT), employing Becke's three-parameter hybrid exchange functional [11] with the Lee-Yang-Parr correlation functional (B3LYP) [12]. All the quantum chemical calculations in this study were performed using the Gaussian 09 program. Drawing of the optimized geometries and visualization of the HOMO and LUMO were done with the GaussView 5.0.8 program [13]. The chemical reactivity descriptors were calculated using DFT; these are very important physical parameters for understanding the chemical and biological activities of these compounds. The calculated HOMO and LUMO orbital energies can be used to estimate the ionization energy, electron affinity, electronegativity, electronic chemical potential, molecular hardness, molecular softness, and electrophilicity index [14]. Using the Molinspiration cheminformatics tool (https://www.molinspiration.com), all the pharmacokinetic parameters were computed to predict the bioactivity of the compounds, while the Osiris software analyses give information about the relative cytotoxicity of these derivatives, which is reported in comparison to cortisol.
Frontier Molecular Orbital (FMO) Analysis
The frontier molecular orbitals allow one to predict the reactivity of a molecule, whose active sites can be revealed by the distribution of the frontier orbitals [15]. Indeed, the HOMO (highest occupied molecular orbital) donates electrons and behaves as an electron donor (nucleophilic character), while the LUMO (lowest unoccupied molecular orbital) accepts electrons and behaves as an electron acceptor (electrophilic character) [16]. Several chemical reactivity descriptors, such as the chemical potential, global hardness, and electrophilicity, have been calculated to understand various aspects of the pharmacological sciences, including drug design and the possible eco-toxicological characteristics of the drug molecules. The global electrophilicity index is defined as ω = μ²/(2η). It measures the stabilization in energy when the system accepts an additional electronic charge from the environment.
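As a numerical illustration of these definitions, the following minimal Python sketch (an illustration of the standard formulas, not the authors' Gaussian workflow) derives the global reactivity descriptors from the frontier orbital energies via the Koopmans-style relations I ≈ -E(HOMO) and A ≈ -E(LUMO):

# Minimal sketch: global reactivity descriptors from frontier orbital
# energies (in eV), using Koopmans-style approximations.
def reactivity_descriptors(e_homo, e_lumo):
    i = -e_homo                  # ionization potential
    a = -e_lumo                  # electron affinity
    gap = e_lumo - e_homo        # HOMO-LUMO energy gap
    chi = (i + a) / 2.0          # electronegativity
    mu = -chi                    # chemical potential
    eta = (i - a) / 2.0          # global hardness
    sigma = 1.0 / eta            # global softness
    omega = mu ** 2 / (2 * eta)  # electrophilicity index
    return {"I": i, "A": a, "gap": gap, "chi": chi,
            "mu": mu, "eta": eta, "sigma": sigma, "omega": omega}

# With the fluticasone values quoted below (I = 0.2363 eV, A = 0.0599 eV),
# this reproduces the reported HOMO-LUMO gap of 0.1764 eV.
print(reactivity_descriptors(-0.2363, -0.0599))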
Electrophilicity is considered to be a better descriptor of overall chemical reactivity, encompassing both the ability of an electrophile to acquire an additional electronic charge and the resistance of the system to exchange an electronic charge with the environment. It provides information on electron transfer (chemical potential) and stability (hardness). As shown in Table 1, the lowest value of the energy gap, about 0.1764 eV, corresponds to the fluticasone molecule, which indicates that this molecule is the most reactive one compared with the other cortisol derivatives. The same molecule has the largest values of the ionization potential (I) and the electron affinity (A), 0.2363 and 0.0599 eV, respectively. Also, this compound is suggested to be a soft molecule. On the other hand, it can be observed that the commercialized derivatives show more stability and biological reactivity than the original cortisol molecule. As known, molecules with high chemical hardness have little intramolecular charge transfer; in our case, this corresponds to the flunisolide compound. The electronegativity is a measure of the attraction of an atom for electrons in a covalent bond: when two unlike atoms are covalently bonded, the shared electrons will be more strongly attracted to the atom of greater electronegativity. The global electrophilicity index is about 0.1234 eV for the third compound, which ensures a strong energy transformation between the HOMO and LUMO. The HOMO and LUMO orbitals of the best compound (fluticasone) are illustrated in the corresponding figure.
Molecular Electrostatic Potential
The molecular electrostatic potential (MEP) provides information on the molecular regions preferred or avoided by an electrophile or a nucleophile; indeed, any chemical system creates an electrostatic potential around itself [17]. MEP has proven to be a very useful tool for studying the correlation between molecular structure and the physicochemical properties of molecules, including biomolecules and drugs [18]. When a hypothetical positive unit charge is taken as a probe, it is 'volumeless' and is used to feel the attractive or repulsive forces in regions where the electrostatic potential is negative or positive, respectively [19]. The different values of the MEP on the surfaces of the studied compounds appear as different colors [20]. Generally, the negatively charged regions of the molecule are colored in red, while those with positive charge are colored in blue. The green color corresponds to an intermediate potential with zero charge, located between the two extremes (red and dark blue) [15]. The yellow and light blue colors divide the difference between the average color (green) and the extremes (red/dark blue) [21]. The MEP surface maps of the cortisol derivatives (Fig 3) show two regions characterized by the color red (negative electrostatic potential) around the two oxygen atoms, which explains the possibility of an electrophilic attack at these positions; the blue color (positive electrostatic potential) around the three hydrogen atoms shows that these regions are susceptible to nucleophilic attack. Finally, the green color, located between the red and blue regions, corresponds to the electrostatically neutral potential surface. The variation in the electrostatic potential produced by a molecule is largely responsible for the binding of a drug to its active sites (receptor), since the binding site in general should have areas of opposite electrostatic potential.
As can be seen in Fig 3, the sulfur atom in the fluticasone molecule, presented in yellow, has a primary role in the interaction of the drug with the amino acids of the receptor sites [25]. In line with this, note that all the compounds respect the rule of 5, except for ciclesonide, which has two violations. As known, two or more violations of the rule of 5 suggest the likelihood of bioavailability problems [26]. The drug-likeness tabulated for the compounds (Table 2) reflects a complex balance of various molecular properties and structural characteristics that determine whether a molecule is similar to known drugs. These properties, mainly the hydrogen-bonding characteristics, hydrophobicity, electronic distribution, flexibility, molecular size, and the presence of various pharmacophoric features, have an effect on the behavior of the molecule in a biological medium, including the bioavailability, reactivity, toxicity, protein affinity, metabolic stability, transport properties, and many more. The activity of all the standard compounds and drugs has been rigorously analyzed against six well-known criteria of drug activity: GPCR ligand activity, ion channel modulation, kinase inhibition, nuclear receptor ligand activity, protease inhibition, and enzyme inhibition. The results are presented for all the compounds in Table 3.
Osiris Calculations
Nowadays, structure-based design is very common, but because of ADME-Tox liabilities, many potential drugs do not reach the clinic. A very important class of enzymes responsible for many ADMET problems is the cytochromes P450; inhibition of these, or the production of unwanted metabolites, can lead to many adverse drug reactions. Thanks to recent work on drug design combining various pharmacophoric sites on a heterocyclic scaffold, it is now possible to predict activity and/or inhibition with increasing success for two targets (bacteria and HIV) [27]. The toxicity risks (mutagenicity, tumorigenicity, irritation, effects on reproduction) and the physicochemical properties (miLogP, solubility, drug-likeness, and drug score) of compounds 1-10 were calculated by the methodology developed by Osiris and are illustrated in Table 4. The toxicity risk predictor locates fragments within a molecule that indicate a potential toxicity risk; these alerts indicate that the structure drawn may be harmful with respect to the specified risk category. According to the data evaluated in Table 4, five of the ten compounds (budesonide, flunisolide, dexamethasone, methylprednisolone, and fluticasone) have structures predicted to be non-mutagenic, non-irritant, and without effects on reproduction in the mutagenicity assessment, by comparison with the standard drug used (hydrocortisone). An exception is mometasone, which presents a great risk of tumorigenicity and of effects on reproduction, with a slightly mutagenic structure due to its two chlorine atoms. The logP value of a compound, which is the logarithm of its partition coefficient between n-octanol and water, is a well-established measure of the compound's hydrophilicity. Low hydrophilicity, and therefore high logP values, can lead to poor absorption or permeation. For compounds to have a reasonable probability of being well absorbed, their logP value should not be greater than 5.0. On this basis, all compounds 1 to 10 have logP values within the acceptable criteria.
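A minimal sketch of the rule-of-5 screen described above is given below, using the open-source RDKit toolkit rather than the Osiris/Molinspiration implementations; the choice of Crippen logP as a stand-in for miLogP and the example SMILES string are assumptions for illustration only.

# Minimal rule-of-5 screen (illustrative; not the Osiris/Molinspiration code).
from rdkit import Chem
from rdkit.Chem import Crippen, Descriptors, Lipinski

def lipinski_violations(smiles):
    mol = Chem.MolFromSmiles(smiles)
    rules = [
        Descriptors.MolWt(mol) > 500.0,    # molecular weight limit
        Crippen.MolLogP(mol) > 5.0,        # lipophilicity (logP) limit
        Lipinski.NumHDonors(mol) > 5,      # H-bond donor limit
        Lipinski.NumHAcceptors(mol) > 10,  # H-bond acceptor limit
    ]
    return sum(rules)  # two or more violations suggest bioavailability problems

# Placeholder molecule (aspirin); substitute the SMILES of each steroid studied.
print(lipinski_violations("CC(=O)Oc1ccccc1C(=O)O"))  # -> 0 violations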
At the same time, the best compounds, which have shown good results in the screening of the anti-inflammatory steroids, have the highest drug score values (DS = 0.62-0.83) compared to the other compounds in the series.
CONCLUSION
The present work provided additional structure-activity and structure-cytotoxicity information for the series of ten anti-inflammatory steroids (AIS). Using DFT methods, the ground-state structures were calculated at the B3LYP/6-31G(d) level of theory. The relative stabilities, the HOMO-LUMO energy gaps, and the implications of the electronic properties were calculated and discussed; all the compounds are potentially able to cross biological membranes and to have good oral bioavailability. The FMO theory offers good information about the reactivity of these compounds, which is found to be in agreement with the literature on the relationship between drug structure and efficiency and allows us to design new molecules. In order to obtain information about the positive (nucleophilic attack) and negative (electrophilic attack) regions, MEP surface visualizations were performed. Generally, this study aims to illustrate how electronic and geometric characteristics can be useful in identifying the possible bioactivities of organic compounds. Moreover, the bioinformatic Osiris/Molinspiration analyses of the relative cytotoxicity of these derivatives are reported in comparison to cortisol. From the theoretical calculations on the studied compounds, it can be concluded that the molecular structure influences the inhibition efficiency.
Ultrasonic-Assisted Surface Finishing of STAVAX Mold Steel Using Lab-Made Polishing Balls on a 5-Axis CNC Machining Center
The inconvenience of conventional wool ball polishing is that the surface finishing process must be equipped with a slurry container. The main objective of this research is to develop an ultrasonic-assisted surface finishing process for STAVAX mold steel on a 5-axis CNC machining center, using new lab-made rubber polishing balls containing the abrasive aluminum oxide instead of traditional wool ball polishing. In total, five types (type A to type E) of new rubber-matrixed polishing balls, made of a composite of nitrile butadiene rubber (NBR), an abrasive of aluminum oxide, and an additive of silicon dioxide, have been developed. The performance of the composites with different grain sizes (0.05 μm to 3 μm) and concentrations of the aluminum oxide abrasive has been investigated. The effects of multiple polishing passes on the surface roughness improvement for the lab-made polishing balls have also been investigated in this study. A surface roughness of Ra 0.027 μm on average was achieved by using the multiple polishing process E-C-B-A. The volumetric wear of the lab-made polishing balls using ultrasonic vibration-assisted polishing can be improved by about 12.64% (type A) to 65.48% (type E) compared with non-vibration-assisted polishing. The suitable combination of the ultrasonic vibration-assisted polishing parameters was an amplitude of 10 μm, a frequency of 23 kHz, a spindle speed of 5000 rpm, a feed rate of 60 mm/min, a stepover of 20 μm, a penetration depth of 180 μm, and a polishing pass of E-C-B-A, based on the experimental results. The surface roughness improvement on a test carrier with a saddle surface has also been demonstrated using the ultrasonic vibration-assisted polishing with the lab-made polishing balls.
The investigation into cutting performance in the ultrasonic-assisted helical milling of Ti6Al4V alloy by various parameters and cooling strategies has been presented in [3]. The study of the mechanism of burr formation by simulation and experiments on ultrasonic vibration-assisted micro-milling has been presented in [4]. The benefits of green ceramic machining through ultrasonic-assisted turning have been experimentally investigated in [5]. Sustainable cooling strategies to reduce tool wear, power consumption, and surface roughness during the ultrasonic-assisted turning of Ti-6Al-4V have been proposed in [6]. The damage characteristics of ultrasonic vibration-assisted grinding of a C/SiC composite material have been studied in [7]. The effects of processing parameters on the surface quality of wrought Ni-based superalloy by ultrasonic-assisted electrochemical grinding have been investigated in [8]. The chip generation mechanism of Inconel 718 with ultrasonic-assisted drilling by step drill has been studied in [9]. The experimental research on new hole-making methods assisted with asynchronous mixed-frequency vibration in TiBw/TC4 composites has been reported in [10]. The ultrasonic-assisted abrasive micro-deburring of micromachined metallic alloys has been studied in [11]. The burr removal from high-aspect-ratio micro-pillars using ultrasonic-assisted abrasive micro-deburring has been discussed in [12]. The enhanced machinability of Ni-based single crystal superalloy by vibration-assisted diamond cutting has been presented in [13]. The zirconia responses to edge chipping damage induced in conventional and ultrasonic vibration-assisted diamond machining are presented in [14]. Wire-EDM performance has been enhanced with a zinc-coated brass wire electrode and ultrasonic vibration [15]. The research on ultrasonic vibration-assisted electrical discharge machining of SiCp/Al composite has been presented in [16]. A numerical modeling and experimental study on the material removal process using an ultrasonic vibration-assisted abrasive water jet has been reported in [17]. The hybrid simultaneous laser- and ultrasonic-assisted machining of the Ti-6Al-4V alloy has been presented in [18]. The comparative study of the laser and ultrasonic vibration-assisted hybrid turning of micro-SiCp/AA2124 composites has been carried out in [19].
To improve the surface roughness of test objects, some vibration-assisted surface polishing devices have been developed in [20-22]. The experimental study on ultrasonic-assisted electrolyte plasma polishing of SUS304 stainless steel has been presented in [20]. The modeling and analysis of the material removal rate for the ultrasonic vibration-assisted polishing of optical glass BK7 has been proposed in [21]. An ultrasonic-assisted innovative polyurethane tool to polish mold steel has been developed in [22]. A vibration-assisted ball polishing device was developed in [23] to obtain an ultra-precision surface finish of hardened stainless mold steel. The ball burnishing process has also been introduced in [23] to perform the pre-finishing step for ball polishing, improving the surface roughness of the stainless mold steel. Wear reduction of the machining tool can be achieved by using ultrasonic-assisted devices (tools) [6,23-25]. To evaluate the surface quality of a product, some definitions and surface texture parameters, such as the height, spatial, and hybrid parameters, can be found in ISO 21920-2:2021 [26]. The functional importance of the surface texture parameters, including both the amplitude parameters of the surface and the functional parameters, has been presented in [27].
The effect of aluminum powder on the properties of nitrile rubber (NBR) composites and the role of the bonding agent, viz., hexamethylene tetramine-resorcinol, has been investigated in [28]. Polymethyl methacrylate (PMMA) denture base composites were enhanced by various combinations of nitrile butadiene rubber (NBR) and treated ceramic fillers (aluminum oxide, yttria-stabilized zirconia, and silicon dioxide), as reported in [29].
Vibration-assisted ball polishing using an ultrasonic tool with the holder type BT40, which can be integrated into the tool magazine of a machining center, has been implemented in [30], as shown in Figure 1. The processing parameters for the STAVAX mold steel, such as the amplitude, frequency, spindle speed, abrasive diameter, feed rate, depth of penetration, etc., have been investigated in [30]. To facilitate the ultrasonic-assisted ball polishing process without using a slurry container device, and in the interest of green manufacturing, the new polishing balls embedded with abrasive materials are introduced in this study.
Figure 1. Experimental setup of the ultrasonic-assisted polishing tool integrated in a CNC machining center.
The main objective of this study was to present the development of the new lab-made polishing balls, to examine the effect of different numbers of polishing passes on the surface roughness improvement, and to investigate the tool wear of the polishing balls during ultrasonic vibration-assisted ball polishing on a machining center. The properties of the tested material, STAVAX stainless mold steel, which is equivalent to AISI 420 modified [31], the experimental setup of the ultrasonic vibration-assisted ball polishing system on a 5-axis machining center, and the investigation of the used ultrasonic tool are introduced in Section 2.
The development of the new lab-made polishing balls, the suitable combination of the ultrasonic vibration-assisted ball polishing parameters for a plane surface, the effects of different passes on the surface roughness improvement, the volumetric wear of the polishing balls, and the application to the surface finishing of a test carrier with a saddle surface are presented in Section 3. Some discussion and future work are given in Section 4. Finally, some remarkable results of this work are concluded.
Materials
STAVAX is a premium-grade stainless tool steel (AISI 420 modified) [31]. The combination of its properties yields a steel with outstanding production performance; as a result, it is suitable for application to most molds of larger tools. The chemical composition of the STAVAX stainless mold steel is shown in Table 1 [31]. The hardness of the STAVAX used in this study is about HRB 190 (HRC 22) after soft annealing.
Table 1. Chemical composition of STAVAX stainless mold steel (%) [31].
Composition   C      Si    Mn    Cr     V
%             0.38   0.9   0.5   13.6   0.3
Experimental Setups on a 5-Axis Machining Center
To investigate the performance of the ultrasonic vibration-assisted polishing using the lab-made polishing balls, the experimental setup shown in Figure 2 has been developed.
The developed lab-made polishing ball is clamped on the ultrasonic tool with the holder type BT40, which can be mounted on the spindle of the machining center. The 5-axis machining center used in this research was a product of QUASER Co. (Taichung, Taiwan), type UX300. The machining center was equipped with a CNC controller from the HEIDENHAIN Company (Traunreut, Germany), type iTNC 530. A TS740 touch-trigger probe, a product of the HEIDENHAIN Company (Traunreut, Germany), was integrated with the machining center tool magazine to measure the origin coordinates of the workpiece to be fabricated and to execute the in-process measurement. To simulate the sequence of the fine milling, ball burnishing, and ball polishing paths, and to generate the respective NC codes, the Unigraphics NX10 CAD/CAM software (NX10) was used. The Hommelwerke T8000 roughness instrument, a product of JENOPTIK Co. (Jena, Germany), was used to measure the surface roughness of the machined specimens.
Specification and Investigation of the Used Ultrasonic Tool
An ultrasonic tool, fabricated by KLI Technology [32], was adopted in this study. The detailed specification of the ultrasonic tool used is listed in Table 2.
Table 2. Specification of the ultrasonic tool [30].
The LK-H025 triangulation laser probe, a product of KEYENCE Corporation (Osaka, Japan), has been used to measure the amplitude of the ultrasonic tool. According to the test results, the greater the working frequency, the greater the generated amplitudes of
the ultrasonic tool by fixing the driven power of the controller. As a result, the suitable working frequency of 23 kHz has been set. The measured amplitudes of the ultrasonic tool, with the power ranging from 60 W to 300 W and with the fixed working frequency of 23 kHz, are shown in Figure 3. The relationship between the amplitude and the power was almost linear.
Configuration and Fabrication of the Lab-Made Polishing Balls
Considering that the slurry container is no longer to be used, new polishing balls embedded with abrasives were developed in this study. The abrasive aluminum oxide (Al2O3) is suitable for polishing the stainless mold steel, according to the previous study results [23,30]. As a result, the new polishing balls embedded with aluminum oxide abrasive have been developed by taking nitrile butadiene rubber (NBR) as the matrix and mixing it with aluminum oxide abrasive of different grain sizes (0.05 µm to 3 µm) and concentrations, together with additives of silicon dioxide. The powder components for the new polishing balls were nitrile butadiene rubber (NBR) (60, 60, 50, 60, and 50 wt%), aluminum oxide abrasives of various sizes (0.05 µm to 3 µm) (20, 20, 30, 20, and 40 wt%), and silicon dioxide (20 wt%). The concentration and abrasive size of the aluminum oxide are shown in Table 3.
Concerning its good physical, mechanical, and chemical properties, such as abrasion resistance, adhesion to metal, compression set, tear resistance, vibration dampening, and solvent resistance, NBR was selected as the matrix material. Five types of NBR-based blanks for the polishing balls were made after the NBR, abrasives, and some additives were homogenized by a blending machine. The hardness of each blank was tested using a Shore hardness tester. After tensile test specimens were made from the NBR-based blanks, the yield strength and the static coefficient of friction were tested using a universal material testing machine. The measured results for the hardness, yield strength, and static coefficient of friction are summarized in Table 4. Different kinds of polishing balls with a diameter of 12 mm have been fabricated by thermal forming processes, as shown in Figure 5.
Processing Parameters for the Ultrasonic Tool
To perform the surface finishing of the specimens and the test carrier, sequential ball burnishing and ultrasonic vibration-assisted polishing processes using the lab-made polishing balls have been adopted in this research. The adopted suitable burnishing parameters were a burnishing force of 80 N, a feed rate of 300 mm/min, and a stepover of 30 µm. The burnished surface roughness of the fine-milled specimens was 0.11 µm using this suitable combination of ball burnishing parameters. The suitable processing parameters using the ultrasonic tool for the STAVAX mold steel, such as the amplitude, frequency, spindle speed, abrasive diameter, feed rate, depth of penetration, etc., have been determined by Taguchi's experimental method in [30]. An L18 (2^1 × 3^7) orthogonal table, with one factor at two levels and seven factors at three levels, was constructed to obtain the suitable processing parameters for the ultrasonic vibration-assisted ball polishing of STAVAX stainless steel. The configuration table is shown in Table 5. The amplitude, frequency, and spindle speed of the ultrasonic tool, the abrasive diameter, the feed rate, the stepover, the depth of penetration, and the abrasive concentration were selected as the main factors and studied. According to the S/N ratio calculation of the Taguchi L18 matrix experimental results, a suitable combination of the ultrasonic vibration-assisted ball polishing parameters is summarized in Table 6. The amplitude of the ultrasonic tool was 10 µm; the working frequency of the ultrasonic tool was 23 kHz; the spindle speed was 5000 rpm; the diameter of the aluminum oxide (Al2O3) was 0.3 µm; the feed rate was 60 mm/min; the stepover distance was 20 µm; the depth of penetration was 180 µm; and the slurry concentration was 1:10. Some of these parameters are applied below to the experiments on the number of passes of the lab-made polishing balls for surface roughness improvement.
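For reference, a minimal Python sketch of the smaller-the-better S/N ratio conventionally used in Taguchi analyses of surface roughness is shown below; the replicate values are hypothetical, and the sketch illustrates only the formula, not the study's actual computation.

# Minimal sketch: smaller-the-better S/N ratio, S/N = -10 * log10(mean(y^2)),
# as conventionally applied to surface roughness responses.
import math

def sn_smaller_the_better(measurements):
    mean_sq = sum(y * y for y in measurements) / len(measurements)
    return -10.0 * math.log10(mean_sq)

# Hypothetical Ra replicates (in um) for one row of the L18 array:
print(round(sn_smaller_the_better([0.031, 0.028, 0.033]), 2))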
Effects of Different Passes on the Surface Roughness Improvement
Multiple polishing passes are generally needed to improve the surface roughness by reducing the grain size of the abrasives step by step, in order to achieve the final surface finish for a precision mold with a freeform surface in industrial applications. The effects of different passes on the surface roughness improvement for the lab-made polishing balls have been investigated in this study. Two individual passes and four kinds of combined passes, namely Type E, Type D, Type E-A, Type E-C, Type E-C-B, and Type E-C-B-A, were figured out and tested, as shown in Table 7, to sequentially reduce the grain size of the abrasives. According to the experimental results, the fine-milled and burnished surface with a surface roughness of Ra 0.13 µm could be improved to 0.027 µm on average using the combined Type E-C-B-A passes. The measured surface roughness of Ra 0.03 µm (Rz 0.20 µm) of the specimen using the combined E-C-B-A ultrasonic-assisted process parameters for the lab-made polishing balls is shown in Figure 6. As a result, to reduce the polishing time and obtain a good surface roughness of Ra 0.027 µm, the combination of the ultrasonic-assisted process parameters for the lab-made polishing balls is listed in Table 8. To further reduce the polishing time, a surface roughness of Ra 0.04 µm on average was achievable by utilizing the Type E-C-B pass.
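A small sketch of the improvement arithmetic used in this and the following sections is given below (illustrative only); with the values reported above, polishing from the burnished Ra of 0.13 µm down to 0.027 µm corresponds to a reduction of roughly 79%.

# Minimal sketch: percentage improvement of surface roughness.
def roughness_improvement(ra_before, ra_after):
    return 100.0 * (ra_before - ra_after) / ra_before

# Reported values for the combined E-C-B-A passes (in um):
print(round(roughness_improvement(0.13, 0.027), 1))  # -> 79.2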
Volumetric Wear Improvement of the Lab-Made Polishing Balls

Tool wear reduction is one of the merits of ultrasonic vibration-assisted machining, as mentioned in the introduction. The volumetric wear comparison between the ultrasonic vibration-assisted polishing and the no-vibration-assisted polishing of the lab-made polishing balls from Type A to Type E and the conventional wool polishing balls has been carried out in this study. The solid models of the polishing balls used were constructed based on the 3D profile data of the worn polishing balls, measured by a coordinate measuring machine. Creo Parametric 3D software (CREO 2016) has been used to construct the 3D solid models. The constructed solid models of the wool ball and the lab-made polishing balls from Type A to E are shown in Figure 7. The volumetric wear of the ultrasonic vibration-assisted lab-made polishing balls ranges from 0.49% (Type A) to 0.73% (Type E), whereas the volumetric wear of the non-vibration-assisted lab-made polishing balls ranges from 0.56% (Type A) to 2.0% (Type E), as shown in Table 9. The improvement in the volumetric wear of the ultrasonic vibration-assisted polishing of the lab-made polishing balls ranges from 12.64% (Type A) to 65.48% (Type E). The volumetric wear increases with the increase in the grain size and the concentration of the abrasives. A possible reason is that the total bonding area at the surface decreases as the grain size and the concentration of the abrasives increase. The volumetric wear of the lab-made polishing balls is less than that of the conventional wool balls for both the ultrasonic vibration-assisted polishing and the non-vibration-assisted polishing. Figure 8 shows the comparison of the volumetric wear between the lab-made rubber polishing balls and the conventional wool polishing balls with vibration and with no vibration.
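For clarity, the improvement percentages above appear consistent with the assumed definition improvement = (wear without vibration − wear with vibration)/wear without vibration × 100; this reading is ours, as the text does not state the formula explicitly. A minimal check in Python:

```python
# Assumed definition behind the Table 9 improvement figures:
# improvement (%) = (wear_no_vib - wear_vib) / wear_no_vib * 100.
wear = {  # volumetric wear in %, as quoted in the text
    "Type A": (0.49, 0.56),  # (with vibration, without vibration)
    "Type E": (0.73, 2.00),
}
for ball, (vib, no_vib) in wear.items():
    imp = (no_vib - vib) / no_vib * 100.0
    print(f"{ball}: {imp:.1f}% improvement")
# Prints 12.5% (Type A) and 63.5% (Type E); the paper reports 12.64% and
# 65.48%, the small gap plausibly coming from rounding of the quoted wear.
```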
Application to the Surface Finishing of a Test Carrier with Saddle Surface

The determined suitable ball burnishing and ultrasonic vibration-assisted ball polishing parameters for plane surfaces were sequentially applied to the surface finishing of a test carrier with a saddle surface. The Unigraphics NX10 CAD/CAM software (NX10) was used to construct the CAD model of the test carrier, as shown in Figure 9. Four areas on the CAD model, namely fine milling, ball burnishing, vibration-assisted polishing, and non-vibration-assisted polishing, have been identified in Figure 9a. The machining path simulations for fine milling and burnishing, polishing with no ultrasonic vibration, and polishing with ultrasonic vibration are implemented in Figure 9b-d, respectively.

The ball burnishing process has been carried out after the fine-milling process. The ultrasonic vibration-assisted polishing using the lab-made polishing balls has been sequentially applied to the burnished surface of the test carrier. The surface roughness Ra of the surface region on the annealed STAVAX test carrier (HRC = 22) was improved sequentially from about 0.18 µm to 0.03 µm, as shown in Figure 10. The surface textures on the fine-milled, burnished, and polished areas have been observed with a 30× optical microscope. The surface roughness value Ra on the burnished surface was 0.09 µm on average. The surface roughness value Ra on the ultrasonic vibration-assisted ball polished surface was 0.03 µm on average, whereas that on the no-vibration-assisted ball polished surface was 0.04 µm on average. The surface roughness improvement of the 3D test object on the burnished surface was about 50%, and that on the vibration-assisted ball polished surface using the suggested E-C-B-A passes of the lab-made polishing balls was about 83% compared with the fine-milled surface.

Discussion

Although the used ultrasonic tool and the new lab-made polishing balls have been presented, there are still some issues to be discussed and investigated in the future, listed as follows: 1.
Based on the experimental results of the multiple passes of ball polishing on the surface roughness improvement, in general, the more passes used, the better the improvement in the surface roughness. However, the results of the Type E-C-B and Type E-C-B-A passes showed a very small difference (0.016 µm) in the surface roughness. This implied that, at the same abrasive concentration, varying the grain size had no obvious influence on the surface roughness improvement. To reduce the polishing time, a surface roughness of Ra 0.04 µm on average was achieved by utilizing a number of Type E-C-B passes. 2. The volumetric wear of the lab-made polishing balls can be reduced by about 13% to 65% using ultrasonic vibration-assisted polishing. The mechanism for the reduction in the volumetric wear might be that the total sliding path had been reduced due to the intermittent contact between the vibrating polishing ball and the surface of the workpiece. 3. A constant-force polishing process is suggested to be implemented to account for the wear of the polishing ball. 4. The diameter of the lab-made polishing balls was 12 mm. Considering the efficiency of polishing, different diameters of the polishing balls could be designed and fabricated to adapt to the curvature of the workpiece. Cylindrical polishing pads with different diameters could also be utilized for a plane surface or a smooth curved surface with small curvature in the future.

Conclusions

The ultrasonic-assisted surface finishing process of STAVAX mold steel on a 5-axis CNC machining center has been developed in this study by using new lab-made rubber polishing balls containing aluminum oxide abrasive, instead of traditional wool-ball polishing equipped with a slurry container. The main results of this study can be summarized as follows: 1. Five types of NBR-based blanks for the polishing balls, Type A to Type E, have been made after NBR, aluminum oxide (Al2O3) abrasive, and some additives had been homogeneously mixed by a blending machine. Five kinds of polishing balls with a diameter of 12 mm have been fabricated by the thermal forming processes using the lab-made molds, and some mechanical properties have been tested. 2. The effects of multiple polishing passes on the surface roughness improvement for the lab-made polishing balls have been investigated in this study. The multiple polishing process E-C-B-A resulted in the best outcome, with a surface roughness of Ra 0.027 µm on average. 3. According to the experimental results, the suitable combination of the ultrasonic vibration-assisted polishing parameters using the lab-made polishing balls was as follows: amplitude of 10 µm, frequency of 23 kHz, spindle speed of 5000 rpm, feed rate of 60 mm/min, stepover of 20 µm, depth of penetration of 180 µm, and polishing pass of E-C-B-A. 4. The volumetric wear of the lab-made polishing balls is less than that of conventional wool polishing balls. The improvement in the volumetric wear of the ultrasonic vibration-assisted polishing of the lab-made polishing balls ranges from 12.64% (Type A) to 65.48% (Type E), based on the calculation of the constructed CAD models of the used polishing balls. 5. The proposed suitable ball polishing parameters for a plane surface have been applied to the surface finishing of a test carrier with a saddle surface. The surface roughness improvement of the 3D test object on the burnished surface was about 50%, and that on the vibration-assisted ball polished surface using the suggested E-C-B-A passes of the lab-made polishing balls was about 83% compared with the fine-milled surface.
Figure 1. Experimental setup of the ultrasonic-assisted polishing tool integrated in a CNC machining center.
Figure 2. Photo of the ultrasonic vibration-assisted ball-polishing tool mounted on the 5-axis machining center.
Figure 3. Measured results of the vibration amplitudes of the ultrasonic tool under different power.
3.1.1. Design and Fabrication of the Mold for the Lab-Made Polishing Balls: A mold made of Al-6061T6 with the hardness of HRB 54 has been designed and fabricated to generate the polishing balls with a diameter of 12 mm, as shown in Figure 4.
Figure 4. (a) Design of the polishing balls using NX 10; (b) fabricated mold for thermal pressing.
Figure 6. Measured surface roughness of the specimen using the combined E-C-B-A ultrasonic-assisted process parameters for the lab-made polishing balls.
Figure 7. Constructed CAD models of the volumetric wear of different polishing balls from Type A to E explained in Table 9.
Figure 8. Comparison of the volumetric wear of the used polishing balls with vibration and with no vibration: (a) lab-made rubber polishing balls, (b) conventional wool polishing balls.
Figure 9. Constructed CAD model and machining path simulation of the test carrier with a saddle surface: (a) configuration of different areas, (b) fine-milling and burnishing, (c) polishing with no ultrasonic vibration, (d) polishing with ultrasonic vibration.
Figure 10. Measured surface roughness and surface texture (observed with a 30× optical microscope) of the fine-milled, burnished, and polished (with no ultrasonic vibration and with ultrasonic vibration) areas on a carrier with saddle surface.
Table 3. Concentration and abrasive size of aluminum oxide (Al2O3).
Table 4. Measured hardness, yielding strength, and static coefficient of friction of the lab-made NBR-based blanks for the polishing balls.
Table 7. Comparison of different numbers of passes of the lab-made polishing balls on the surface roughness improvement.
Table 8. Combination of the ultrasonic-assisted process parameters for the lab-made polishing balls: A. Amplitude (µm): 10; B. Frequency (kHz): 23; C. Spindle Speed (rpm): 5000; D. Feed Rate (mm/min): 60; E. Stepover (µm): 20; F. Depth of Penetration (µm): 180; G. No. of Passes: E-C-B-A.
2023-08-31T15:08:20.969Z
2023-08-28T00:00:00.000
{ "year": 2023, "sha1": "b30d56a3107c52d8d12ac9d0abfa1ba99c3d86e1", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1944/16/17/5888/pdf?version=1693226319", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "a1436244f8f88bd3bee9a52600a1d8334c01fe47", "s2fieldsofstudy": [ "Materials Science" ], "extfieldsofstudy": [] }
244275294
pes2o/s2orc
v3-fos-license
Delphi consensus on the diagnosis and treatment of patients with short stature in Spain: GROW-SENS study Purpose To identify consensus aspects related to the diagnosis, monitoring, and treatment of short stature in children to promote excellence in clinical practice. Methods Delphi consensus organised in three rounds completed by 36 paediatric endocrinologists. The questionnaire consisted of 26 topics grouped into: (1) diagnosis; (2) monitoring of the small-for-gestational-age (SGA) patient; (3) growth hormone treatment; and (4) treatment adherence. For each topic, different questions or statements were proposed. Results After three rounds, consensus was reached on 16 of the 26 topics. The main agreements were: (1) diagnostic tests considered a priority in Primary Care were complete blood count, biochemistry, thyroid profile, and coeliac disease screening. The genetic test with the greatest diagnostic value was karyotyping. The main criterion for initiating a diagnostic study was a prediction of adult stature 2 standard deviations below the target height; (2) the main criteria for initiating treatment in SGA patients were the previous growth pattern and mean parental stature; (3) the main criterion for response to treatment was a significant increase in growth velocity, and the most important parameter to monitor adverse events was carbohydrate metabolism; (4) the main attitude towards non-responding patients is to check their treatment adherence with recording devices. The most important criterion for choosing the delivery device was its technical characteristics. Conclusions This study shows the different degrees of consensus among paediatric endocrinologists in Spain concerning the diagnosis and treatment of short stature, which enables the identification of research areas to optimise the management of such patients. Supplementary Information The online version contains supplementary material available at 10.1007/s40618-021-01696-0. Introduction Short stature is one of the main reasons for paediatric consultation [1]. Hence, a correct assessment is important to properly direct the diagnostic process. Although there is no international consensus defining normal versus pathological growth, the most frequently used criteria for short stature or abnormal growth are as follows: (a) height below −2 standard deviations (SD) for age and sex on growth curves corresponding to the population studied; (b) normal height (between ±2 SD), but 2 SD below target height; (c) estimated adult stature below −2 SD of target height; and (d) reduced growth velocity, below −1 SD (25th percentile) for age and sex, maintained over a period of 1 year or, in the absence of short stature, a growth velocity less than −2 SD [1-5]. The choice of appropriate growth curves is very important in growth assessment and the use of updated national curves is recommended [6-8]. There is different terminology used to classify short stature, but, in general, all of it can be divided into pathological or known-cause short stature and idiopathic or unknown-cause short stature [1,5]. Idiopathic short stature has been extensively reviewed, and its concept has been debated and discussed in recent years. Differentiation between idiopathic short stature and idiopathic growth hormone (GH) deficiency is often difficult and reflects the poor discriminating power of somatotropic axis tests, since many patients diagnosed with short stature are variants of normal growth [1,5].
Since there is little uniformity in the criteria for the management of short stature, this study was proposed to evaluate clinical practice in the management of short stature in Spain and to show the degree of consensus in the management of this pathology. Study design The GROW-SENS study was conducted using a modified Delphi method of quantitative, semi-quantitative, and qualitative analyses. This is a structured, group communication technique whereby written online surveys can be conducted to collect and unify the opinions of a group of experts about a complex or controversial topic with insufficient scientific evidence. This technique avoids the difficulties and drawbacks inherent to consensus methods based on face-to-face discussion, such as travel, influence biases, and non-confidential interaction [9,10]. The process was carried out in four phases: (1) creation of a scientific committee in charge of the project; (2) kick-off meeting to propose the main topics through a review of the latest literature evidence; (3) three successive rounds of online surveys to gather the opinions of a panel of experts; and (4) analysis of the results and discussion of the conclusions by the scientific committee. Participants The scientific committee coordinated the entire consensus and was made up of six national experts in paediatric endocrinology. The panel of participating experts was selected by the scientific committee and included 56 paediatric endocrinologists from different Spanish provinces. The panellists had a minimum of 10 years' experience as paediatricians, mainly specialising in paediatric endocrinology, and they were all members of the Spanish Paediatric Endocrinology Society (SEEP). The panellists were invited to participate in the study by e-mail, informing them of the nature of the study through a presentation letter. Questionnaire At the kick-off meeting, the scientific committee discussed the main strategies for the diagnosis and treatment of short stature. As a result of this meeting, 26 topics were proposed, grouped into four blocks: (1) diagnosis; (2) monitoring of the small-for-gestational-age (SGA) patient with short stature; (3) GH treatment; and (4) treatment adherence. For each block, several questions or statements were proposed which, depending on the type of response, could be dichotomous, multi-response, qualitative, quantitative, and, often, Likert-scale items. The Likert scale used consisted of 11 points grouped as follows: 0-2, interpreted as not at all important or strongly disagree; 3-4, interpreted as slightly important or disagree; 5-6, interpreted as important or neither agree nor disagree; 7-8, interpreted as very important or agree; and 9-10, interpreted as extremely important or strongly agree. Delphi consensus Consensus was reached in three rounds of consultations held between March 2019 and March 2020. In the first round, the participants answered the online questionnaire with the possibility of adding their opinion with open text. Following analysis of the answers provided, the questions were rephrased in the second round and then again in the third round. The results of the three rounds were tabulated and presented in a descriptive form to be analysed, interpreted, and discussed jointly by the scientific committee to find common ground and provide useful conclusions and recommendations for the clinical practice of paediatric endocrinologists.
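To illustrate how such responses can be processed, here is a small Python sketch assuming one plausible operationalisation: it maps each 0-10 score to the bands defined above and counts "agree"-or-stronger answers (score ≥ 7) as homogeneous, against the ≥70% consensus threshold described in the analysis section that follows. The panel scores shown are hypothetical, since item-level responses are not published.

```python
# Map a 0-10 Likert score to the bands defined in the questionnaire.
def band(score):
    if score <= 2: return "not at all important / strongly disagree"
    if score <= 4: return "slightly important / disagree"
    if score <= 6: return "important / neither agree nor disagree"
    if score <= 8: return "very important / agree"
    return "extremely important / strongly agree"

# Hypothetical panel scores for one statement; here responses scoring
# 7 or more ("agree" or stronger) are counted as homogeneous.
scores = [9, 8, 7, 9, 10, 8, 6, 9, 7, 8]
print(scores[0], "->", band(scores[0]))
agreeing = sum(1 for s in scores if s >= 7)
pct = 100.0 * agreeing / len(scores)
print(f"{pct:.0f}% homogeneous responses -> consensus reached: {pct >= 70}")
```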
Analysis and interpretation of the results The data analysed provided semi-quantitative information (scores of the responses collected through Likert scales) and qualitative information (analysis of the experts' discourse in open-ended questions). A minimum of 70% of homogeneous responses was established to consider that there was consensus among the panel of experts on the answer to a question. Ethical considerations All the recommendations set out in the Helsinki declaration were followed. All participants gave informed consent and no personal information was recorded at any time. To preserve the confidentiality of the opinions, all information was coded. Results Out of the 56 endocrinologists recruited, 43 completed the first round, 37 completed the second round, and 36 completed all three rounds of the study. Out of the 26 items proposed, consensus was reached on 16 of them (61.5%). Tables 1, 2, 3, and 4 show the topics dealt with in each block and the degree of consensus reached on those that exceeded 70%. Block 1: diagnosis of short stature Out of the 13 proposed topics on the diagnosis of short stature, consensus was reached on 8 of them (61.5%). The topic with the highest degree of consensus (100%) was the tests considered to be of priority use in Primary Care. Participants agreed that these tests should be complete blood count, biochemical analyses, thyroid function, and screening for coeliac disease. Another topic with a high degree of consensus was the diagnostic value of genetic testing: 98% of the specialists considered karyotyping to be of important diagnostic value. The SHOX gene sequencing study was considered an additional test by 80% of panellists. Exome sequencing and study of bone dysplasias are never performed or are infrequent according to 89% and 76% of respondents, respectively. 65% agreed that genetic testing for Noonan syndrome is performed very rarely or never. Among the diagnostic criteria for the study of short stature with the highest degree of consensus are the prediction of adult stature 2 SD below the target height (94%) and a growth velocity ≤ −1 SD for more than 1 year (76%). There was consensus on how the calculation of a patient's adult stature prognosis is performed, with the majority of specialists (89%) mentioning that they use bone age to establish it. 72% of the experts use the Spanish Cross-sectional Growth Study 2010 as a reference standard. A high percentage of the specialists (86%) did not agree that GH stimulation tests are a decisive test in the diagnosis of GH deficit, and 72% consider that it is not necessary to perform two tests to diagnose an isolated GH deficit. No consensus was reached concerning the following topics: the most appropriate cut-off point to establish a diagnosis of GH deficit; the need to perform, at this time, a national growth study to establish the reference standards for the Spanish population; use of gonadal steroid priming testing at peripubertal ages; the approach that should be taken in a 13-year-old male patient with a testicular volume of 3 ml and decreased growth velocity; and the clinical conditions that may cause short stature under which a GH deficiency study should be performed. The results obtained from the scores the panellists gave to the statements proposed in block 1 are presented in Table 1. Block 2: small-for-gestational-age patient Regarding the block on monitoring the SGA patient, consensus was reached on two of the three topics proposed (66.7%).
The main criteria considered indispensable for starting GH treatment in these patients were: length and weight at birth (95%), stature at 4 years (94%), previous growth pattern (100%), and mean parental stature (97%). In SGA patients, most specialists (88%) did not consider Silver-Russell syndrome to be an exclusion criterion for initiating GH treatment. There was no consensus concerning the appropriate age for the initiation of GH treatment in the non-recovering SGA patient. The results obtained from the scores the panellists gave to the statements in block 2 are presented in Table 2. Block 3: GH treatment Of the 6 proposed topics on GH treatment, consensus was reached on 3 of them (50%). For monitoring of adverse events resulting from GH treatment, in addition to thyroid function tests, bone age, and IGF-I levels, the participants stated that the most important tests are the assessment of carbohydrate metabolism (100%) and lipid metabolism (78%). The parameter with the highest consensus for assessing response to GH treatment in patients with short stature was a significant increase in growth velocity (73%). Only 22% of participants are free to prescribe any GH, 50% can only choose between two or three options, and for 28%, the administration chooses the treatment. There was no consensus concerning either the age for the initiation of GH treatment in children with Prader-Willi syndrome or the waiting time for an oncological patient to start GH treatment after the start of remission. The results obtained from the scores the panellists gave to the statements in block 3 are presented in Table 3. Block 4: treatment adherence In the block on treatment adherence, out of the four topics proposed, consensus was reached on three. Regarding the criteria for a personalised choice of GH delivery device, the specialists indicated the following as the most important: technical characteristics of the device (97%), drug data sheet (73%), hospital criteria (70%), and user preferences (70%). The resources that specialists considered most important for monitoring adherence to GH treatment were: use of electronic recording devices (97%), health education (95%), nursing support (86%), and new e-health technologies (84%). 94% considered the child's self-esteem to be an element favouring adherence. There was no consensus concerning the prioritisation of resources to monitor treatment adherence. The results obtained from the scores the panellists gave to the statements in block 4 are presented in Table 4. Discussion This is the first European study to use the Delphi method to assess the diagnosis and management of patients with short stature during childhood. The degree of consensus reached in this study was high: for 16 of the 26 topics proposed, it was higher than 70%. Despite the existence of some expert consensus [1,5], many of the topics related to the diagnosis and treatment of short stature are still a matter of debate. The GROW-SENS study was designed with the aim of finding out the opinion of paediatric endocrinologists on these aspects. The use of the Delphi method in relation to short stature and GH is novel in this field. Block 1: diagnosis of short stature The prevalence of pathological short stature in children referred for assessment is around 5% and varies between 1.3% and 19.8%. There is significant variability in establishing when to initiate a short stature study [11-13].
There is no evidence-based consensus that determines the tests that must be performed in a study of a child with short stature [5], and the diagnostic yield of the tests used in children with short stature is very low [14]. Different authors propose conducting a karyotype in girls with short stature with an unexplained cause, shorter than their genetic stature, or when they present two or more dysmorphic characteristics [15-18]. Within the scope of Primary Care, it is more common to request a complete blood count, general biochemical parameters, thyroid function, and screening for coeliac disease. It is estimated that 2-8% of children with non-familial short stature without gastrointestinal symptoms have coeliac disease [19-21], and this is in agreement with the findings of the study performed. There was controversy as to whether it is advisable to request IGF-I serum levels in Primary Care, due to the difficulties of interpretation and the methodological variability involved. Determination of IGF-I should be the first test to be performed in Specialised Care to study the somatotropic axis, since an IGF-I higher than the 50th percentile associated with a growth velocity higher than the 25th percentile makes a GH deficit highly unlikely [19,22,23]. Regarding the request for genetic tests other than karyotyping, study of the SHOX gene was the most requested by the participants, especially in the case of familial short stature with an autosomal dominant inheritance pattern, abnormalities in body proportions, and/or suggestive radiological findings. It should be noted that the assessments expressed by participants concerning each genetic test reflect the frequency with which those tests are requested, which may be influenced not only by clinical judgement, but also by availability. The benefit of genetic studies to diagnose short stature increases when they are associated with systematic phenotyping of patients [24]. Regarding the criteria for diagnosing short stature, participants clarified that an isolated height measurement is not sufficient and needs to be verified by successive calculations. It was also clarified that although an extreme growth velocity and short stature (below −2.5 SD) with poor prognosis for adult stature are two important criteria for the diagnosis of pathological short stature, if the parents' values are very different, this indicator loses sensitivity. Most of the specialists interviewed considered that the most appropriate growth curve is that in the Spanish Cross-sectional Study 2010 [25,26]. In spite of this, some of the panellists said that other Spanish curves are also occasionally used [6,27]. Although it was not considered a priority to carry out a national study to assess the reference stature of the Spanish population, the general recommendation is to do so every 10-15 years, especially due to the increase in the different ethnic groups that live in Spain [28]. In fact, there are few studies on the immigrant population in Spain. Many of the participants did not agree that GH stimulation was decisive for diagnosis. However, they considered this test to be of greater value in ruling out a deficit than in confirming it, i.e., it has greater specificity than sensitivity. There are studies in which a level below 3 ng/ml may correlate with severe deficits [1]. However, despite new laboratory techniques, no exact cut-off point has been established.
Lack of uniformity in the criteria to establish the usefulness and indication of stimulation tests, as well as the cut-off level necessary to diagnose a GH deficit, has been pointed out by various authors internationally [29-31]. Stimulation tests remain valid for diagnosis of GH deficit, but one must consider that the measured GH concentration varies depending on the type of test used and the immunoassay employed, so they must be interpreted together with the patient's auxology. In any case, in general, the study of GH deficiency should be considered in all pathologies with an alteration in growth velocity that cannot be explained for any other reason. Gonadal steroid priming to establish the diagnosis in prepubertal ages remains a controversial topic. There is little evidence in studies that support its use, since the group of patients analysed is small and the patients included have different characteristics in terms of chronological age, auxology, and pubertal development. The absence of large, homogeneous series in terms of the patients' characteristics would explain why just 30-40% of paediatric endocrinologists use it before studying GH secretion. In any case, it must be performed and interpreted on an individual basis [1,32,33]. With regard to reassessment of GH deficit in adolescence when adult stature is reached, 74% of the experts considered that reassessment is not necessary in cases of isolated GH deficit with levels of IGF-I within normality. This finding contrasts with a Delphi study performed by the Italian Endocrinology Society in which there is consensus regarding the need for reassessment of these patients [34]. In summary, diagnosis of a GH deficit in childhood and adolescence should be based on a combination of factors that include auxology, and radiological and laboratory evaluation, together with clinical experience [35]. Block 2: small-for-gestational-age patient Birth length or birth weight was considered the most indispensable criterion for initiating GH treatment in SGA children. However, it was clarified that cases in which this is not known need to be individualised. Furthermore, although it was agreed that parental mean stature could be a criterion for initiating treatment, it was argued that it should not be a limiting factor for initiating treatment in SGA children who meet all other inclusion criteria, and that Silver-Russell syndrome should not be an exclusion criterion for initiating GH treatment [36,37]. The lack of consensus on the appropriate age to initiate GH treatment may be due to the lack of evidence of superiority of response for early initiation of treatment. Even so, it has been reported that 90% of SGA children can experience a catch-up growth spurt that mostly takes place in the first 12 months of life and is completed by 2 years of age, reaching a stature greater than −2 SD [38,39]. Block 3: GH treatment To monitor adverse events arising from GH treatment, in addition to thyroid function tests, bone age, and IGF-I levels, the need to check carbohydrate metabolism was identified, with lipid metabolism being less important [33]. However, other tests to be performed on an individual basis were also mentioned, such as orthopaedic examinations and fundus evaluation due to clinically suggested suspicion of endocranial hypertension, etc. The response to GH treatment can be assessed with the increase in stature and growth velocity, expressed in SD or cm/year [40].
The response in the first year of treatment is considered very important due to its high predictive value for gain in adult stature [41]. It was proposed to assess patients' quality of life as an important response parameter, although its quantification may be heterogeneous because standardised quality-of-life questionnaires for children with short stature are rarely used in clinical practice. 67% of respondents would initiate GH treatment in children with Prader-Willi syndrome before the age of 2 years, since it has been shown that GH is safe at that age [42]. It is essential to have contact with expert centres with greater experience that can assist in the use of GH in these patients. There is a lack of consensus on the waiting time before administering GH in a child with a deficit in oncological remission. Block 4: treatment adherence Treatment adherence conditions the response and efficacy of the treatment and is a common problem in all chronic diseases that require maintained treatment [43]. Poor adherence implies a loss of stature, but very few studies have quantified this, and it continues to be discussed whether the degree of adherence differs according to the indication [44]. In the case of a non-responder patient, the first attitude chosen is to check treatment adherence. To monitor adherence, the most valued resource was the electronic record, although it was made clear that this should not replace health education; it is evident, however, that the same degree of objectivity cannot be achieved through any other method. The indications for use in the drug's technical data sheet should, in any case, be taken into account. Given the importance of the child's self-esteem for good treatment adherence, this should be measured and recorded as a parameter through scales validated for this purpose, although such scales are rarely used in clinical practice. Limitations of the study Meetings of experts are, per se, a limitation in terms of the level of evidence. However, out of the more than 200 members of the SEEP, the members with more than 10 years' experience in using GH were selected. The participation of members from a country with a predominantly publicly financed health system is another of the study's limitations. Conclusions and future prospects With regard to the implications for everyday clinical practice, and in spite of the heterogeneity of some topics, this study will provide novice paediatric endocrinologists with a guide for critical decisions on the use of GH in childhood. In terms of future research, it is necessary to verify our results with a larger sample with the participation of experts from other countries, which is expected to reinforce the recommendations of this group of experts. The variability in the results reflects aspects in which there is no clear consensus yet, which offers new hypotheses to assess. Acknowledgements The authors would like to thank all of the participants in the panel of experts (ANNEX), Méderic Ediciones for help with the design and coordination, and Fernando Sánchez Barbero, PhD, for support in writing this manuscript. And special thanks to Fundación Merck Salud for funding and sponsoring the project and developing the paper. Author contributions RCC has participated in designing the study, developing questions and hypotheses, analysing the information, and drafting the discussion and conclusions.
CFR has participated in designing the study, developing questions and hypotheses, analysing the information, and drafting the discussion and conclusions. IGC has participated in designing the study, developing questions and hypotheses, analysing the information, and drafting the discussion and conclusions. FMM has participated in designing the study, developing questions and hypotheses, analysing the information, and drafting the discussion and conclusions. JPLS has participated in designing the study, developing questions and hypotheses, analysing the information, and drafting the discussion and conclusions. JILA has participated in designing the study, developing questions and hypotheses, analysing the information, and drafting the discussion and conclusions. Funding This study was funded by Fundación Merck Salud. Availability of data and materials Not applicable. Code availability Not applicable. Declarations Conflict of interest RCC has no conflicts of interest to declare. She received funds as a consultant from Fundación Merck Salud for this paper. CFR has no conflicts of interest to declare. She received funds as a consultant from Fundación Merck Salud for this paper. IGC has no conflicts of interest to declare. She received funds as a consultant from Fundación Merck Salud for this paper. FMM has no conflicts of interest to declare. She received funds as a consultant from Fundación Merck Salud for this paper. JPLS has no conflicts of interest to declare. He received funds as a consultant from Fundación Merck Salud for this paper. JILA has no conflicts of interest to declare. He received funds as a consultant from Fundación Merck Salud for this paper. Ethics approval Not applicable. Consent to participate The panellists were invited to participate in the study by e-mail, informing them of the nature of the study through a presentation letter. Those who responded to the e-mail gave their consent to participate in the study. Consent for publication All authors of this work provided consent. The panel of participating experts were informed and gave their consent for publication of the results of the consensus. Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
2021-11-18T14:49:29.038Z
2021-11-17T00:00:00.000
{ "year": 2021, "sha1": "4aa79d55a7f38820bd4c4dd5ae9054be52e03f3d", "oa_license": "CCBY", "oa_url": "https://link.springer.com/content/pdf/10.1007/s40618-021-01696-0.pdf", "oa_status": "HYBRID", "pdf_src": "Springer", "pdf_hash": "4aa79d55a7f38820bd4c4dd5ae9054be52e03f3d", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
257765992
pes2o/s2orc
v3-fos-license
Evaluation of Long-term Performance of the GORE SYNECOR Intraperitoneal Biomaterial in the Treatment of Inguinal Hernias Background: The objective of this study was to analyze device safety and clinical outcomes of inguinal hernia repair with the GORE SYNECOR Intraperitoneal Biomaterial device, a hybrid composite mesh. Methods: This retrospective case review analyzed device/procedure endpoints beyond 1 year in patients treated for inguinal hernia repair with the device. Three objectives were evaluated: procedural endpoint—incidence through 30 days of surgical site infection, surgical site occurrence (SSO), ileus, readmission, reoperation, and death; device endpoint—serious device incidence of mesh erosion, infection, excision/removal, exposure, migration, shrinkage, device-related bowel obstruction and fistula, and hernia recurrence through 12 months; and patient-reported outcomes of the bulge, physical symptoms, and pain. Results: A total of 157 patients (mean age: 67 ± 13 y) with 201 inguinal hernias (mean size: 5.1 ± 5 cm²) were included. Laparoscopic approach and bridging repair were performed in 99.4% of patients. All device location was preperitoneal. No procedure-related adverse events within 30 days were reported. No surgical site infection or SSO events or device-related hernia recurrence occurred through 12 months. Procedure-related serious adverse events occurred in 6 patients: 5 recurrent inguinal hernias (at 1 and 2 y) and 1 scrotal hematoma (at 6 mo). Through 24 months, no SSO events requiring procedural intervention occurred. Through 50 months, 6 (2.98%) patients had confirmed hernia recurrence and 4 (1.99%) patients had hernia reoperation. The patient-reported outcome for pain was reported by 7.9% (10/126) of patients who completed the questionnaire. Conclusions: In this study, inguinal hernia repair with the hybrid composite mesh was successful in most patients and the rate of recurrence was low, further supporting the long-term safety and device performance. Mesh-based repair techniques for patients with inguinal and/or femoral hernia are strongly recommended, particularly for the prevention of recurrence. The synthetic permanent mesh had been the convention but came with limitations. 1 A meta-analysis of patients with inguinal hernia undergoing repair with fully absorbable mesh, permanent mesh, or biological mesh found that patients with an absorbable mesh seem to have less chronic pain compared with repair with permanent meshes, without increased risk of recurrence. 2 The recurrence rate for all hernias following repair with an absorbable mesh was 3% in the first year and 8% after 2 to 5 years. Data on recurrence rates in this meta-analysis were largely found to be collected as part of clinical examinations. Among controlled and noncontrolled trials, the recurrence rate with absorbable mesh was 5.6% (median: 13-mo follow-up; range: 6 to 60 mo). Among research clinical trials only, the recurrence rate was 2% for absorbable versus 1.3% for permanent mesh (median: 18-mo follow-up; range: 12 to 36 mo). Of the recurrences among permanent mesh, 73% had a primary medial hernia. 2 The GORE SYNECOR Intraperitoneal Biomaterial (hereafter, device; W. L. Gore & Associates Inc.) is a hybrid composite mesh made of a bioabsorbable 3-dimensional web composed of poly (glycolide:trimethylene carbonate) copolymer (PGA:TMC), a permanent macroporous dense polytetrafluoroethylene monofilament knit, and a nonporous bioabsorbable PGA:TMC film.
Six to 7 months after implant, the bioabsorbable components are absorbed and only the polytetrafluoroethylene monofilament knit remains. This device is intended for intraperitoneal (IP) placement using an underlay or intraperitoneal onlay technique. The use of nonabsorbable sutures to secure the device is recommended. When the device is placed adjacent to fascia, tissue ingrowth on the 3-dimensional web scaffold surface is expected. Further, when the device is placed adjacent to viscera, there should be minimal tissue attachment on the nonporous IP barrier film. 3 The long-term data on use of this device in inguinal hernia are limited. The aim of this retrospective review was to evaluate the performance and safety of this IP device for the repair of inguinal hernias after 1 year. In this case series, the implanting physician chose to implant the device in the preperitoneal plane with the nonporous film positioned adjacent to the peritoneum to promote peritoneum growth to protect the device from bowel adhesions, which is in line with the device instructions for use. 3 Study Design The overall multicenter, retrospective study included adult patients with ventral, incisional, parastomal, and inguinal hernias who were treated with the IP device between April 2016 and May 2019. Data were collected across 4 hospitals in the United States. A record search was conducted to identify cases of patients at least 18 years of age who underwent hernia repair with the use of the IP device and were treated at least 365 days before site initiation. Any implantation <1 year before site initiation was excluded. Cases were further categorized by hernia type, wound class by the Centers for Disease Control and Prevention (CDC) Surgical Wound Classification, and comorbidities. A wound classification of >1 was excluded. This report details only patients with the inguinal hernia type. The hernia repair technique used was determined by the implanting physician. Additional exclusion criteria included use in the repair of cardiovascular defects, device placement with the antiadhesion barrier adjacent to fascial or subcutaneous tissue, and inability to achieve sufficient device overlap of the hernia defect. Further, evidence of systemic infection, known wound-healing disorder, cirrhosis, current dialysis, immunosuppression, or surgical site infection (SSI) at the time of device placement was also exclusionary. Demographics, medical history, physical examination, adverse event, and device use data were collected retrospectively from existing medical records of eligible patients enrolled in the study with an inguinal hernia. A patient-reported outcome (PRO) questionnaire regarding hernia recurrence was administered, and events of surgical or autopsy procedure explant of the device were captured by the site. 4 At 1, 6, 12, 24, and 36 months postoperatively, the questionnaire was administered and adverse events and device use data were evaluated. The study was conducted in accordance with the US Federal regulations and with institutional review board approval. This study and medical writing assistance were funded by W. L. Gore & Associates Inc. Endpoints Three primary endpoints and secondary endpoints were evaluated. Detailed study endpoint definitions are provided in Supplemental Online Resource 1 (Supplemental Digital Content 1, http://links.lww.com/SLE/A369).
The first objective was the procedural endpoint, defined as incidence through 30 days of SSI, surgical site occurrence (SSO), ileus, readmission, reoperation, and death. All procedural endpoints were captured as device-related or procedure-related, with the exception of death, which was captured for device-related events only. Severity was captured as serious or nonserious for the SSI, SSO, and ileus events, and as serious only for readmission, reoperation, and death. The second objective was the device endpoint, which was defined as serious device incidence of mesh erosion, infection, excision/removal, exposure, migration, shrinkage, device-related bowel obstruction and fistula, and hernia recurrence through 12 months. All device endpoints were captured for events that were device-related or serious in severity. The third objective was PRO of the bulge, physical symptoms, and pain. The PRO used in this study was the Ventral Hernia Recurrence Inventory (VHRI). This PRO is an adapted patient questionnaire that contains 3 "yes/no" questions regarding symptoms commonly associated with hernia recurrence. 5 The instrument was shown to be a valid tool in the assessment of inguinal hernias. 6 The responses on the PRO were not considered adverse events. The secondary endpoints included SSO, SSI, bowel perforation, unexplained or chronic pain, seroma, fistula, or adhesion formation. Only device-related and serious severity events were captured for bowel perforation, unexplained or chronic pain, seroma, fistula, or adhesion formation. Device-related, procedure-related, serious, and nonserious events of SSO and SSI were also captured. Statistical Methods The 95% 2-sided CI was calculated using the exact binomial test for each estimate for the procedural, device, and PRO endpoints and included the all-enrolled patient population. Continuous data were summarized using descriptive statistics. Missing data were not included. Patient Disposition A total of 157 patients were enrolled with 201 inguinal hernias; 44 patients had 2 inguinal repairs on the same day as the primary procedure and 4 patients had a ventral hernia repair. All but 1 patient were enrolled from 1 investigative site. Patients with inguinal hernia had a mean age of 67 ± 13 years and a mean body mass index of 28 ± 5 kg/m². The population was largely male (90.5%), White (87.9%), and all were non-Hispanic. At baseline, associated comorbidities included hypertension (45.2%), hypercholesterolemia (38.9%), and cancer (25.5%). Few patients were current smokers (11.5%). Previous abdominal aortic surgery was reported in 4 (2.6%) patients. Most patients had an indirect hernia type; none were incarcerated or strangulated. No patient had a stoma present. The median hernia size was 4.0 cm². All patients were of the "clean" CDC wound classification. Nearly all patients had a laparoscopic repair (99.4%). A total of 158 devices were implanted; 1 device per hernia for all patients, with the exception of 1 patient who required 2 devices, 1 for a concomitant ventral hernia repair. The 10 × 15 cm device was used for a majority of the repairs (120/158; 75.9%). The 15 × 20 cm device was used in 35/158 repairs (22.2%) and the 12 × 12 cm or 20 × 25 cm devices were used in the remaining 3 repairs. All device placements were preperitoneal with the barrier film placed adjacent to the peritoneum. Reoperation of a recurrent inguinal hernia involving a previously placed hernia mesh occurred in 16 patients (10.2%).
In 99.5% of procedures, the device was secured with absorbable fixation and used for bridging. The mean follow-up was 31 months (range: 15 to 50 mo). Table 1 details the demographics, medical history, and hernia characteristics at baseline. Primary and Secondary Outcomes Device endpoint events, including hernia recurrence, were to be site-reported as device-related and serious. Through 12 months, none of the 157 patients had serious device-related hernia recurrence, or device incidences of mesh erosion, infection, excision/removal, exposure, migration, shrinkage, device-related bowel obstruction, or fistula. Further, through 12 months, there were no reports of secondary (defined as device-related and serious) events of seroma, fistula, SSI, SSO, adhesion formation, bowel perforation, or unexplained/chronic pain for the 155 patients eligible for the secondary analysis. Procedural endpoint events were to be site-reported as device-related or procedure-related. Through 30 days, none of the 157 patients had SSI, SSO, ileus, readmission, reoperation, or death. Safety and PROs A total of 13/157 (8.3%) patients had serious adverse events (including death) throughout and beyond 3 years of study. A total of 7 deaths were reported but were not related to the device or the procedure. The deaths occurred at the 6-month (n = 1), 1-year (n = 1), 2-year (n = 2), 3-year (n = 2), and beyond 3-year (n = 1) time points. Causes of death were pneumonia-related (n = 4), a car accident (n = 1), heart surgery complications (n = 1), and unknown (n = 1). The remaining 6 serious adverse events were procedure-related and included 5 recurrent inguinal hernias (at 1 and 2 y) and 1 scrotal hematoma (at 6 mo). Hernia Recurrence Through 50 months, 6 patients (2.98% of the 201 hernias) had a hernia recurrence that was study procedure-related; none were device-related. Half of the recurrences were within the first 12 months and the remainder were through 50 months. Of the 6 hernia recurrences, 5 were serious and were at the location of the device. Reoperation was required for 4 of the hernia recurrences. Of the 6 hernia recurrences, 3 were within 12 months, of which 2 were serious. The patient with recurrence on day 93 was minimally symptomatic and the event was not serious. Another patient developed an inguinal hernia on day 331 that required subsequent repair. The patient with recurrence on day 335 had a small hernia sac and large cord lipoma noted through the internal inguinal ring. The remaining 3 recurrences from 12 through 50 months were all serious. The recurrence on day 377 had an open repair in which cord lipoma was noted but no hernia sac was observed. One patient experienced 2 recurrences starting on day 471. The initial hernia was repaired with a 10 × 15 cm device. The first recurrence for this patient was repaired with a 15 × 20 cm device and the second recurrence required an open repair. Last, a patient with recurrence on day 555 developed urothelial carcinoma, which required a cystectomy and prostatectomy at an outside institution, and any manipulation of the mesh during that procedure was not documented. As a result of the interval development of malignancy, additional hernia repair has not been performed. PROs The VHRI questionnaire was completed by 126 (80.3%) patients. The mean time from the procedure to completion of the questionnaire was 31 months (range: 15 to 50 mo).
For the question "Do you feel or see a bulge at your treatment site?," 6/127 (4.7%) of patients answered "yes" at any time during the 3 years. When queried "do you feel physical symptoms or pain at the site," 10/126 (7.9%) of patients responded "yes." DISCUSSION This study reviewed 157 patients with 201 inguinal hernias treated with the device for up to 3 years and adds to the body of evidence for the use of the IP hybrid biomaterial for inguinal hernia repair. In this series of patients, there were low rates of recurrence or postprocedure complications. Mesh implant selection is an important component of hernia repair, particularly when considering recurrence and postoperative complications, which can have negative consequences for cost and quality of life. 7-9 While the mesh device used is intended for IP placement, the implanting physician chose to implant in the preperitoneal plane with the nonporous film positioned adjacent to the peritoneum to promote peritoneum growth to protect the device from bowel adhesions. This placement aligns with the device instructions for use. 3 Published rates of clinically reported or imaging-assessed recurrence following inguinal hernia repair with any mesh type range from 0.8% to 14%. 1,2,10-12 In the current study, no serious, device-related hernia recurrence through 12 months occurred. Through 50 months of follow-up, the rate of any hernia recurrence was 3.0%, all were procedure-related, and half occurred in the first 12 months. Half of those undergoing subsequent repair were identified to have a cord lipoma causing the recurrent bulge and not representing a failure of the prior repair. Groin pain is a commonly reported outcome of inguinal hernia repair and important to consider in terms of quality of life. 13 In this retrospective study, groin pain captured as an adverse event was reported (at 2 y) in only 1 patient and was not serious. The VHRI measure was used to query "do you feel physical symptoms or pain at the site," to which 7.9% of patients responded "yes." These lower rates of pain may, in part, be associated with the use of bridging for nearly all procedures. Placement of the mesh without primary closure of the defect is thought to decrease the chances of nerve entrapment and postoperative pain. 14 There are options in mesh material, and advantages and limitations to each. In this retrospective case series, all but 1 case were performed at a single site. This series evaluated only cases where the SYNECOR device was utilized for inguinal hernia repair. The site reported a preference for this device because of the material and construct of the device and because it was considered flexible and easy to handle. The 10 × 15 cm device was used for 75.9% of repairs. However, for larger hernias, 22.2% were repaired using the 15 × 20 cm device cut to 15 × 15 cm in size. While mesh materials can be cut down, the availability of a larger standard size for larger repairs that could be cut down to the appropriate size was appealing, as it allowed the device to be tailored to the patient rather than be limited to just pre-sized device materials. LIMITATIONS The study results should be considered in light of limitations. There is confidence in the data collected, as the data are dependent on standard documentation for procedures as well as thorough chart review. Retrospective data collection is well represented in hernia mesh literature that attempts to describe long-term outcomes, particularly with recurrence and mesh explantation.
Though this study included 4 clinical sites, all but 1 patient came from a single site. The clinical investigator was located in an area with limited options for hernia repair; as such, patients tended to return for follow-up. The presentation of data as summary statistics was necessary because the low number of recurrences and adverse events precluded statistical analysis. Though the sample size and number of reported events are low, the data provide insight into outcomes of patients with inguinal hernia and a mesh-involved repair beyond 3 years. The use of the PRO VHRI survey may be considered a limitation in the evaluation of inguinal hernia. The PRO was developed with ventral hernias in mind, but the concept of remotely querying subjects about readily observable symptoms to guide the intensity of follow-up is a process still being refined within the inguinal hernia patient population. The instrument was shown to be a valid tool in the assessment of inguinal hernias.6

CONCLUSIONS

In this analysis, a majority of patients who underwent inguinal hernia repair with the hybrid composite IP device had successful outcomes. These data showed low inguinal hernia recurrence and add to the body of evidence that this device has acceptable safety outcomes and device performance beyond 1 year.
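As a quick arithmetic check, the proportions quoted above can be recomputed directly from their stated counts. This is an illustrative recomputation in Python, not part of the original analysis; the label strings are ours.

# Recompute the proportions reported above from counts stated in the text.
rates = {
    "serious adverse events (patients)": (13, 157),
    "any hernia recurrence (hernias)": (6, 201),
    "VHRI questionnaire completion (patients)": (126, 157),
    "bulge at treatment site: yes": (6, 127),
    "physical symptoms or pain: yes": (10, 126),
}
for label, (numerator, denominator) in rates.items():
    print(f"{label}: {numerator}/{denominator} = {100 * numerator / denominator:.1f}%")
# Printed values match the text: 8.3%, 3.0%, 80.3%, 4.7%, 7.9%.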
Challenges in Management of Primary Hypoparathyroidism Associated with Autoimmune Polyglandular Syndrome Type 1

We report a case of autoimmune polyglandular syndrome type 1 (APS1) complicated by severe vascular insufficiency due to diffuse vascular calcification. APS1 is characterised clinically by multiple autoimmune conditions and development of at least two components of the triad of mucocutaneous candidiasis, hypoparathyroidism, and autoimmune adrenal insufficiency. We highlight the problems in current serum calcium monitoring methods and suggest that fluctuations in serum calcium concentrations due to difficulties treating hypoparathyroidism may have contributed to the vascular calcification seen in this case.

Introduction

Autoimmune polyglandular syndrome type 1 (APS1) is a rare condition with autosomal recessive inheritance. It is characterised clinically by multiple autoimmune conditions and development of at least two components of the triad of mucocutaneous candidiasis, hypoparathyroidism, and autoimmune adrenal insufficiency. The following case highlights some of the challenges and complications encountered in managing hypoparathyroidism in this setting.

Case Presentation

Our index case (II.4) was the youngest of four siblings from a family living in the North West region of Northern Ireland, born to nonconsanguineous parents (Figure 1). Three of the sibship (all female), who had presented with a variable range of clinical manifestations of APS1 (Table 1), were subsequently shown to be homozygous for a 13 base pair (bp) deletion in exon 8 (c.964del13) of the autoimmune regulator gene (AIRE-1). The other sibling (II.2) was clinically unaffected and has not undergone carrier genetic testing for APS1. Cases II.1 and II.3 are undergoing regular medical follow-up, with their clinical features summarised in Table 1.

Index Case

Our index case (II.4) was diagnosed with APS1 in childhood, presenting with mucocutaneous candidiasis at age 5. Hypoparathyroidism was diagnosed at age 8 following an admission due to a seizure associated with hypocalcaemia. At age 10, autoimmune adrenal insufficiency was confirmed, and treatment commenced with hydrocortisone and fludrocortisone. Serum potassium concentration remained within the reference range. Type 1A diabetes mellitus was diagnosed at age 18. Glycaemic control was suboptimal, with a number of admissions due to diabetic ketoacidosis and an HbA1c never below 8%. She also developed proliferative retinopathy and underwent vitrectomy following retinal haemorrhage. She developed multiple features of APS1, which are summarised in Table 1. Hypoparathyroidism was treated with oral alfacalcidol titrated according to serum corrected calcium concentrations. Figure 2 illustrates variations in serum corrected calcium concentrations over time. Urinary calcium excretion was measured intermittently, with values ranging from 2.19 to 4.80 mmol/24 h. In the last year of her life, estimated glomerular filtration rate was between 30 and 45 mL/min and serum phosphate between 1.35 and 1.55 mmol/L. She was not treated with a phosphate binder. Her subsequent progress was complicated by recurrent admissions with generalised tonic-clonic seizures and hypocalcaemia. In 2003, she presented with acute ischaemia of the distal tip of her left 5th finger, and in 2004, she was admitted for observation following a collapse episode associated with QTc prolongation on ECG, which was successfully treated with intravenous calcium gluconate with normalisation of the QTc interval.
Her progress was complicated by renal calculi, diffuse nephrocalcinosis, and chronic renal failure. In 2006, she complained of intermittent claudication at a distance of 20 yards. Dorsalis pedis and posterior tibial pulses were diminished bilaterally, with ankle-brachial pressure indexes >1.0 consistent with vessel calcification. Diffuse large vessel calcification was evident on a plain radiograph of the right leg (Figure 3). She subsequently developed ulceration around the left great toe, progressing to bilateral lower limb critical ischaemia and rest pain. Magnetic resonance (MR) angiography of the lower limb vessels showed diffuse vascular calcification, but no focal stenotic lesions were identified. In July 2006, due to ongoing ischaemia, her left great toe became necrotic with proximal spread. A conservative treatment plan was agreed, as she was too unwell to proceed to surgery. Pain relief and palliation were achieved using opiates, given subcutaneously by syringe driver and intrathecally. In October 2006, she died, aged 26 years, as a result of fulminant sepsis secondary to infected gangrene of the left foot.

Discussion

APS1 is a rare disorder in most populations, with an estimated incidence of 1 in 25,000 in Finland. It is more common in females and is inherited in an autosomal recessive manner. Mutations in the AIRE gene, which encodes a transcription factor, cause the syndrome. The AIRE gene is located on chromosome 21 [1]. Over 50 mutations have been reported worldwide, but there are common mutations specific to different APS1 patient groups. The 13 base-pair deletion in exon 8 (964del13) found in our case was present in 70% of alleles in a sample of British APS1 patients [2]. The majority of the AIRE-1 mutations are predicted to cause a truncation of the protein, which is consistent with a loss of AIRE-1 function leading to APS1 [3]. The exact role of AIRE-1 in regulation of immune responses is unknown. APS1 shows a high degree of interfamilial and intrafamilial phenotypic variability; genotype-phenotype correlation is not possible because of this extensive intrafamilial variability.

Vascular calcification results when the normal balance between factors promoting and inhibiting vascular calcification is disturbed. Four distinct but overlapping forms are described: atherosclerotic calcification, medial artery calcification, cardiac valve calcification, and calciphylaxis [4]. It is suggested that cells may be induced to differentiate into osteoblast-like cells, which then secrete and deposit extracellular osteoid matrix. Candidate cells may include stem cells, vascular smooth muscle cells, and pericytes. Proposed triggers for this change include bone morphogenetic protein, oxidative stress, hyperphosphataemia, vitamin D, and parathyroid hormone. Proposed inhibitors include pyrophosphate, osteopontin, osteoprotegerin, and fetuin. Vascular calcification, particularly medial artery and atherosclerotic calcification, is common in the presence of diabetes mellitus and chronic renal failure [5]. Arterial calcification in the presence of APS1 has previously been reported [6].

The present case highlights some of the challenges in the treatment of hypoparathyroidism in APS1. The goal of management of hypoparathyroidism is to alleviate symptoms of hypocalcaemia and avoid complications of hypercalcaemia by maintaining serum calcium concentrations in the low normal range [4].
APS1 presents specific challenges in achieving eucalcaemia because of its other manifestations, including intestinal malabsorption, coeliac disease, pancreatic exocrine failure, and intestinal lymphangiectasia. Variable PTH reserve and secretion may also lead to calcium excursions and ectopic calcification. Careful monitoring of serum and urinary calcium levels is required in all patients [7]. In our case, serum calcium values were usually in the target low-normal range during follow-up (Figure 2). Nevertheless, she still developed diffuse vascular calcification, illustrating the limitations of current therapy, which is nonphysiological. Intermittent monitoring of serum calcium levels may have been insufficiently sensitive to detect serum calcium excursions, as this method provides a point estimate rather than an integrated measure of current calcium balance. We recognise that the concomitant presence of type 1A diabetes mellitus of 8 years' duration and chronic renal failure probably contributed to her early peripheral vascular disease; however, the severity of her presentation at a young age and her rapidly progressive clinical course suggest that fluxes in calcium homeostasis probably played an important role in her progressive deterioration.

In conclusion, patients with APS1 are recognised to be potentially at risk of premature death due to adrenal crisis, hypocalcaemia, or severe sepsis. Diffuse ectopic calcification and vascular insufficiency are unexpected but have previously been reported [6]. The fatal clinical course in our index case (II.4) illustrates the complexity of polyglandular disease, some of the challenges in managing hypoparathyroidism in patients with type 1 diabetes mellitus in APS1, and the importance of monitoring these patients with appropriate and prompt intervention. The intrafamilial and interfamilial phenotypic variability further complicates management and has implications for the affected siblings of our index case, in particular II.3. This family highlights the paucity of genotype-phenotype correlation in APS1, given the degree of intrafamilial variability observed at this stage.
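The point-estimate limitation discussed above can be made concrete with a toy simulation: a synthetic calcium trace with brief excursions is sampled the way a clinic would sample it. All numbers below are invented for illustration only and are not patient data.

# Toy model: hourly "true" serum calcium with brief excursions, sampled fortnightly.
import numpy as np

rng = np.random.default_rng(0)
hours = np.arange(24 * 90)                                  # 90 days of hourly values
true_ca = 2.20 + 0.03 * np.sin(2 * np.pi * hours / 24)      # mmol/L, low-normal baseline

# Six 12-hour hypo-/hypercalcaemic excursions at random times (e.g. missed doses).
for start in rng.choice(len(hours) - 12, size=6, replace=False):
    true_ca[start:start + 12] += rng.choice([-0.45, 0.45])

clinic = true_ca[::24 * 14]                                 # one blood draw every two weeks

def out_of_range(x):
    return (x < 2.10) | (x > 2.60)

# The sparse clinic samples typically miss most or all excursion hours.
print("clinic draws out of range:", int(out_of_range(clinic).sum()), "of", clinic.size)
print("hours truly out of range :", int(out_of_range(true_ca).sum()))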
Caveolin-1 Functions as a Novel Cdc42 Guanine Nucleotide Dissociation Inhibitor in Pancreatic β-Cells

The cycling of the small Rho family GTPase Cdc42 is required for insulin granule exocytosis, although the regulatory proteins involved in Cdc42 cycling in pancreatic β-cells are unknown. Here we demonstrate that the caveolar protein caveolin-1 (Cav-1) is a Cdc42-binding protein in β-cells. Cav-1 associated with Cdc42-VAMP2-bound granules present near the plasma membrane under basal conditions. However, stimulation with glucose induced the dissociation of Cav-1 from Cdc42-VAMP2 complexes, coordinate with the timing of Cdc42 activation. Analyses of the Cav-1 scaffolding domain revealed a motif conserved in guanine nucleotide dissociation inhibitors (GDIs), which suggested a novel role for Cav-1 as a Cdc42 GDI in β-cells. The novel role was further supported by: 1) in vitro binding analyses that demonstrated a direct interaction between Cav-1 and Cdc42; 2) GST-Cdc42 interaction assays showing preferential Cav-1 binding to GDP-Cdc42 over that of GTP-Cdc42; 3) Cav-1 depletion studies resulting in an inappropriate 40% induction of activated Cdc42 in the absence of stimuli and also a 40% increase in basal insulin release from both MIN6 cells and islets. Expression of wild-type Cav-1 in Cav-1-depleted cells restored basal level secretion to normal, whereas expression of a scaffolding domain mutant of Cav-1 failed to normalize secretion. Taken together, these data suggest that Cav-1 functions as a Cdc42 GDI in β-cells, maintaining Cdc42 in an inactive state and regulating basal secretion in the absence of stimuli. Through its interaction with the Cdc42-VAMP2-bound insulin granule complex, Cav-1 may contribute to the specific targeting of granules to "active sites" of exocytosis organized by caveolae.

Regulated insulin granule exocytosis is elicited by the fusion of a primed and readily releasable pool of plasma membrane-localized granules and through the mobilization and trafficking of intracellularly localized insulin granules (1,2). Granule fusion is initiated through the entry of glucose into the pancreatic β-cells via the GLUT2 glucose transporter, leading to elevation of the intracellular ATP/ADP ratio, which in turn closes the ATP-sensitive K+ (KATP) channels (3,4). This triggers the opening of the voltage-dependent calcium channels and an increase in intracellular cytoplasmic calcium concentration (5), which culminates in fusion of insulin secretory granules from the multiple intracellular pools (6,7). Granules fuse with the plasma membrane through the pairing of their vesicular soluble NSF attachment protein receptor (v-SNARE) VAMP2 with the plasma membrane target membrane SNARE (t-SNARE) proteins Syntaxin 1A and SNAP-25 (8-12). Although this provides a mechanism for fusion of granules/vesicles with the plasma membrane, the upstream events required for targeting the granules/vesicles specifically to "active fusion sites" on the plasma membrane where their cognate t-SNAREs reside remain unclear. In addition, the mechanism that restricts fusion of granules near the plasma membrane in the absence of stimulus is not known.

The small Rho family GTPase Cdc42 has been demonstrated to co-localize with VAMP2-bound insulin secretory granules in pancreatic β-cells (13,14) and may function in targeting insulin-secretory granules to Syntaxin 1A fusion sites at the plasma membrane. Cdc42 is found bound directly to the NH2 terminus of VAMP2 under basal conditions.
Stimulation by glucose triggers the activation of Cdc42, and activated Cdc42 bound to VAMP2 localizes insulin granules at the plasma membrane. Cdc42 binds indirectly to Syntaxin 1A (15), and the interaction is bridged by VAMP2 to form a heteromeric complex (16). The interactions among Cdc42 and the SNARE proteins are functionally important for SNARE-mediated insulin exocytosis (16), as is the cycling of Cdc42 between its GDP- and GTP-bound conformations. However, although it is clear that glucose activates Cdc42 at a step proximal to or at KATP channel closure (17), the precise regulatory proteins involved in this Cdc42 cycling pathway are unknown. Putative regulatory proteins to investigate would probably fall into three categories: 1) guanine nucleotide exchange factors (GEFs), which catalyze nucleotide exchange and mediate activation; 2) GTPase-activating proteins (GAPs), which stimulate GTP hydrolysis, leading to inactivation; and 3) guanine nucleotide dissociation inhibitors (GDIs), which prevent interaction with the plasma membrane, exchange factors, and downstream effector proteins (for reviews, see Refs. 18 and 19).

Cdc42 has been reported to localize to caveolar domains at the plasma membrane (20). Similarly, Syntaxin 1A and SNAP-25 have been found "clustered" in highly active fusion centers by caveolae in neurosecretory PC12 cells (21,22). Caveolae are microdomains that are enriched in cholesterol and sphingolipids (23), located in the plasma membrane as well as in membranous vesicles residing close to the plasma membrane (24). These plasmalemmal organelles have been implicated in a wide variety of cellular functions, including vesicle transport and cell signaling (25). Furthermore, caveolar clustering of SNARE proteins into "fusion centers" has been suggested to increase the efficiency of neurotransmitter secretion (21). Recently, studies have shown electron micrographs of islet β-cell membranes revealing caveolar structures that contain the protein caveolin-1 (Cav-1), suggesting that caveolae may also function in insulin exocytosis (26,27). However, there is also a significant proportion of Syntaxin 1A proteins outside of these caveolar domains (21), suggesting that the preferential targeting of neurotransmitter-containing vesicles to caveolar-clustered "fusion centers" probably requires additional cues.

Cav-1 appears to have potential as one of the "additional cues" to preferentially direct granules to Syntaxin 1A fusion sites within the caveolar clusters. A key protein of caveolae and plasmalemmal organelles, Cav-1 is a transmembrane protein present on caveolae that forms a hairpin-like structure with both the NH2 and COOH termini facing the cytoplasm (28). Cav-1 directly binds cholesterol (29) and forms homo- and heterooligomers (28,30,31). Cav-1 contains an oligomerization domain, juxtaposed to a scaffolding domain, and it is the latter of these domains that participates in signal transduction events (32,33). Importantly, Cav-1 has been reported to associate with Cdc42 in other cell types, although the function of this interaction has not been evaluated. In this report, we present evidence to support a novel role for Cav-1 as a Cdc42 GDI and suggest that the Cav-1-Cdc42 association may provide a link between caveolar clustering of SNARE proteins in the plasma membrane and efficient targeting of VAMP2-bound insulin granules to plasma membrane Syntaxin 1A-based fusion centers clustered by caveolae.
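The division of labor among the three regulator classes enumerated above can be made concrete with a minimal state model. This is a didactic sketch of the generic Rho-family GTPase cycle, not the authors' experimental system; the class and method names are ours.

# Toy state model of the GTPase cycle: a GEF loads GTP, a GAP triggers hydrolysis
# back to GDP, and a GDI sequesters the GDP form away from both.
from dataclasses import dataclass

@dataclass
class GTPase:
    nucleotide: str = "GDP"      # "GDP" (inactive) or "GTP" (active)
    gdi_bound: bool = True       # a GDI holds the GDP form by default

    def gef(self):
        """Exchange factor: activates, but only if the GTPase is free of its GDI."""
        if not self.gdi_bound:
            self.nucleotide = "GTP"

    def gap(self):
        """GTPase-activating protein: stimulates hydrolysis, inactivating."""
        if self.nucleotide == "GTP":
            self.nucleotide = "GDP"

    def release_gdi(self):
        """E.g. a stimulus-dependent dissociation of the GDI."""
        self.gdi_bound = False

cdc42 = GTPase()
cdc42.gef()                      # no effect: the GDI is still bound
assert cdc42.nucleotide == "GDP"
cdc42.release_gdi()              # a stimulus removes the GDI...
cdc42.gef()                      # ...and exchange can now occur
assert cdc42.nucleotide == "GTP"
cdc42.gap()                      # hydrolysis returns the GTPase to the inactive state
assert cdc42.nucleotide == "GDP"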
In support of a functional role for Cav-1 as a Cdc42 GDI, we show that 1) Cav-1 interacted preferentially with GDP-bound Cdc42 and that the interaction was direct; 2) Cav-1 bound to Cdc42 via a conserved GDI motif located within the Cav-1 scaffolding domain; and 3) Cav-1 was required to maintain low levels of secretion and inactivation of Cdc42 in the absence of stimuli. Taken together, these data suggest a mechanism whereby Cav-1 binds Cdc42-GDP under basal conditions and glucose stimulates the dissociation of the complex via GTP-loading of Cdc42 to facilitate regulated granule targeting to active fusion sites, ensuring that the appropriate interactions occur in an efficient spatial and temporal manner.

EXPERIMENTAL PROCEDURES

Materials-Radioimmunoassay grade bovine serum albumin and D-glucose were obtained from Sigma. The rabbit polyclonal anti-caveolin-1, mouse monoclonal anti-Myc (9E10), and actin antibodies were purchased from Santa Cruz Biotechnology, Inc. (Santa Cruz, CA). Mouse monoclonal anti-VAMP2 antibody and rabbit polyclonal anti-VAMP2 were purchased from Synaptic Systems (Gottingen, Germany) and Chemicon (Temecula, CA), respectively. The Syntaxin antibody was obtained from Upstate Biotechnology, Inc. (Lake Placid, NY). Monoclonal anti-Cdc42, caveolin-1, and SNAP-25 antibodies were purchased from BD Biosciences. Rabbit anti-GST antibody was obtained from Affinity BioReagents (Golden, CO). The MIN6 cells and rabbit polyclonal phogrin antibody were a gift from Dr. John Hutton (University of Colorado Health Sciences Center). The n-octyl glucoside and GGTI-2147 were purchased from Research Products International Corp. (Mt. Prospect, IL) and Calbiochem, respectively. Recombinant Cdc42-His proteins were purchased from Cytoskeleton Inc. (Denver, CO). The ECL kit and Hyperfilm-MP were obtained from Amersham Biosciences. Tfx-50 lipofection reagent was purchased from Promega (Madison, WI). Vectashield was obtained from Vector Laboratories (Burlingame, CA). Human C-peptide and rat insulin RIA kits were purchased from Linco Research (St. Charles, MO).

Plasmids-The pCIS2-Cav-1 wild-type and pCIS2-Cav-1 SD (F92A/V94A) mutant constructs have been described previously (34) and were a gift from Dr. Michael Quon (National Institutes of Health). The pGEX-Cdc42 construct was a gift from Dr. Lawrence Quilliam (Indiana University School of Medicine). The pSilencer-Cav-1 construct was generated by insertion of annealed complementary double-stranded oligonucleotides encoding 19 nt, GCCCAACAACAAGGCCATG, of canine caveolin-1, followed by a loop region (TTCAAGAG) and then the antisense of the 19 nt. The pSilencer-control construct was generated in an identical manner, using the following 19-nt sequence, which fails to match any known mammalian protein using BLAST (NCBI): GCGCGCTTTGTAGGATTCG, as described (35). Oligonucleotides were engineered to encode ApaI and EcoRI sites at the 5′- and 3′-ends for insertion into the pSilencer1.0 vector (Ambion, Inc., Austin, TX). Nucleotide sequences encoding Cav-1 full-length (FL) amino acids and hydrophilic domain amino acids (residues 1-101) were amplified by PCR using oligonucleotides designed to contain EcoRI and XhoI restriction sites at the 5′- and 3′-ends, respectively. PCR products were subcloned into the EcoRI and XhoI sites of the pGEX4T-1 vector. The siCav-1-Ad adenoviral shuttle vector was generated by insertion of the Cav-1 siRNA sequence (5′-GCCCAACAACAAGGCCATG) into the 5′ AflII site and the 3′ SpeI site of the pMighty vector (Viraquest, North Liberty, IA).
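The hairpin layout just described for pSilencer-Cav-1 (sense 19-mer, TTCAAGAG loop, then the antisense of the 19-mer) can be assembled programmatically. A small sketch follows; the ApaI/EcoRI overhangs are omitted for brevity, and the helper name is ours, not from the paper.

# Assemble the top strand of the shRNA hairpin insert described above.
SENSE_19NT = "GCCCAACAACAAGGCCATG"   # canine caveolin-1 target, as given in the text
LOOP = "TTCAAGAG"

def reverse_complement(seq: str) -> str:
    complement = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(complement[base] for base in reversed(seq))

hairpin_top_strand = SENSE_19NT + LOOP + reverse_complement(SENSE_19NT)
print(hairpin_top_strand)
# GCCCAACAACAAGGCCATG TTCAAGAG CATGGCCTTGTTGTTGGGC (spaces added here for clarity)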
The construct was linearized by restriction digestion with NdeI for recombination and virus-packaged with EGFP to enable visualization of infection efficiency. The siCon-Ad shuttle vector was generated by insertion of the Control siRNA sequence (5Ј-GCGCGCTTTGTAGGATTCG) into the AflII site and the 3Ј SpeI site of the pMighty vector and adenovirus made as described above. Cesium chloride-purified adenoviral particles were used at an MOI of 100, and the efficiency of transduction was gauged visually by EGFP fluorescence. Cell Culture and Transient Transfection-CHO-K1 cells were purchased from the American Type Culture Collection (Manassas, VA) and cultured in Ham's F-12 medium supplemented with 10% fetal bovine serum, 100 units/ml penicillin, 100 g/ml streptomycin, and 292 g/ml L-glutamine. At 80 -90% confluence, cells were electroporated with 40 g of DNA as described previously (36). Under these conditions, ϳ70 -80% of cells were transfected. After a 48-h incubation in media cells were harvested in Nonidet P-40 lysis buffer (25 mM Hepes, pH 7.4, 1% Nonidet P-40, 10% glycerol, 50 M sodium fluoride, 10 mM sodium pyrophosphate, 137 mM sodium chloride, 1 mM sodium vanadate, 1 mM phenylmethylsulfonyl fluoride, 10 g/ml aprotinin, 1 g/ml pepstatin, and 5 g/ml leupeptin), and lysates were cleared by microcentrifugation for 10 min at 4°C for subsequent use in co-immunoprecipitation experiments. MIN6 cells were cultured in Dulbecco's modified Eagle's medium (with 25 mM glucose) supplemented with 15% fetal bovine serum, 100 units/ml penicillin, 100 g/ml streptomycin, 292 g/ml L-glutamine, and 50 M ␤-mercaptoethanol, as described (17,37,38). MIN6 cells plated in 10-cm tissue culture dishes at 40 -60% confluence were electroporated with 300 g of plasmid DNA per cuvette (one 10-cm dish/cuvette) to obtain ϳ50% transfection efficiency using a procedure previously described (39). After 48 h of incubation, cells were washed twice with and incubated for 2 h in 1 ml of modified Krebs-Ringer bicarbonate buffer (MKRBB) (17,37) and stimulated with glucose (20 mM) or KCl (50 mM). For studies with the geranylgeranylation inhibitor, GGTI-2147, MIN6 cells were cultured overnight in medium supplemented with vehicle (Me 2 SO) or 20 M GGTI-2147 followed by incubation in MKRBB with vehicle or GGTI-2147 for 2 h. Insulin secreted into the MKRBB was quantitated by radioimmunoassay. Cells were subsequently lysed in Nonidet P-40 lysis buffer to generate cleared cell detergent homogenates for quantitation of insulin content by insulin RIA and for co-immunoprecipitation assays. For measurement of human C-peptide release, MIN6 cells were transiently co-transfected with the human proinsulin expression vector (pCB6/INS), a kind gift from Dr. Chris Newgard (Duke University) using Tfx-50 with 2.5 g of DNA/construct/ 35-mm dish of cells. 48 h following transfection, cells were incubated in glucose-free MKRBB for 2 h, and MKRBB was collected for quantitation of human C-peptide released by radioimmunoassay. Recombinant Proteins and Interaction Assays-GST-Cdc42 fusion protein was expressed in Escherichia coli and purified by glutathione-Sepharose affinity chromatography as described (40). GST-Cdc42 linked to glutathione-Sepharose beads was loaded with GTP␥S or GDP as previously described (16,41). 
Briefly, 10 g of GST-Cdc42 protein linked to Sepharose beads was incubated in buffer (0.1 M Tris, pH 7.4, 1 mM EDTA, 2 mM dithiothreitol, 0.2 M NaCl) at a final concentration of 0.1 mM GTP␥S or GDP for 10 min at 30°C and combined with 50 mM MgCl. GTP␥S-or GDP-loaded GST-Cdc42 Sepharose was incubated in PBS, pH 7.4 (supplemented with 2.5 mM MgCl 2 ) with cleared detergent lysates prepared from CHO-K1 cells (325 g of protein/reaction) or MIN6 cells (1 mg of protein/reaction) for 2 h at 4°C. Following three washes with PBS supplemented with 2.5 mM MgCl 2 , proteins were eluted from the Sepharose beads and subjected to electrophoresis on 12% SDS-PAGE followed by transfer to polyvinylidene difluoride (PVDF) membrane for immunoblotting. Co-immunoprecipitation and Immunoblotting-MIN6 cell lysates (3 mg) cleared in a lysis buffer containing 0.25% Triton X-100 and 60 mM n-octyl glucoside, a nonionic detergent for solubilizing membrane-bound proteins, were combined with 3 g of rabbit anti-Cdc42, mouse anti-Cav-1, or mouse anti-VAMP2 antibody for 2 h at 4°C, followed by a second incubation with protein G Plus-agarose for 2 h. The resultant immunoprecipitates were subjected to electrophoresis on 12% SDS-PAGE followed by transfer to PVDF membranes for immunoblotting. Primary antibodies were used at 1:250 -1000 dilutions, and secondary antibodies conjugated to horseradish peroxidase diluted at 1:5000 for visualization by ECL. Subcellular Fractionation-Subcellular fractions were isolated as previously described (42). Briefly, MIN6 cells at 70 -80% confluence were washed with cold PBS and harvested into 1 ml of homogenization buffer (20 mM Tris-HCl, pH 7.4, 0.5 mM EDTA, 0.5 mM EGTA, 250 mM sucrose, and 1 mM dithiothreitol containing the following protease inhibitors: leupeptin (10 g/ml), aprotinin (4 g/ml), pepstatin (2 g/ml), and phenylmethylsulfonyl fluoride (100 M). Cells were disrupted by 10 strokes through a 27-gauge needle, and homogenates were centrifuged at 900 ϫ g for 10 min. Postnuclear supernatants were centrifuged at 5500 ϫ g for 15 min, and the subsequent supernatant was centrifuged at 25,000 ϫ g for 20 min to obtain the secretory granule fraction in the pellet. The supernatant was further centrifuged at 100,000 ϫ g for 1 h to obtain the cytosolic fraction. All steps were performed at 4°C. Highly purified plasma membrane fractions were obtained using a protocol by Hubbard et al. (43). Briefly, the postnuclear pellet from the initial 900 ϫ g centrifugation was mixed with 1 ml of Buffer A (0.25 M sucrose, 1 mM MgCl 2 , and 10 mM Tris-HCl, pH 7.4) and 2 volumes of Buffer B (2 M sucrose, 1 mM MgCl 2 , and 10 mM Tris-HCl, pH 7.4). The mixture was overlaid with Buffer A and centrifuged at 113,000 ϫ g for 1 h to obtain an interface containing the plasma membrane. The interface was collected and diluted to 2 ml with homogenization buffer for centrifugation at 3000 ϫ g for 10 min, and the resulting pellet was collected as the plasma membrane fraction. Fractions were assayed for soluble protein content as described (44). Immunofluorescence and Confocal Microscopy-MIN6 cells at 40% confluence plated onto glass coverslips were incubated in MKRBB for 2 h, followed by stimulation with 20 mM glucose, and then fixed and permeabilized in 4% paraformaldehyde and 0.1% Triton X-100 for 10 min at 4°C. Fixed cells were blocked in 1% bovine serum albumin and 5% donkey serum for 1 h at room temperature, followed by incubation with primary antibody (1:100) for 1 h. 
Cells were then washed with PBS and incubated with Texas Red secondary antibody (1:100) for 1 h. Cells were washed again in PBS and overlaid with Vectashield mounting medium, and coverslips were mounted onto slides for confocal fluorescence microscopy using a Zeiss 510 confocal microscope. Images presented within a figure were captured using identical settings unless otherwise specified. Cdc42 Activation Assay and Immunoblotting-A glutathione S-transferase (GST) fusion protein, corresponding to the p21binding domain of p21-activated kinase (Pak1) was used to specifically detect and interact with the GTP form of Cdc42 in MIN6 cell plasma membrane fractions using the EZ-Detect Cdc42 activation kit from Pierce. Briefly, plasma membrane fractions were prepared from cells incubated in MRKBB buffer for 2 h at 37°C and stimulated with glucose for 3 min. Freshly made fractions (100 g) were combined with 20 g of PAK1 p21-binding domain-agarose for 1 h at 4°C. After three washes with lysis buffer, proteins were eluted from the agarose beads and subjected to electrophoresis on 12% SDS-PAGE, followed by transfer to PVDF membrane. Membranes were immunoblotted with mouse anti-Cdc42 or rabbit anti-GST antibodies, and proteins were visualized by ECL. Adenoviral Transduction of Isolated Mouse Islets-Pancreatic mouse islets were isolated as previously described (35), as modified from Lacy and Kostianovsky (45). Briefly, pancreata from 8 -10-week-old male C57B16J mice were digested with collagenase and purified using a Ficoll gradient. After isolation, islets were immediately transduced at an MOI of 100 with either siCon-Ad or siCav-1-Ad CsCl-purified particles for 1 h at 37°C. Islets were washed twice and incubated for 48 h in RPMI 1640 at 37°C and 5% CO 2 . Transduction efficiency was determined to be greater than 95% of cells in all experiments as gauged by EGFP fluorescence. EGPF fluorescent islets were hand-picked for static culture insulin secretion analysis. Static Culture-Fresh islets from wild-type mice were handpicked into groups of 10, preincubated in KRBH (10 mM Hepes (pH 7.4), 134 mM NaCl, 5 mM NaCO 3 , 4.8 mM KCl, 1 mM CaCl 2 , 1.2 mM MgSO 4 , 1.2 mM KHPO 4 ) containing 2.8 mM glucose and 0.1% bovine serum albumin for 4 h. Media were collected to measure insulin secretion, and islets were solubilized in Nonidet P-40 lysis buffer as described above to determine cellular insulin content by RIA. Statistical Analysis-All data are expressed as mean Ϯ S.E. Data were evaluated for statistical significance using Student's t test. Cav-1 Interacts with Cdc42 and SNARE Proteins- The presence of Cav-1 protein has been reported in multiple ␤-cell lines such as HIT-T15 and INS-1 (27). To first verify that Cav-1 was also present in the MIN6 ␤-cell line, cells were lysed in a buffer containing 0.25% Triton X-100 and 60 mM n-octyl glucoside, a nonionic detergent for solubilizing membrane-bound proteins (32), and lysates were subsequently used for Cav-1 immunoprecipitation. As seen in Fig. 1A, Cav-1 protein was immunoprecipitated by anti-Cav-1 antibody, demonstrating its presence in MIN6 cells. Moreover, VAMP2 and Cdc42 were co-immunoprecipitated with Cav-1, suggesting association among these proteins. IgG control antibody failed to immunoprecipitate any of the three proteins. Reciprocal immununoprecipitation with anti-VAMP2 and anti-Cdc42 antibodies co-precipitated Cav-1, confirming the finding that Cav-1 associated with Cdc42 and VAMP2 (Fig. 1B). 
This association was specific for the Cav-1 isoform of caveolin, since Cav-2 protein was undetectable in MIN6 cell lysate and was not co-precipitated by Cdc42 or VAMP2 antibodies (data not shown).

VAMP2 and Cdc42 localize together at the plasma membrane and on insulin secretory granules in β-cells (14,16,46). The plasma membrane compartment contains several pools of granules, named according to their proximity to the plasma membrane (i.e., immediate releasable, readily releasable, or intracellular storage). To investigate the pool of granules to which Cav-1 was localized, we performed subcellular fractionation experiments. As described previously, MIN6 cells can be fractionated by differential centrifugation into plasma membrane (PM), cytosolic, and insulin granule storage (Gran) fractions (16,42). The fractionation procedure was validated in two ways: 1) demonstration of highest insulin content in the insulin granule fraction, with less in the PM and little if any in the cytosolic fraction (data not shown; see Refs. 16, 47, and 48); 2) presence of the marker proteins Syntaxin 1A in the PM, VAMP2 and phogrin in the Gran, and Cdc42 in all three fractions (Fig. 2A). Although Cdc42 is a soluble protein, it also localizes to membrane fractions via associations with other membrane-bound proteins (16) and post-translational modifications (19). Among these fractions, Cav-1 was found localized to the plasma membrane and insulin granule fractions, consistent with other reports localizing Cav-1 to both the plasma membrane and to plasmalemmal vesicles and/or granules (26-28).

FIGURE 1. MIN6 cell lysates were prepared from cells incubated in MKRBB for 2 h and harvested in lysis buffer containing Triton X-100 and n-octyl glucoside. A, cleared cell lysates (2 mg of protein) were immunoprecipitated either with rabbit anti-Cav-1 or rabbit IgG for 2 h at 4°C. Immunoprecipitates (IP) were resolved on 12% SDS-PAGE, and proteins were transferred to PVDF for immunoblotting (IB) with mouse anti-Cav-1, mouse anti-VAMP2, and mouse anti-Cdc42. B, cleared cell lysates (2 mg of protein) were immunoprecipitated either with mouse anti-VAMP2 or mouse anti-Cdc42 for 2 h at 4°C. Immunoprecipitates were resolved on 12% SDS-PAGE, and proteins were transferred to PVDF for immunoblotting with mouse anti-Cav-1, mouse anti-VAMP2, and mouse anti-Cdc42.

We next questioned which pool of granules Cav-1 was associated with and whether its association with the Cdc42-VAMP2-bound granules occurred in a glucose-sensitive manner. PM or Gran fractions prepared from MIN6 cells left unstimulated or stimulated with glucose for 3 min were used for co-immunoprecipitation with anti-VAMP2 (Fig. 2B). To specifically assess the ability of Cav-1 to associate with Cdc42 localized to granules and not free Cdc42, anti-VAMP2 antibody was used for co-immunoprecipitation. VAMP2 co-immunoprecipitated both Cdc42 and Cav-1 from both PM and Gran fractions under basal and glucose-stimulated conditions. However, co-precipitation of Cav-1 was significantly reduced in the PM fraction from cells stimulated with glucose for 3 min (Fig. 2B, lanes 1 and 2), the time that corresponds to a significantly increased level of activation of Cdc42 (17). In contrast, there were no detectable changes in the amount of co-precipitated Cav-1 from Gran fractions in response to glucose stimulation (Fig. 2B, lanes 3 and 4). Quantitation by optical density scanning revealed the decrease in Cav-1 co-precipitation with VAMP2 from the PM fraction to be 60 ± 6% (p < 0.05) (Fig. 2C).
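A minimal sketch of this style of quantitation, assuming densitometry values normalized to the unstimulated condition within each experiment and an unpaired Student's t test (as stated in the Fig. 2 legend); the optical density values below are invented for illustration and are not the study's data.

# Hypothetical densitometry for the Cav-1 band, three independent experiments.
import numpy as np
from scipy import stats

basal = np.array([1.00, 1.00, 1.00])        # each experiment normalized to its own basal = 1
stimulated = np.array([0.30, 0.50, 0.40])   # glucose, 3 min (made-up values)

decrease = 1 - stimulated
print(f"decrease: {100 * decrease.mean():.0f} +/- {100 * stats.sem(decrease):.0f}%")

t_stat, p_value = stats.ttest_ind(basal, stimulated)   # unpaired Student's t test
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")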
This decrease failed to occur in cells stimulated with KCl, indicating that the dissociation of Cav-1 from the VAMP2-Cdc42 complex required glucose. The response was also determined to be specific to D-glucose, since L-glucose failed to elicit dissociation of Cav-1 from Cdc42 and thus was not a default response to osmotic changes (data not shown). These data indicated that Cav-1 associated with Cdc42-VAMP2-bound granules in a pool near the plasma membrane and in a glucose-sensitive manner.

Cav-1 Associates with GDP-bound Cdc42-Cav-1 dissociation from the Cdc42-VAMP2 granules may have resulted from glucose-induced modifications to Cav-1, Cdc42, or VAMP2. Since we have previously shown that glucose induces the activation of Cdc42 within 3 min (17), we first questioned whether Cav-1 might preferentially interact with GDP-bound Cdc42. GST-Cdc42 protein was purified from E. coli and complexed to Sepharose beads loaded with either GDP or GTPγS, a nonhydrolyzable analogue of GTP, and the beads were immediately incubated with cleared MIN6 lysates prepared from cells stimulated with glucose for 3 min (Fig. 3A). Precipitation of GST-Cdc42-GDP beads resulted in markedly increased amounts of Cav-1 compared with GST-Cdc42-GTPγS beads (Fig. 3A, lanes 2 and 3). In addition, GST alone failed to associate with Cav-1, indicating that the interaction between Cdc42-GDP and Cav-1 was specific (Fig. 3A, lane 4). In multiple experiments, the GST-Cdc42-GDP also bound Cav-1 from lysates prepared from unstimulated MIN6 cells (data not shown), although the band intensity of Cav-1 was consistently stronger in reactions using the glucose-stimulated lysates. One explanation for this may be that the glucose-stimulated dissociation of endogenous Cav-1-Cdc42 complexes freed Cav-1, increasing the amount of Cav-1 available for interaction with the exogenous GST-Cdc42-GDP. To investigate if the interaction of Cav-1 with Cdc42 required an islet cell-specific factor, Cav-1 was expressed in CHO-K1 cells, and cleared cell lysates were prepared for incubation with GST-Cdc42 beads loaded with GDP or GTPγS (Fig. 3B). Like the binding observed in the MIN6 cell lysates, Cav-1 preferentially associated with the GDP-loaded Cdc42 compared with GTPγS-Cdc42 (Fig. 3B, lanes 2 and 3) and failed to bind to the GST beads alone (Fig. 3B, lane 4). These data indicated that Cav-1 preferentially interacted with GDP-bound Cdc42 and that the interaction was not restricted to β-cells.

Cdc42 Interacts with a Conserved Motif Present in Cav-1 and Other GDI Proteins-Since Cav-1 preferentially bound to the GDP form of Cdc42, we assessed its putative role as a GTPase GDI. Cav-1 contains a Ras-binding region within the scaffolding domain (amino acids 82-101), and alignment of this region of Cav-1 with known mammalian Rho and Rab GDIs revealed a 10-amino acid region of high conservation (Fig. 4). Of these 10 residues, all were conserved among Cav-1 sequences in mice, rats, canines, chickens, and humans. More than 45% of the residues shared identity with a similar region in sequence alignments with the murine skeletal muscle isoform of RabGDI-2 (mGDI-2), murine Rab GDI-β, and porcine Rab GDI-2 (shaded). In the other 5-6 residues of this region, there remained ~27% sequence similarity (conservative substitution). A core sequence of 6 residues is overall highly conserved and was defined by the consensus motif FT-VTΦY, where Φ represents an arginine or lysine residue.

To determine if the binding interaction between Cdc42-GDP and Cav-1 required these conserved residues of Cav-1, GST-Cdc42 loaded with GDP or GTPγS was combined with lysates prepared from CHO-K1 cells overexpressing Myc-tagged wild-type (Cav-Wt) or mutant (Cav-Mut, point mutations F92A and V94A) forms of Cav-1 (34). The Myc-Cav-Wt protein bound preferentially with the GDP-bound GST-Cdc42 (Fig. 5, lanes 1 and 5). In contrast, Myc-Cav-Mut failed to bind to GST-Cdc42-GDP or -GTPγS (Fig. 5, lanes 2 and 6). Neither Cav-Wt nor Cav-Mut bound to GST beads alone (Fig. 5, lanes 3 and 4). The lack of Cav-Mut binding to GST-Cdc42-GDP was not due to reduced protein expression, since Cav-Wt and Cav-Mut proteins were equivalently expressed in lysates (Fig. 5, lanes 7 and 8). These data suggested that the interaction between Cdc42-GDP and Cav-1 was mediated by the consensus motif present in the Ras-binding/scaffolding domain of Cav-1.

In addition to binding to GDP-bound forms of GTPases, GDIs form high-affinity complexes with the geranylgeranyl membrane-targeting moiety present at the COOH terminus of the GTPase (for reviews, see Refs. 18, 49, and 50). In MIN6 cells, the geranylgeranyltransferase inhibitor GGTI-2147 was found to decrease the association between Cav-1 and Cdc42 by 22 ± 2% (p < 0.05). This decrease is consistent with that of other reports showing similar functional effects of this inhibitor (51,52). These data suggested that Cav-1 more efficiently interacted with the geranylgeranylated form of Cdc42, in the same manner that GDIs form high-affinity complexes with the lipid moieties of GTPases.

Cav-1 Directly Interacts with Cdc42-GDI proteins interact directly with their GTPases. To determine whether the interaction between Cav-1 and Cdc42 was direct or indirect, recombinant proteins were used in in vitro binding assays.

Cav-1 Depletion Leads to Increased Basal Secretion-Functionally, GDIs regulate access of GTPases to exchange factors and effectors present at sites of activation in specific membrane compartments.

FIGURE 2. Immunoprecipitated proteins were separated on 12% SDS-PAGE and transferred to PVDF for immunoblotting. C, optical density scanning quantitation of three independent sets of PM fractions immunoblotted for Cav-1. Data were normalized to unstimulated = 1 for each fraction per experiment and are shown as means ± S.E. *, p < 0.05 versus unstimulated using unpaired Student's t test.

FIGURE 3. Caveolin-1 interacts preferentially with GDP-bound Cdc42. GST-Cdc42 linked to Sepharose beads was preloaded with GDP or GTPγS and incubated with MIN6 cell lysates prepared from cells stimulated with glucose for 3 min (A) or with CHO-K1 detergent cell lysates prepared from cells expressing recombinant Cav-Wt protein (B). GST protein alone linked to Sepharose beads was used as a control for specificity of binding. Beads were washed, and proteins were eluted for resolution on 12% SDS-PAGE and transferred to PVDF for immunoblotting (IB) with Cav-1 and GST antibodies. Data shown are representative of 3-5 independent experiments.

FIGURE 4. Amino acid sequence comparison between Cav-1 and various mammalian forms of GDI proteins. GenBank™ accession numbers for Cav-1 (NM_001003296), mouse skeletal muscle GDI-2 (Rab mGDI-2, AAB16908), Rab GDI-β (L36314), and GDI-2 (NM_001001643) are indicated. Amino acid residue numbers are listed to the right of each sequence, and the conserved residues of the binding motif are shaded. Φ, basic amino acid Arg or Lys.
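The consensus motif defined above (and shown in Fig. 4) can be scanned for in a protein sequence with a regular expression. A small sketch follows; it reads the hyphen in FT-VTΦY as typographic, so the 6-residue core becomes FTVT[RK]Y (an assumption on our part), and it uses the Cav-1 scaffolding-domain sequence for residues 82-101.

# Scan for the GDI consensus motif in the Cav-1 scaffolding domain.
import re

MOTIF = re.compile(r"FTVT[RK]Y")    # Phi = Arg or Lys, per the text

scaffolding_wt = "DGIWKASFTTFTVTKYWFYR"                     # human Cav-1 residues 82-101
scaffolding_mut = scaffolding_wt.replace("FTV", "ATA", 1)   # crude mimic of F92A/V94A

for name, seq in [("Cav-Wt", scaffolding_wt), ("Cav-Mut", scaffolding_mut)]:
    match = MOTIF.search(seq)
    print(name, "->", match.group(0) if match else "no motif")
# Cav-Wt -> FTVTKY; Cav-Mut -> no motif, mirroring the loss of Cdc42 binding.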
To evaluate the functional requirement for Cav-1 in regulating glucose-stimulated insulin secretion, we generated two siRNAs directed against different regions of the canine Cav-1 cDNA in a plasmid delivery system (pSilencer1.0; Ambion). Initially, CHO-K1 cells were electroporated with both Cav-1 siRNAs to test for effectiveness in a high expression transfection system. Both Cav-1 siRNAs resulted in 90 -95% depletion of endogenous Cav-1 (data not shown). When electroporated into MIN6 cells, of which ϳ50% of cells are transfected on average, we observed ϳ50% depletion of endogenous Cav-1 when compared with control siRNA-transfected cells (Fig. 7A), whereas levels of the SNARE protein SNAP-25 were unaffected. Functionally, MIN6 cells expressing Cav-1 siRNA released 40% more insulin in the absence of secretagogue compared with cells expressing control siRNA (Fig. 7B). Considering that the basal rate was elevated by ϳ40% in a cell population where only 50% of cells had Cav-1 depleted by the siRNA, this increase in basal secretion is very high. Control siRNA-expressing cells exhibited a 2-fold increase in insulin release within 10 min of glucose stimulation, consistent with other reports using the MIN6 cell line (35). However, cells expressing the Cav-1 siRNA showed no increase in secretion beyond basal secretion level release. Cellular insulin content was unaffected in cells depleted of Cav-1 by siRNA, under either unstimulated or glucose-stimulated conditions (data not shown). Thus, these data demonstrated that siRNA-mediated depletion of Cav-1 resulted in significant elevation in basal insulin secretion with no significant responsiveness to glucose within 10 min, suggesting that Cav-1 plays a role in maintenance of basal secretion and may be required for glucose-induced initiation of secretion. To determine if the requirement for Cav-1 in the maintenance of basal secretion was related to its ability to interact with Cdc42 through the conserved FT-VT⌽Y motif, Cav-1 levels were restored in Cav-1 siRNA-depleted cells by transfection of either Cav-Wt or Cav-Mut DNA. MIN6 cells were simultaneously co-transfected with the human proinsulin cDNA, since it is immunologically distinct from the mouse C-peptide secreted from the MIN6 cells and is used to detect secretion specifically from transfected cells (53,54). In the human C-peptide assay system, basal release was significantly elevated in Cav-1 siRNA-transfected cells relative to control cells (Fig. 7C, bars 1 and 2). Although the release was only 10% higher than control in this system, rather than the 40% observed in the electroporated cells, the elevation of basal level with the Cav-1 siRNA was statistically significant and observed in all experiments. This discrepancy may have resulted from dilution of the effect coming from some cells in the population not having taken up all three types of DNA during the transfection. Consistent with a requirement for Cav-1 in maintenance of basal insulin release, replenishment of Cav-Wt expression in Cav-1depleted cells normalized basal secretion down to the level exhibited by control cells (Fig. 7C, bars 3 and 4). However, Cav-Mut expression in Cav-1-depleted cells failed to normalize basal secretion in Cav-1-depleted cells (Fig. 7C, bars 5 and 6). This functional difference was not the result of differential cellular localization of Cav-Wt versus Cav-Mut, since both were found localized to both intracellular membranes and plasma membrane in an identical manner (Fig. 7D). 
Moreover, the functional difference was not attributed to differences in Cav-Wt and Cav-Mut expression levels, since both were found to be expressed in equivalent abundances under control conditions and both showed similar depletion by Cav-1 siRNA (Fig. 7E). Therefore, these data suggest that the interaction of Cdc42 with the conserved FT-VT⌽Y motif of Cav-1 mediates basal secretion. To determine whether the depletion of Cav-1 from primary islets would similarly result in increased basal secretion, freshly isolated wild-type mouse islets were transduced with Cav-1 (siCav-1-Ad) or control (siCon-Ad) siRNA-expressing adenoviral particles (packaged with enhanced green fluorescent protein for positive identification of transduced islets). 48 h following transduction, the green fluorescent protein-expressing islets were identified and hand-picked into batches of 10 for analysis of insulin secretion under static culture conditions. Islets transduced with siCav-1-Ad showed a 1.9-fold increase in basal secretion compared with siCon-Ad islets, whereas insulin content remained similar in siCon-and siCav-1-expressing islets (Fig. 8). These data are supportive of a potential physiological role for Cav-1 in the islet to maintain low levels of secretion in the absence of appropriate stimuli (basal conditions). Cav-1 Maintains Cdc42 in the Inactivated State-Since GDIs preferentially interact with the GDPbound form of the GTPase and maintain the GTPase in the inactive state, we next questioned whether the depletion of Cav-1 would result in the inappropriate activation of Cdc42 under basal conditions. MIN6 cells were transduced with the siCon-Ad and siCav-1-Ad adenoviral particles (MOI ϭ 100), stimulated with glucose for 3 min, or left unstimulated and then fractionated to examine Cdc42 activation in the PM compartment. Cdc42 activation assays were performed using the various PM fractions (Fig. 9A). PM fractions prepared from cells transduced with siCon-Ad showed low basal level Cdc42 activation, which increased upon stimulation with glucose for 3 min (Fig. 9B). In contrast, cells transduced with siCav-1-Ad showed 40% higher levels of basal Cdc42 activation, and glucose failed to elicit an increase in Cdc42 activation beyond basal levels. These findings are consistent with the functional data showing a 40% increase in basal levels of insulin secretion from siCavdepleted MIN6 cells (Fig. 7B). Taken together, these data show that Cav-1 is required to maintain Cdc42 in its inactive state under basal conditions and provide further support for the characterization of Cav-1 as a GDI for Cdc42. DISCUSSION In this report, we have presented data that suggest a novel role for Cav-1 as a link between Cdc42, SNARE proteins, and caveolae in insulin granule exocytosis. Mechanistically, under basal conditions, Cav-1 functions as a GDI by binding to the inactive GDP-bound Cdc42 present on VAMP2-bound insulin granules localized near the plasma membrane in close proximity to Syntaxin 1A fusion centers, restricting fusion of granules. Upon glucose stimulation, Cdc42 becomes activated and dissociates from Cav-1, allowing access of Cdc42-VAMP2-bound granules to interact with caveolae-localized Syntaxin 1A fusion centers. Importantly, although it has been previously reported that Cav-1 can associate with Cdc42 (55), our report is the first to FIGURE 7. Cav-1 depletion by siRNA-mediated knockdown results in increased basal secretion. 
MIN6 cells were electroporated with pSilencer1.0-control or Cav-1 siRNA plasmid DNAs, and after a 48-h incubation, cells were incubated in glucose-free MKRBB for 2 h. Cells were subsequently left unstimulated or were stimulated with 20 mM glucose for 10 min, after which buffer was collected and detergent lysates were prepared. A, immunoblot (IB) analysis of Cav-1 protein abundance (SNAP-25 was assayed as a control for siRNA specificity). B, insulin released into the buffer was assayed by radioimmunoassay, and data were normalized to insulin content for each experiment. Data represent 4 -6 independent sets of cells; p Ͻ 0.05 versus control siRNA. C, restoration of basal secretion by expression of Cav-Wt but not Cav-Mut in Cav-1-depleted MIN6 cells. MIN6 cells were transiently co-transfected with either control or Cav-1 siRNA plus either Cav-Wt, Cav-Mut, or vector DNA. All cells were also co-transfected with human proinsulin DNA as a reporter of granule secretion specifically from transfectable cells. Human C-peptide released into the MKRBB after a 2-h incubation was measured by RIA. Data represent the mean Ϯ S.E. from four independent experiments (normalized to control siRNA ϭ 100 for each DNA construct in each experiment) *, p Ͻ 0.05 versus control siRNA. D, protein localization of wild-type or mutant forms of recombinant caveolin-1 in MIN6 cells. MIN6 cells were electroporated with Cav-Wt or Cav-Mut and incubated for 48 h, followed by fixation and permeabilization for immunostaining with anti-Myc antibody. Confocal immunofluorescent images shown were taken using a Zeiss 510 with a ϫ100 objective and a ϫ3 zoom. E, protein expression of wild type or mutant forms of recombinant Cav-1 in control or Cav-1depleted CHO-K1 cells. CHO-K1 cells were co-electroporated with control or Cav-1 siRNA plus either Cav-Wt or Cav-Mut. Detergent cell lysates were prepared, and protein was resolved on 12% SDS-PAGE for immunodetection of Myc-tagged recombinant (upper band) and endogenous (lower band) caveolin-1 proteins. Data are representative of three independent electroporation experiments. Cav-1 Functions as a Novel CDC42 GDI in Pancreatic ␤-Cells document that Cav-1 plays a novel role as a GDI for Cdc42 and participates directly in the regulation of insulin exocytosis. Ras-related proteins are usually found in the inactive GDPbound state in resting cells, although GTP is in a higher concentration in the cytosolic compartment. In pancreatic ␤-cells, a mechanism must exist to keep Cdc42 inactive until activation is required, and we propose that this mechanism is mediated through the GDI activities of Cav-1. Cav-1 shows sequence homology with several GDIs. Some of the defining features of GDIs are to 1) maintain Rho/Rab GTPases in the inactive conformation, 2) prevent the biological response of activation, and 3) bind in a direct manner to the GTPase. We showed that Cav-1 fits these criteria for a GDI: 1) Cav-1 preferentially interacted with the GDP-bound form of GST-Cdc42; 2) siRNA-mediated depletion of Cav-1 resulted in dysregulated insulin release from isolated mouse islets and clonal ␤-cells; and 3) Cav-1 bound directly to Cdc42. Whereas most GDI-GTPase complexes are found in the cytosol (for a review, see Ref. 56), Cav-1 is membrane-localized. However, RhoGDI-3 is also noncytosolic and found to be in the detergent-resistant subcellular membrane fraction (57). Another Cdc42 GDI has been shown to exist in ␤-cells but is cytosolic (58). 
Thus, Cav-1 might function specifically with the plasmalemmal pool of Cdc42 on granules, whereas the other GDI mediates Cdc42 activities in the cytosol. Other activities held by GDIs are to prevent activation and modulate cycling between compartments. We showed that depletion of Cav-1 resulted in the inappropriate activation of Cdc42 in the plasma membrane compartment under basal conditions, suggesting that Cav-1 is needed to prevent inappropriate Cdc42 activation in the absence of relevant stimuli. The discovery that glucose-stimulated activation of Cdc42 correlates with its dissociation from Cav-1 raises the issue of whether Cav-1 is an upstream signal for Cdc42 activation. One mechanism by which Cav-1 might function upstream of Cdc42 would be that Cav-1 undergoes tyrosine phosphorylation in response to glucose stimulation, which leads to Cdc42 activation. For example, DerMardirossian et al. (59) showed that phosphorylation of RhoGDI by Pak-1 resulted in selective release of Rac1 from the GDI complex, leading to interaction with guanine nucleotide exchange factors and activation. We have recently shown that Pak-1 is in MIN6 cells and interacts with Cdc42 (16) and therefore could potentially mediate the interaction between Cav-1 and Cdc42. However, Rho and RabGDIs have been shown to be regulated by both phosphorylation and dephosphorylation such that phosphorylation or dephosphorylation of the GDI modulates its interaction with GTPases. For example, it has been shown that RhoGDI is constitutively phosphorylated in resting neutrophils and that dephosphorylation of RhoGDI results in decreased affinity for RhoA (60). Conversely, phosphorylation at Tyr 249 of Rab GDI-2 is required for membrane release of the GTPase Rab in 3T3-LI adipocytes (61). Tyrosine-phosphorylated Cav-1 is reported to cluster/dimerize more than nonphosphorylated Cav-1 (62). However, it will be important to determine if the phosphorylation of Cav-1 is induced by glucose and if it is required for Cdc42 activation to place it in this pathway upstream of Cdc42. Alternatively, conversion of Cdc42 to its GTP-bound state may be the trigger for dissociation of Cav-1, and Cav-1 phosphorylation may occur as a downstream event. It has also been reported that Cdc42 can be phosphorylated in response to stimuli (63), and thus Cdc42 phosphorylation may be part of the dissociation mechanism from Cav-1. A mechanism involving glucose-induced modification of Cdc42 rather that modification of Cav-1 would fit well with our data that showed that Cav-1 present in lysates prepared from glucose-FIGURE 8. Cav-1 depletion from isolated islets results in increased basal insulin secretion. Islets isolated from wild-type C57BL/6J mice were immediately transduced at an MOI of 100 with adenoviral particles packaged with EGFP encoding siControl (siCon-Ad ) or siCav-1 (siCav-1-Ad ) for 1 h. Islets were cultured for 48 h followed by insulin secretion analysis. Islets showing green fluorescent protein fluorescence were hand-picked into groups of 10 and incubated in KRBH buffer containing 2.8 mM glucose for 4 h. Insulin released into KRBH buffer was measured by radioimmunoassay. Data represent the mean Ϯ S.E. from three independent experiments (normalized to basal ϭ 1 for each experiment). *, p Ͻ 0.05 versus siCon-Ad. FIGURE 9. Cav-1 depletion results in inappropriate Cdc42 activation under basal conditions. 
A, MIN6 cells were transduced at an MOI of 100 with adenoviral particles packaged with EGFP encoding siControl (siCon-Ad) or siCav-1 (siCav-1-Ad) for 1 h and cultured for 48 h. MIN6 cells were left unstimulated or stimulated with glucose (20 mM) for 3 min followed by subcellular fractionation. PM fractions were subjected to Cdc42 activation assays, and proteins were separated on 12% SDS-PAGE and transferred to PVDF for immunoblotting (IB). B, optical density scanning quantitation of three independent sets of PM fractions immunoblotted for Cav-1 and GST. Data were normalized to unstimulated ϭ 1 for each fraction per experiment and are shown as means Ϯ S.E. *, p Ͻ 0.05 versus unstimulated using unpaired Student's t test. Data are representative of three independent sets of fractions. stimulated MIN6 cells was capable of binding well to the exogenous GST-Cdc42-GDP. Investigations exploring the mechanism of Cdc42 dissociation from Cav-1 and association with guanine nucleotide exchange factors in islet ␤-cells are under way in our laboratory. Cav-1 depletion mediated by siRNA expression revealed that Cav-1 functionally restricts secretion in the absence of appropriate stimuli in both isolated islets and MIN6 cells. We also showed that glucose failed to increase release of insulin within 10 min in MIN6 cells. However, glucose stimulation for 30 min did elicit a slight increase in secretion (data not shown), suggesting that Cav-1 either functions in early secretory events only or that Cav-1 depletion slowed the overall rate of secretion. Since glucose stimulated a dissociation of Cdc42 from Cav-1 within 3 min, we focused upon the role of Cav-1 during basal and early events in secretion. Cav-1 has also been reported to act as a negative regulator of numerous signaling molecules, including mitogen-activated protein kinase (64), nitric oxide and eNOS (65,66), G proteins, and growth factor receptors (67)(68)(69). In addition, Cav-1 inhibits insulin action in rat adipose cells through a reduction in the recruitment of GLUT4 to the cell surface, a process that is VAMP2-dependent and can be mediated by Cdc42 (34,70), suggesting that Cav-1 has many different functions. Thus, the novel role for Cav-1 as a GDI may underlie other signaling events where Cav-1 acts in a restrictive capacity. Whereas Cav-Wt could restore basal secretion to Cav-1-depleted cells, Cav-Mut could not. Since Cav-Wt binds to Cdc42 but Cav-Mut does not, these data suggest that Cav-1 binding to Cdc42 is important for function. More specifically, Cav-1 bound to GDP-Cdc42 through its scaffolding domain, which is consistent with the binding of Cav-1 to the GTPase Ras as reported previously (71). Moreover, we have also identified two critical residues required for this interaction as Phe 92 and Val 94 within the GDI consensus motif of Cav-1, defined here as FT-VT⌽Y. It has also been demonstrated that expression of a truncated form of Cav-1 expressing only residues 82-101 functionally affects the basal GTPase activity and GTP binding of purified heterotrimeric G proteins (71). This is consistent with a model whereby Cav-1 binding negatively regulates GTP binding of Cdc42 and serves to hold Cdc42 in an inactive conformation in the absence of secretagogue. This would restrict access of granules juxtaposed to the membrane surface from inappropriately releasing insulin in the absence of stimuli. 
This concept is consistent with our data showing the interaction between VAMP2-Cdc42 and Cav-1 in the plasma membrane fraction under basal conditions and also with the release of Cav-1 upon glucose stimulation and activation of Cdc42. There are conflicting data in the literature as to whether Syntaxin 1A and other SNARE proteins are localized into or juxtaposed to caveolar cholesterol-rich lipid microdomains. For example, two studies report the inclusion of Syntaxin 1A, SNAP-25, and VAMP2 in lipid rafts in pancreatic β-cells and neurosecretory PC12 cells (21,72). Conversely, Lang et al. (22) suggest that in PC12 cells, syntaxin clusters are distinct from cholesterol-dependent membrane rafts, since they are Triton X-100-soluble and do not co-localize with raft markers. This notion is further supported by atomic force microscopy studies of syntaxin in liposomes, where it was determined that syntaxin is excluded from sphingomyelin-enriched domains in a cholesterol-dependent manner (73). In our hands, immunoprecipitation of Syntaxin 1A resulted in the co-immunoprecipitation of Cav-1 from MIN6 lysates (data not shown), suggesting that the proteins are either in close enough proximity to bind or interact indirectly through a common binding partner. Since Cdc42 binds to Syntaxin 1A through direct interaction with VAMP2, and we show here that Cav-1 binds directly to Cdc42, the interaction between Syntaxin 1A and Cav-1 may be dependent upon interaction of each with the same subset of Cdc42 molecules. Data presented here show that depletion of Cav-1 in islets or MIN6 β-cells resulted in a significant increase in basal level insulin release, which correlated with the inappropriate activation of Cdc42 under basal conditions. However, this finding is in conflict with results obtained from the Cav-1 knock-out mice, which were reported to have normal fasting plasma insulin levels and develop hyperinsulinemia when placed on a high fat diet (74). However, plasma insulin levels are not an exclusive readout of insulin secretion; rather, these levels represent the net result of insulin secretion plus insulin action. Moreover, fasting plasma insulin levels in mice are highly variable, and thus the potential to detect any increase in fasting insulin levels may have been obscured by the variability. Thus, to examine the requirement of Cav-1 in insulin secretion, the islet (specifically, the β-cells of the islet) must be studied. To do this, we isolated islets from Cav-1 knock-out mice (Jackson Laboratories). However, we found that islet size from the knock-out mice was significantly smaller than in control mice and that the amount of insulin released was 10-fold less than that released by the C57BL/6 mouse islets tested in parallel studies. The background strain of the Cav-1 knock-out mice may have impacted the islet morphology and function as well as the level of plasma insulin. The Cav-1 knock-out mice (Jackson Laboratories) are primarily a mix of Sv129 and C57BL/6 with a minor contribution from the SJL background (contributed from the originating ES cell line). However, it has been shown that Sv129T2 mice have higher fasting plasma glucose levels and lower fasting insulin levels compared with other strains, such as C57BL/6, and that Sv129T2 mice were glucose-intolerant and secreted significantly less insulin in response to glucose compared with C57BL/6 mice (75).
Another possibility may be that the Cav-1 knock-out mice underwent an adaptive response to the chronic absence of Cav-1, which impacted their islet development and/or function. Thus, for these reasons, we chose to acutely deplete Cav-1 using siRNA-mediated depletion from C57BL/6 islets to investigate the physiological importance of Cav-1 in the process of insulin secretion. However, to assess the physiological impact of Cav-1 depletion upon insulin secretion in vivo, the generation of Cav-1 conditional knock-out mice on the C57BL/6 mouse strain background will be required. In conclusion, we report a novel interaction between Cdc42 and a GDI consensus motif FT-VTΦY within the scaffolding domain of Cav-1 and show this interaction to be functionally important for the overall maintenance of regulated insulin secretion. These data provide evidence for a novel function for Cav-1 as a Cdc42 GDI and a possible link between caveolae-localized SNARE fusion centers and Cdc42 targeting of granules to the plasma membrane. Last, this interaction was recapitulated in CHO-K1 cells as well, suggesting that the interaction may be part of a general mechanism for vesicle targeting to the SNARE machinery at the plasma membrane and may not be specific to the islet β-cell.
2018-04-03T04:36:20.179Z
2006-07-14T00:00:00.000
{ "year": 2006, "sha1": "8323fa0d4b04696510342628dc24654f55ebb22d", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/281/28/18961.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "caa23d07e6eabbf7d90a28c292ab1e7cc77c9208", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
266299558
pes2o/s2orc
v3-fos-license
Research on the Threshold Determination Method of the Duffing Chaotic System Based on Improved Permutation Entropy and Poincaré Mapping
The transition from a chaotic to a periodic state in the Duffing chaotic oscillator detection system is crucial in detecting weak signals. However, accurately determining the critical threshold for this transition remains a challenging problem. Traditional methods such as Melnikov theory, the Poincaré section quantitative discrimination method, and experimental analyses based on phase diagram segmentation have limitations in accuracy and efficiency. In addition, they require large amounts of computational data and complex algorithms while having slow convergence. Improved permutation entropy incorporates signal amplitude information on the basis of permutation entropy and has better noise resistance. According to the characteristics of improved permutation entropy, a threshold determination method for the Duffing chaotic oscillator detection system based on improved permutation entropy (IPE) and Poincaré mapping (PM) is proposed. This new metric is called Poincaré mapping improved permutation entropy (PMIPE). The simulation results and the verification results on real underwater acoustic signals indicate that our proposed method outperforms traditional methods in terms of accuracy, simplicity, and stability.
Introduction
In recent years, the study of signal detection methods based on the Duffing chaotic oscillator has gained significant attention in the field of weak signal detection [1][2][3][4]. This approach leverages the sensitivity of the chaotic oscillator to extremely weak periodic signals and its immunity to noise. The Duffing oscillator detection system can be expressed as d²x(t)/dt² + μ·dx(t)/dt + ax + bx³ = r cos(ωt), where μ, a, b, r are parameters that influence the characteristics of the system. Prior to utilizing the Duffing system for target signal detection, a driving force r cos(ωt) with the same frequency as the target signal is preset in the system, and the amplitude r of the driving force is adjusted to bring the system to the critical chaotic state. Let s(t) be the signal to be detected; when s(t) is input into the Duffing system, the equation transforms to d²x(t)/dt² + μ·dx(t)/dt + ax + bx³ = r cos(ωt) + s(t). The presence of the target signal is established by examining the system's state before and after the signal is introduced. If the system remains in a chaotic state, it signifies that the signal under test lacks a target signal. On the other hand, if the system transitions to a periodic state, it indicates the existence of the target signal [5]. The size of the driving force added to the Duffing chaotic oscillator system directly determines whether the system is in a chaotic or periodic state. Thus, the threshold of the driving force in the Duffing chaotic oscillator is a crucial parameter that significantly impacts the efficiency of detecting weak signals, and its solution is an essential prerequisite for chaotic oscillator detection [6][7][8][9]. However, as will be discussed in Section 2, the threshold that leads the system to the critical chaotic state varies owing to the influence of noise and the frequency to be detected. Hence, it is of great importance to determine this parameter quickly and accurately. It is noteworthy that the issue of determining the threshold for the Duffing system is not equivalent to the problem of chaos identification. Near the threshold, the system undergoes a transition from a critical chaotic state
to a periodic state and remains invariant thereafter. Therefore, the key to confirming the threshold lies in identifying when the system state undergoes a transition and remains stable.
Several scholars have investigated threshold determination methods, including Melnikov analysis [10], the phase diagram method [11], the power spectrum method [12], the Poincaré section method [13][14][15], and the 0-1 test [16]. However, these methods have their limitations. The Melnikov theory analysis method has a low solution accuracy and a high implementation difficulty, while the phase diagram method has a poor computational accuracy and lacks adaptability. The power spectrum analysis method is unable to distinguish effectively between periodic, random-noise, and chaotic signals, and the Poincaré section method is subjective and not suitable for automatic recognition by computers. According to [17], the weaknesses of the abovementioned methods are listed in Table 1.
Table 1. Weaknesses of traditional threshold determination methods.
Method | Weakness
Melnikov analysis | High implementation difficulty
Phase diagram method | Poor computational accuracy
Power spectrum method | Unable to distinguish between periodic and chaotic dynamics
Poincaré section method | Subjective
0-1 test | Poor computational accuracy
Entropy is commonly used to measure the complexity of a signal, with higher entropy values indicating greater chaos [18,19]. There are several popular entropy algorithms, including approximate entropy [20,21], sample entropy [22], and permutation entropy [23,24]. However, these algorithms have limitations, such as the high computational complexity of the sample entropy algorithm and the low signal resolution of the permutation entropy algorithm [25][26][27]. To address these issues, Chen et al. proposed an Improved Permutation Entropy (IPE) algorithm in 2019, which considers both the amplitude and order information of signals and has good anti-noise performance [28].
In 2020, Huang Ze-hui proposed a threshold determination method based on multiscale entropy to overcome the limitations of the Duffing system's existing methods [29]. This approach leverages the differences in the multi-scale entropy of the Duffing system in different states. However, this method requires repeated calculation of the sequence's entropy value to find the most complex sub-sequence and its corresponding multiscale entropy value, resulting in significant computational complexity, so it is not suitable for real-time detection. This paper proposes a method for determining the threshold and state of a Duffing chaotic oscillator detection system based on Improved Permutation Entropy and Poincaré section theory, referred to as the PMIPE approach. The goal is to address the problem of threshold determination in Duffing systems. This method involves adding different driving forces to a Duffing oscillator system with predetermined frequencies and parameters in a weak signal detection system, using the IPE algorithm to calculate the complexity of the Poincaré section sequence under different driving forces, and comparing entropy values to determine whether the system is in a periodic or chaotic state. This process enables the determination of the threshold of the Duffing chaotic oscillator detection system. Compared to the method based on multi-scale entropy, this approach is simpler, more efficient, and suitable for real-time signal detection. Unlike the multi-scale entropy method, it does not require the calculation of the entropy value for the entire sequence or the determination of the maximum entropy value of the system.
Duffing Oscillator System
The Duffing oscillator, a widely studied chaotic oscillator renowned for its intricate dynamics, is defined by a nonlinear term in its equation. Initially derived from the nonlinear dynamical equation describing the forced oscillations of a damped pendulum, the Duffing oscillator model is the result of considering a damped and periodically driven elastic system with high-order powers disregarded. The equation of motion for the undamped and undriven elastic system is given as:
d²x(t)/dt² + ax + bx³ = 0 (1)
After the inclusion of damping and periodic driving forces, the Duffing equation is obtained:
d²x(t)/dt² + μ·dx(t)/dt + ax + bx³ = r cos(ωt) (2)
Here, μ represents the damping ratio, and ax + bx³ denotes the nonlinear term, with a and b being positive real-number system parameters. The amplitude of the driving force is denoted by r, while ω refers to the circular frequency of the periodic driving force. In this study, μ = 0.5, a = 1, and b = 1 were used. Moreover, all experiments in this study were conducted on a computer with an Intel(R) Core(TM) i7-10510U CPU.
By adjusting the amplitude of the driving force, the Duffing system traverses a range of states, including the initial state, homoclinic orbit state, bifurcation state, chaotic state, and periodic state. Equation (2) is a simple deterministic equation with a unique solution x(t), but the presence of the nonlinear term gives rise to complex dynamical properties, such as chaotic behavior. Numerical methods, such as the fourth-order Runge-Kutta algorithm, are required to simulate the nonlinear differential equation and obtain a solution.
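To make the numerical solution step concrete, here is a minimal Python sketch that integrates Equation (2) with a fixed-step fourth-order Runge-Kutta scheme. The parameter values follow the text (μ = 0.5, a = b = 1), while the function names, step size, and initial condition are our own illustrative choices rather than the authors' code.

```python
import numpy as np

def duffing_rhs(t, state, mu=0.5, a=1.0, b=1.0, r=0.826, omega=2*np.pi*10):
    """Right-hand side of Equation (2) rewritten as a first-order system."""
    x, v = state
    dv = -mu*v - a*x - b*x**3 + r*np.cos(omega*t)
    return np.array([v, dv])

def rk4(f, state0, t0, t1, dt):
    """Fixed-step fourth-order Runge-Kutta integration of dy/dt = f(t, y)."""
    n = int((t1 - t0)/dt)
    ts = t0 + dt*np.arange(n + 1)
    states = np.empty((n + 1, len(state0)))
    states[0] = state0
    for i in range(n):
        t, y = ts[i], states[i]
        k1 = f(t, y)
        k2 = f(t + dt/2, y + dt/2*k1)
        k3 = f(t + dt/2, y + dt/2*k2)
        k4 = f(t + dt, y + dt*k3)
        states[i + 1] = y + dt/6*(k1 + 2*k2 + 2*k3 + k4)
    return ts, states

# Example: 10 s of the driven oscillator at a 0.001 s step.
ts, traj = rk4(duffing_rhs, np.array([0.0, 0.0]), 0.0, 10.0, 1e-3)
```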
Impact of Noise on Duffing Oscillator System
In weak signal detection, target signals are often accompanied by noise, which can affect the system itself. By adding a noise signal to Equation (2), the Duffing equation can be written as follows:
d²x(t)/dt² + μ·dx(t)/dt + ax + bx³ = r cos(ωt) + Δn(t) (3)
where Δn(t) denotes the noise signal with a variance of Δ. Noise signals with different variances were added to the Duffing oscillator system in Equation (3); with ω = 2π × 20, the system outcomes are shown in Figures 1-3. It is seen that the variance of the noise directly influences the dynamics of the system. When r = 0.826 and Δ = 0.001, the system is in a periodic state. As Δ increases to 0.04, the system is in a chaotic state. The system goes back to a periodic state if the driving force is increased to 0.827. Therefore, the system requires a larger driving-force value to enter the periodic state when there is more noise. As noise increases, the degree of disorder in the system increases. To transition from disorder to order, the ordered components within the system must be strengthened to overcome the increasing disorder. Furthermore, the enhancement of external noise leads to an increase in the chaotic critical threshold.
The results above indicate a slight impact of noise on the threshold, with a marginal increase observed as the noise level rises. It is important to note that considering noise in threshold calculations is applicable only when the noise is stationary and its characteristics are known. This involves calculating the threshold using Equation (3). However, if the noise is non-stationary or its characteristics are unknown, it is advisable to disregard the influence of noise in threshold calculations, and the threshold can be determined using Equation (2).
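To mimic Equation (3) in simulation, one crude option is to draw an independent Gaussian sample for the noise term at every right-hand-side evaluation, as sketched below. Strictly speaking, white noise calls for a stochastic scheme such as Euler-Maruyama with sqrt(dt) scaling rather than plain RK4, so this should be read as a rough approximation only; the variance follows the Δ = 0.04 case above, and the function name is our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_duffing_rhs(t, state, mu=0.5, a=1.0, b=1.0,
                      r=0.826, omega=2*np.pi*20, var=0.04):
    """Equation (3) with the noise term drawn as a fresh Gaussian sample.

    Caveat: feeding white noise through RK4 this way is only a rough
    approximation; a proper stochastic treatment uses an Euler-Maruyama
    step with sqrt(dt) scaling of the noise increment.
    """
    x, v = state
    noise = rng.normal(0.0, np.sqrt(var))
    dv = -mu*v - a*x - b*x**3 + r*np.cos(omega*t) + noise
    return np.array([v, dv])
```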
Poincaré Mapping
The Poincaré section is a geometric method proposed by the renowned physicist Poincaré in the late 19th century. It selects a suitable section in the multi-dimensional phase space and analyzes the properties of nonlinear systems by observing the distribution of intersection points between the section and the system trajectory. This method replaces the N-order continuous system flow with an (N-1)-order discrete system and reduces the system order while ensuring that the limit set of the discrete system corresponds to the limit set of the continuous system flow.
In the n-dimensional phase space (x_1, dx_1/dt, x_2, dx_2/dt, ..., x_n, dx_n/dt), a section is chosen appropriately, and a Poincaré section is defined by fixing a pair of conjugate variables (x_i, dx_i/dt) at a certain value on this section. As the Poincaré section intersects with the system trajectory, it maps the continuous trajectory in the original phase space to a series of discrete points on the section, represented as P_{n+1} = TP_n (with T being the Poincaré map). The Poincaré section exhibits the following patterns: (1) when there is a fixed point or a few discrete points on the Poincaré section, the motion trajectory is periodic; (2) when the Poincaré section consists of dense points with self-similar structures, the motion trajectory is chaotic.
Figure 4 illustrates the intersection points between the Duffing system and the Poincaré section in different states. The three-dimensional and two-dimensional plots of the intersection points for the chaotic state are shown in (a), and the corresponding plots for the periodic state are shown in (b). The red dots represent the intersection points between the Poincaré section and the chaotic oscillator. Evidently, when the Duffing system is in a chaotic state, the values of the Poincaré section demonstrate a high degree of randomness. Conversely, when the Duffing system is in a periodic state, the values of the Poincaré section exhibit minimal fluctuations. By definition, entropy measures the complexity of a system, and the entropy value of the motion trajectory is higher in the chaotic state than in the periodic state. Therefore, the entropy value of the Poincaré section can be used to distinguish between chaotic and periodic states of the system.
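For a periodically driven system such as Equation (2), a standard concrete choice of section is the stroboscopic one: sample the trajectory once per driving period T = 2π/ω. The sketch below assumes the trajectory lies on a uniform time grid whose step divides the period evenly; the function name and the transient cutoff are our own illustrative choices.

```python
import numpy as np

def poincare_section(traj, dt, omega, discard=20):
    """Stroboscopic Poincaré section: one sample per forcing period.

    traj    : (N, 2) array of (x, dx/dt) sampled on a uniform grid of step dt
    discard : number of initial periods dropped as transient
    """
    period_steps = int(round(2*np.pi/omega/dt))  # samples per driving period
    pts = traj[::period_steps]                   # one point per period
    return pts[discard:]                         # drop the transient
```

With dt = 0.001 s and ω = 2π × 10, one driving period is exactly 100 samples, so the stroboscopic indexing is exact for the settings used in this paper.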
Improved Permutation Entropy Algorithm
The Improved Permutation Entropy (IPE) algorithm improves upon the traditional Permutation Entropy (PE) algorithm by addressing the issue of missing amplitude information [28]. This algorithm is capable of extracting more information from complex sequences while reducing computational complexity and enhancing signal resolution. The algorithmic flow is as follows:
(1) Normalize the time series {x_1, x_2, ..., x_N} through the cumulative distribution function shown in the following equation, where μ and σ² represent the mean and variance of the time series, respectively:
y_i = ∫_{-∞}^{x_i} (1/√(2πσ²)) e^{-(t-μ)²/(2σ²)} dt
(2) Reconstruct the phase space Y from the normalized sequence {y_1, y_2, ..., y_N} using an embedding dimension m and a time delay τ, so that each row of Y is a delay vector of length m.
(3) Symbolize the first column Y(:, 1) of the phase space Y using the Uniform Quantification Operator (UQO) and calculate the corresponding symbolization result for the first column S(:, 1) of the symbol phase space S. Here, L denotes a predetermined discretization parameter; Δ represents the discrete interval and satisfies Δ = (y_max − y_min)/L, where y_max and y_min represent the maximum and minimum values of the sub-sequence y, respectively.
(4) The corresponding symbolization result S(:, k) for the k-th column Y(:, k) (2 ≤ k ≤ m) of the phase space Y is obtained using a formula based on the floor function, where floor indicates rounding down.
(5) With reference to the symbol patterns defined in the Permutation Entropy algorithm, the improved permutation entropy regards every row of the symbolized phase space S as a "pattern" π_l, 1 ≤ l ≤ L^m, and uses the term Symbol Pattern (SP). Calculate the probability p_l of each SP in the symbol phase space; according to the definition of Shannon entropy, the improved permutation entropy can finally be expressed as:
IPE = −Σ_{l=1}^{L^m} p_l ln(p_l)
When only one element in the probability distribution of the SPs is 1 and the other elements are 0, IPE takes the minimum value 0. When the probability distribution is uniform, IPE takes the maximum value ln(L^m). Therefore, IPE can be normalized. In this study, normalized entropy values were used.
Threshold Determination Method for Duffing System Based on PMIPE
In this paper, a threshold determination method for the Duffing system based on PMIPE is proposed by combining the Poincaré map and IPE. The calculation steps are as follows:
(1) Determine the frequency and other parameters of the Duffing oscillator system based on the signal to be detected by the weak signal detection system.
(2) Impose distinct driving forces on the Duffing oscillator system to induce periodic and chaotic states, respectively.
(3) Calculate the Poincaré section sequences of the Duffing oscillator system in the chaotic and periodic states, correspondingly, to obtain a set of Poincaré section sequences in varied states.
(4) Use the IPE algorithm to calculate the complexity of this set of Poincaré section sequences and obtain the curve of complexity as a function of the driving force.
(5) Using entropy = 0.15 as a critical standard: if entropy < 0.15, the system is considered to be in a stable periodic state; entropy values exceeding 0.15 are defined as non-periodic entropy.
(6) Determine the threshold of the Duffing detection system as the maximum driving force that has a non-periodic entropy.
In practical applications, the signal to be detected typically contains noise. As discussed in Section 2.2, the impact of noise on the determination of the threshold can be neglected if the noise is non-stationary or its characteristics are unknown.
This research approach is illustrated in the flowchart presented in Figure 5.
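A compact sketch of the IPE algorithm's steps (1)-(5) is given below. The normal-CDF normalization and the normalized Shannon entropy follow the description above; the quantization used here (per-vector uniform binning into L levels via Δ = (y_max − y_min)/L) is our reading of the UQO, since the explicit symbolization formulas are not reproduced in the text, so treat those details as assumptions.

```python
import numpy as np
from collections import Counter
from math import erf, sqrt, log

def ipe(x, m=4, tau=1, L=4):
    """Sketch of improved permutation entropy, normalized to [0, 1]."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std() if x.std() > 0 else 1.0
    # Step (1): map each sample through the normal CDF -> values in (0, 1)
    y = np.array([0.5*(1.0 + erf((xi - mu)/(sigma*sqrt(2.0)))) for xi in x])
    # Step (2): delay embedding into an m-dimensional phase space
    n = len(y) - (m - 1)*tau
    Y = np.column_stack([y[i*tau:i*tau + n] for i in range(m)])
    # Steps (3)-(4): uniform quantization of each delay vector into L levels
    ymin = Y.min(axis=1, keepdims=True)
    ymax = Y.max(axis=1, keepdims=True)
    delta = np.where(ymax > ymin, (ymax - ymin)/L, 1.0)
    S = np.minimum(np.floor((Y - ymin)/delta), L - 1).astype(int)
    # Step (5): Shannon entropy of the symbol-pattern distribution,
    # normalized by its maximum ln(L**m)
    counts = Counter(map(tuple, S))
    p = np.array(list(counts.values()), dtype=float)/S.shape[0]
    return float(-(p*np.log(p)).sum()/log(L**m))
```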
Influence of Different Parameters on Improved Permutation Entropy
When utilizing the improved permutation entropy to solve the threshold of the Duffing system, it is necessary to select appropriate parameters, such as the embedding dimension, data length, and time delay, so that the IPE of the Poincaré section sequence of the Duffing system can differentiate between the chaotic and periodic states.
(1) Influence of Embedding Dimension on IPE
To investigate how the embedding dimension m influences the improved permutation entropy, we set the power frequency of the Duffing system to 10 Hz, the sampling interval to 0.001 s, and the sampling time to 10 s. Additionally, we set different power values, r = 0.838 and r = 0.80, to put the Duffing system in the periodic and chaotic states, respectively. We used the IPE algorithm to calculate the Poincaré section sequences of the aforementioned Duffing system, and the results are shown in Figure 6. It can be observed that when the embedding dimension is between 1 and 5, the entropy values exhibit significant disparities. As the embedding dimension increases, the entropy values of both types of Poincaré section sequences diminish, with the entropy value of the chaotic sequence plummeting at a faster rate. Consequently, when computing the Poincaré section entropy value using improved permutation entropy, this investigation suggests selecting an embedding dimension of 1 ≤ m ≤ 5.
(2) Influence of Data Length on IPE
We set the Duffing system as in step (1) of this section and calculated the entropy values of the periodic and chaotic states for 100 to 10,000 Poincaré section points. Figure 7a depicts the influence of diverse data lengths on the IPE algorithm, while Figure 7b offers a local zoom-in. Notably, for data lengths above 500, the entropy values for the periodic and chaotic states attain a stable tendency. Such tendencies facilitate differentiating system states based on entropy values.
(3) Influence of Time Delay on IPE
We set the Duffing system as in step (1) of this section and calculated the entropy values of the periodic and chaotic states for time delays ranging from 1 to 8 points. Figure 8 depicts the impact of the time delay on the IPE algorithm. When the time delay is between 1 and 4 points, the entropy values have significant differences. As the time delay increases, the entropy value of the Poincaré section sequence of the chaotic system also rises, tending toward stability. Comparatively, the entropy value of the periodic sequence experiences faster growth. Notably, prolonged time delays lead to a loss of the periodic sequence's characteristic information. Therefore, this study recommends selecting a small time delay, preferably not exceeding 4 points, when calculating Poincaré section entropy values using improved permutation entropy.
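The embedding-dimension study above can be reproduced along the following lines, assuming the hypothetical duffing_rhs, rk4, poincare_section, and ipe helpers from the earlier sketches are in scope; r = 0.838 and r = 0.80 are the periodic and chaotic settings quoted in the text, while the dimension range and simulation length are illustrative.

```python
import numpy as np

def sweep_embedding(freq=10.0, dims=range(1, 9), dt=1e-3, T=10.0):
    """IPE vs embedding dimension for periodic and chaotic drive settings."""
    omega = 2*np.pi*freq
    sections = {}
    for label, r in (("periodic", 0.838), ("chaotic", 0.80)):
        rhs = lambda t, s, r=r: duffing_rhs(t, s, r=r, omega=omega)
        _, traj = rk4(rhs, np.array([0.0, 0.0]), 0.0, T, dt)
        sections[label] = poincare_section(traj, dt, omega)[:, 0]
    # Periodic entropies should stay low; chaotic ones should sit higher
    # for small m and fall off as m grows, as in Figure 6.
    return [(m, ipe(sections["periodic"], m=m), ipe(sections["chaotic"], m=m))
            for m in dims]
```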
Simulation of Threshold Determination for Duffing Oscillator System with Different Frequencies and Driving Forces
Sinusoidal driving forces with varying frequencies and amplitudes were applied to the system, with a sampling time ranging from 10 to 30 s and a sampling interval of 0.001 s. The system was subjected to driving forces ranging from 0.824 to 0.827 with an interval of 0.001. Notably, we also consider a larger range of r; please see Appendix A for detailed results. Based on the conclusions in Section 4.1, the embedding dimension m was set to 4, the time delay was set to 1 point, and the data length was greater than 500. The improved permutation entropy of the system is analyzed with respect to the amplitude of a 10 Hz sinusoidal signal, as shown in Figure 9. The analysis shows that the improved permutation entropy decreases and then stabilizes after the driving force amplitude exceeds 0.8257. The system state when r = 0.8257 is shown in Figure 10. The critical threshold of the system is determined to be r = 0.8257 using the IPE-Poincaré method. Figure 11 shows that the system is in a periodic state when r = 0.8258, which proves that the threshold should be 0.8257.
The improved permutation entropy of a 20 Hz sinusoidal signal detection system is analyzed with respect to the driving force amplitude, as shown in Figure 12. The analysis indicates that the improved permutation entropy decreases and then stabilizes after the driving force amplitude exceeds 0.8254. The system state when r = 0.8254 is shown in Figure 13, where a chaotic state can easily be found. Hence, the critical threshold of the system is determined to be r = 0.8254 using the IPE-Poincaré method. Figure 14 shows that the system is in a periodic state when r = 0.8255, which confirms that the threshold should be 0.8254.
Similarly, the improved permutation entropy of a 100 Hz sinusoidal signal detection system is analyzed with respect to the driving force amplitude, as shown in Figure 15. The analysis shows that the improved permutation entropy decreases and then stabilizes after the driving force amplitude exceeds 0.8248. The system state when r = 0.8248 is shown in Figure 16. Therefore, the critical threshold of the system is determined to be r = 0.8248 using the IPE-Poincaré method. Figure 17 shows that the system is in a periodic state when r = 0.8249, which proves that the threshold should be 0.8248. Table 2 compares the true thresholds with the results obtained from Figures 9-17; it can be seen that, for all three cases mentioned above, our method provides a very accurate evaluation.
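Putting the pieces together, the scans behind Figures 9, 12, and 15 can be sketched as below, again assuming the hypothetical helpers from the earlier sketches are in scope. The entropy < 0.15 rule is the one stated in the method description; the amplitude grid and simulation length here are illustrative choices, not the authors' exact settings.

```python
import numpy as np

def scan_threshold(freq=10.0, entropy_crit=0.15, dt=1e-3, T=30.0):
    """Scan driving-force amplitudes and return the estimated threshold.

    Per step (6) of the method, the threshold is the largest amplitude
    whose Poincaré-section IPE is still non-periodic (>= entropy_crit).
    """
    omega = 2*np.pi*freq
    r_grid = np.arange(0.824, 0.8271, 0.0001)   # illustrative fine grid
    entropies = []
    for r in r_grid:
        rhs = lambda t, s, r=r: duffing_rhs(t, s, r=r, omega=omega)
        _, traj = rk4(rhs, np.array([0.0, 0.0]), 0.0, T, dt)
        section = poincare_section(traj, dt, omega)
        entropies.append(ipe(section[:, 0]))
    entropies = np.asarray(entropies)
    nonperiodic = r_grid[entropies >= entropy_crit]
    threshold = nonperiodic.max() if nonperiodic.size else None
    return threshold, r_grid, entropies
```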
Comparison and Analysis of Different Methods
For the 10 Hz sinusoidal signal detection system, the maximum multi-scale entropy method from the literature [19] and the PMIPE method proposed in this paper were used to calculate the system threshold, with a Duffing sequence length of 400,000. For the maximum multi-scale entropy method, the length of each sub-sequence was set to 30,000, the initial number of chromosomes to six, and the crossover point to a random number. The calculation outcomes are presented in Figure 18. It is noteworthy that both methods can correctly determine the critical threshold of the system. However, the maximum multi-scale entropy method required 573.95 s of computation, while the IPE-Poincaré method took only 30.69 s. Thus, the IPE-Poincaré method significantly reduces the computational complexity, rendering it more suitable for real-time computations. For comparison, we also used the 0-1 test method and Lyapunov exponents to evaluate the threshold; the results are given in Figure 19. The 0-1 test value becomes constant after r = 0.8253, meaning that its threshold evaluation result is 0.8253, which is not very accurate. As for the Lyapunov exponent method, it assigns higher values to chaotic states and smaller values to periodic states; however, it is hard to determine an accurate threshold from it. This may be because of inappropriate parameter selection, as the calculation of Lyapunov exponents is easily influenced by the choice of parameters. Table 3 compares the results of the abovementioned four methods. It can be seen that the proposed method and MSE achieve the highest evaluation accuracy, followed by the 0-1 test and the Lyapunov exponent. Notice that our method takes only 30.69 s to complete this experiment, while all of the other methods require more than 500 s. Hence, our method outperforms traditional algorithms in both accuracy and computational cost.
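For reference, the 0-1 test used in this comparison can be sketched in its common Gottwald-Melbourne median-K form, where K near 1 indicates chaos and K near 0 a periodic state. The number of random frequencies and the n ≤ N/10 cutoff below are conventional choices and may differ from those used in the paper.

```python
import numpy as np

def zero_one_test(phi, n_c=50, seed=1):
    """Median-K variant of the 0-1 test for chaos (a sketch).

    phi : 1-D observable sampled from the system (e.g., Poincaré x-values)
    Returns K in roughly [0, 1]; ~1 suggests chaos, ~0 a periodic state.
    """
    rng = np.random.default_rng(seed)
    phi = np.asarray(phi, dtype=float)
    N = len(phi)
    ns = np.arange(1, N//10 + 1)            # displacements for n <= N/10
    j = np.arange(1, N + 1)
    Ks = []
    for c in rng.uniform(0.1*np.pi, 0.9*np.pi, n_c):
        p = np.cumsum(phi*np.cos(j*c))      # translation variables
        q = np.cumsum(phi*np.sin(j*c))
        M = np.array([np.mean((p[n:] - p[:-n])**2 + (q[n:] - q[:-n])**2)
                      for n in ns])         # mean-square displacement
        Ks.append(np.corrcoef(ns, M)[0, 1]) # growth-rate correlation
    return float(np.median(Ks))
```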
Verification of the Real Underwater Acoustic Signal
To verify the effectiveness of the threshold determination method based on PMIPE in solving real underwater acoustic detection systems, a set of measured ship signals and ambient noise was selected as sample data. The waveform of the measured data is shown in Figure 20, and its frequency-domain waveform is shown in Figure 21. It can be seen that the measured underwater acoustic ship signal contains a sinusoidal component with a frequency of 50.27 Hz. Using the same method as in this paper, we set the frequency of 50.27 Hz in the Duffing oscillator system. The system threshold can be obtained as r = 0.825, as shown in Figure 22.
We adjusted the driving force amplitude r to 0.825 so that the system was in a critical chaotic state, and then we added the real underwater acoustic signal to the detection system. When we added the ship signals to the system, the system phase diagram transitioned from the chaotic state shown in Figure 23a to the periodic state shown in Figure 23b. When we added the ambient noise to the system, the system remained in a chaotic state, as shown in Figure 23c. Comparing Figure 23b,c shows that the system realized the detection of target signals among real underwater acoustic signals. Therefore, the PMIPE method can accurately calculate the system threshold.
Analysis of Anti-Noise Performance of Threshold Determination Methods
For the 20 Hz sine-signal Duffing oscillator detection system, with the driving force amplitude set to r = 0.8254, a same-frequency signal with an amplitude of 0.001 V was added to the system as the detected signal, which put the system in a periodic state. We then added noise signals with different signal-to-noise ratios (SNRs) to the detected signal and calculated the IPE-Poincaré value to assess the change in IPE under different SNRs in both the chaotic and periodic states. The results are shown in Figure 24. It can be observed that when the SNR is greater than 20 dB, noise has little influence on the IPE-Poincaré value. As the SNR decreases, the critical threshold for chaos increases due to the amplification of external noise. Despite this, the system remains in a chaotic state due to the small amplitude of the driving force signal, and it has an entropy value similar to that of a system with only noise signals but no same-frequency signal. Therefore, when the SNR is greater than −20 dB, the periodic state of the system can be effectively determined, achieving the goal of signal detection.
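The SNR sweep can be set up by scaling white Gaussian noise against the power of the detected sinusoid, using SNR(dB) = 10·log10(P_signal/P_noise) with P_signal = A²/2 for a sinusoid of amplitude A. The sketch below only builds the noisy test signal to be injected into the oscillator; the amplitude and frequency follow the experiment described above, and the rest is assumed.

```python
import numpy as np

def noisy_test_signal(snr_db, A=0.001, freq=20.0, fs=1000.0, T=30.0, seed=0):
    """Sinusoid of amplitude A buried in white Gaussian noise at snr_db."""
    rng = np.random.default_rng(seed)
    t = np.arange(0.0, T, 1.0/fs)
    signal = A*np.sin(2*np.pi*freq*t)
    p_signal = A**2/2                        # mean power of the sinusoid
    p_noise = p_signal/10**(snr_db/10)       # noise power for the target SNR
    noise = rng.normal(0.0, np.sqrt(p_noise), t.size)
    return t, signal + noise
```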
Conclusions
The problem addressed in this paper revolves around the challenging task of determining the threshold in the Duffing system. Emphasis is placed on leveraging the substantial distinctions between the Poincaré section values in the periodic and chaotic states of the Duffing system. An improved permutation entropy of the Poincaré section sequence is employed to establish a correlation between the driving force amplitude and the improved permutation entropy. A PMIPE-based threshold determination technique is then introduced, and this algorithm is applied to the detection of sinusoidal signals with different frequencies. A comparative analysis is conducted with the maximum multiscale entropy, 0-1 test, and Lyapunov exponent methods to gauge the efficacy of the proposed technique. The results of our investigation illustrate that the proposed algorithm exhibits remarkable accuracy in determining the system's threshold. Moreover, our method needs much less computational cost than the traditional methods, and it also performs well under noisy conditions. In summary, our study provides an innovative and effective resolution to the intricate challenge of threshold determination in the Duffing system, showcasing the potential of the proposed algorithm in addressing this pertinent issue.
Appendix A
Figures A1-A3 show how the IPE varies across a broader range of the driving force for signal detection systems operating at frequencies of 10 Hz, 20 Hz, and 100 Hz, respectively. The step length is set to 0.001. For all three cases, our method consistently yields higher IPE values when the driving force is less than 0.82, indicating a heightened complexity in the dynamics within this range. As the value of r increases, a notable abrupt decline in IPE occurs, signifying a transition of the system from a critical chaotic state to a periodic state. Subsequently, the IPE stabilizes at a lower constant level. According to our threshold determination strategy, the recommended thresholds for the signal detection systems operating at frequencies of 10 Hz, 20 Hz, and 100 Hz are determined to be 0.8257, 0.8254, and 0.8248, respectively. Notably, these values align seamlessly with the results expounded upon in Section 4.2 of this paper.
Figure A4 compares the results of threshold evaluations from various methods applied to a signal detection system operating at 10 Hz. Once again, these findings corroborate those detailed in Section 4.3 of this paper. Notably, both MSE and our algorithm converge on a threshold value of 0.8257, while the 0-1 test yields a slightly different assessment at 0.8253. The application of the Lyapunov exponent method proves challenging for threshold determination, possibly attributed to the sensitivity of the results to parameter selection.
[Figure and table captions recovered from this section; the figures themselves are not reproduced.]
Figure 3. The system is in a periodic state when the variance of the noise signal is 0.04 and the power amplitude r = 0.827.
Figure 4. Intersection diagram of the Duffing system and the Poincaré section in different states. The * represent the intersection points of the Duffing system and the Poincaré cross-section; the colored lines represent the phase trajectory of the Duffing system. (a) Three-dimensional diagram of the intersection of chaotic states and Poincaré sections of the Duffing system; (b) three-dimensional diagram of the intersection of periodic states and Poincaré sections of the Duffing system.
Figure 5. The flowchart of the threshold determination method based on PMIPE.
Figure 6. The influence of embedding dimension on the IPE algorithm.

(2) Influence of Data Length on IPE. We set the Duffing system as in Step (1) in this Section and calculated the entropy values of the periodic and chaotic states for 100 to 10,000 Poincaré section points. Figure 7a depicts the influence of diverse data lengths on the IPE algorithm, while Figure 7b offers a local zoom-in. Notably, for data lengths above 500, the entropy values for periodic and chaotic states attain a stable tendency. Such tendencies facilitate differentiating system states based on entropy values.

Figure 7. Influence of data length on the IPE algorithm. (a) Full view; (b) partial magnification view.
Figure 8. The influence of time delay on the IPE algorithm.
Figure 9. The entropy change of the 10 Hz signal detection system.
Figure 10. When f = 10 Hz and r = 0.8257, the system is in a chaotic state.
Figure 11. When f = 10 Hz and r = 0.8258, the system is in a periodic state.
Figure 12. The entropy change of the 20 Hz signal detection system.
Figure 13. When f = 20 Hz and r = 0.8254, the system is in a chaotic state.
Figure 14. When f = 20 Hz and r = 0.8255, the system is in a periodic state.
Figure 15. The entropy change of the 100 Hz signal detection system.
Figure 16. When f = 100 Hz and r = 0.8248, the system is in a chaotic state.
Figure 17. When f = 100 Hz and r = 0.8249, the system is in a periodic state.
Figure 18. Results of threshold calculation by different methods.
Figure 22. The entropy change of the real signal detection system.
Figure 23. Detecting real underwater acoustic signals. (a) The Duffing system did not add a real underwater acoustic signal; (b) the Duffing system adds the ship signals; (c) the Duffing system adds ambient noise.
Figure 24. IPE results of chaotic state and periodic state under different SNRs.
Figure A1. The entropy change of the 10 Hz signal detection system. The driving force ranges from 0.7 to 0.9, with a step length of 0.001.
Figure A2. The entropy change of the 20 Hz signal detection system. The driving force ranges from 0.7 to 0.9, with a step length of 0.001.
Figure A3. The entropy change of the 100 Hz signal detection system. The driving force ranges from 0.7 to 0.9, with a step length of 0.001.
Table 2. Comparison of true thresholds with the results obtained by our method.
Table 3. Comparison of traditional threshold determination methods with our method.
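Only captions survive from this section, but the quantity they track, an ordinal-pattern entropy of Poincaré-section points, can be sketched. The snippet below follows the standard Bandt-Pompe permutation entropy; the exact modification that makes the paper's "improved" IPE variant is not specified in this excerpt, so treat this as a baseline under that assumption. The parameters m (embedding dimension) and tau (time delay) are the ones whose influence Figures 6-8 examine.

```python
# Minimal sketch: normalized permutation entropy of a 1-D series.
# Periodic signals visit few ordinal patterns (low entropy); chaotic
# signals visit many (high entropy), which is what the threshold
# determination in the captions above exploits.
from collections import Counter
import math

def permutation_entropy(x, m=4, tau=1, normalize=True):
    n = len(x) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for chosen m and tau")
    # ordinal pattern: ranks of the m delayed samples starting at i
    counts = Counter(
        tuple(sorted(range(m), key=lambda j: x[i + j * tau]))
        for i in range(n)
    )
    probs = [c / n for c in counts.values()]
    h = -sum(p * math.log(p) for p in probs)
    return h / math.log(math.factorial(m)) if normalize else h
```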
2023-12-16T16:32:55.933Z
2023-12-01T00:00:00.000
{ "year": 2023, "sha1": "ef4b4e5e79c6a2e75f11398fdca1637c4b0bf46e", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1099-4300/25/12/1654/pdf?version=1702517180", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9d43183f72a34123b2a9f7be99219807c6fa5efb", "s2fieldsofstudy": [ "Physics", "Engineering" ], "extfieldsofstudy": [ "Medicine", "Computer Science" ] }
8542695
pes2o/s2orc
v3-fos-license
Estimating the extent of the health hazard posed by high-production volume chemicals.
We used structure-activity relationship modeling to estimate the number of toxic chemicals among the high-production volume (HPV) group. We selected 200 chemicals from among the HPV chemical list and predicted the potential of each for its ability to induce a variety of adverse effects including genotoxicity, carcinogenicity, developmental, and systemic toxicity. We found a significantly less than expected proportion of toxic chemicals among the HPV sample when compared to a reference set of 10,000 chemicals representative of the universe of chemicals. The majority of these chemicals are not part of the learning sets used to derive the SAR models, thereby eliminating the possibility of tautological artifacts. Although SAR projections may not have perfect predictivity, the current study seeks to assess the prevalence of toxicants among HPV chemicals. Such estimates based on SAR techniques can be derived for populations of molecules provided the SAR model has been validated and its predictivity is known (10)(11)(12). HPV chemical selection. A sample of 200 chemicals was selected from among the HPV chemicals (7). The chemicals chosen were randomly selected and a) were pure and unique substances; b) were organic; c) were nonpolymeric; and d) did not contain metals. Reference chemicals. A reference set of 10,000 chemicals representing the universe of chemicals was used as a control set. The composition of this set is consistent with estimates produced by the National Academy of Science (13). This set was derived through sampling chemical structure libraries and the National Institutes of Health Developmental Therapeutics Program. This reference set was used to assess whether the HPV chemicals represent a greater or lesser toxicologic risk than other chemicals. For this evaluation we compared the percentage of chemicals predicted to be toxic in the HPV sample to the percentage of chemicals predicted to be toxic in the reference chemical set. SAR predictions. We used the CASE/MULTICASE program (MULTICASE Inc., Beachwood, OH) (14)(15)(16) to predict the toxicity of the sampled HPV chemicals and the 10,000 chemicals in the reference set. All chemicals from both groups were predicted for their ability to induce a number of different toxic end points (Table 1). Each toxic end point was predicted separately. The predictions are based on the occurrence of molecular features previously identified as significantly related to toxicity for each end point. The CASE modeling process begins with the compilation of a set of chemical structures (typically in Smiles code) and an experimentally derived biological activity value. These data are placed into a learning set for the program. Each chemical in the learning set is broken down, in silico, to all possible fragments from 2 to 10 heavy (i.e., nonhydrogen) atoms. Each fragment is labeled with the name and activity of its parent chemical. Upon completion of this process, the program organizes the list of fragments and tabulates the number of chemicals containing each of them. The program then identifies those fragments found predominantly in active chemicals and refers to these fragments as biophores. The selection of biophores is based on the binary experimental results of each chemical.
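To make the fragment-tabulation step concrete, here is a minimal sketch of biophore selection. It is an illustration only: `enumerate_fragments` is a hypothetical stand-in for CASE's 2-10 heavy-atom fragmentation (which in practice requires a cheminformatics toolkit), and the simple binomial enrichment screen is our assumption, not MULTICASE's actual statistics.

```python
# Minimal sketch, assuming a fragment enumerator is available:
# tabulate fragments over the learning set and keep those found
# predominantly in active (toxic) parent chemicals.
from collections import defaultdict
from scipy.stats import binomtest

def select_biophores(chemicals, enumerate_fragments, alpha=0.05):
    # chemicals: list of (structure, is_active) pairs with binary activity
    frag_counts = defaultdict(lambda: [0, 0])   # fragment -> [n_active, n_total]
    for structure, is_active in chemicals:
        for frag in set(enumerate_fragments(structure)):
            frag_counts[frag][1] += 1
            if is_active:
                frag_counts[frag][0] += 1
    base_rate = sum(1 for _, a in chemicals if a) / len(chemicals)
    biophores = []
    for frag, (n_active, n_total) in frag_counts.items():
        # keep fragments significantly enriched among active parents
        test = binomtest(n_active, n_total, base_rate, alternative="greater")
        if test.pvalue < alpha:
            biophores.append(frag)
    return biophores
```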
For example, biophores for a cancer causation model are identified that are predominantly found in chemicals that tested positive for carcinogenicity compared to those that were noncarcinogenic. The particular potency value associated with each biophore is then determined from the experimental potencies for the chemicals making up the biophore. The total list of biophores is then used to derive a global quantitative SAR (QSAR) equation. These biophores serve as the basis for both predictive and mechanistic analysis of toxicity. The MULTICASE module then selects from the list of biophores the most important one based on its occurrence in the largest number of chemicals in the learning set. At this point in the MULTICASE routine, a congeneric series of chemicals has been identified, with the biophore being the unifying feature. MULTICASE then performs a series of defined chemical substitutions of the atoms in the first biophore (e.g., one halogen for another halogen or a nitrogen for a carbon in aromatic systems) and then searches for these expanded definitions of the biophore in the library of previously identified significant fragments. All chemicals containing the biophore and the expanded definitions are grouped together. Thus a biophore may consist of a single feature or a family of chemically similar features. Using the molecules contained in this family of chemicals as a new learning set, MULTICASE identifies modulators of their activity. These modulators may be chemical, physicochemical, or quantum mechanical parameters. Modulators augment or decrease the activity of the chemicals containing the biophore. Some values and coefficients are localized to particular atoms of a chemical (e.g., a charge or highest occupied molecular orbit coefficient on an individual atom derived by a modified Hückel method). The biophore and identified modulators are then used to derive local QSAR equations for chemicals within this subset. If the entire learning set is congeneric, then the single biophore and associated modulators may explain the activity of the entire set; this usually does not occur and there will be a group of molecules not explained by the single biophore and associated modulators. When this happens, the program will remove from consideration the molecules already explained and will search for the next biophore. The process is iterated until all of the active molecules in the learning set have been explained or until no significant fragments can be found to explain them. The resulting list of biophores can then be used in mechanistic studies or to predict the activity of yet untested molecules (10). For example, upon submission for evaluation, MULTICASE will determine if an unknown molecule contains a biophore. If the molecule does not contain a biophore, it will be predicted, by default, to be inactive. When the molecule contains a biophore, the program will make a qualitative prediction that the chemical is biologically active with an associated probability that this prediction is correct. Moreover, MULTICASE will inspect the molecule for the presence of modulators associated with this biophore. The program then incorporates the parameters for the identified modulators into the QSAR equation and produces a quantitative prediction for the potency of the chemical. In essence, although biophores are the determining structures, the modulators will determine whether and to what extent the biological potential of the chemical containing the biophore is expressed. 
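The prediction logic just described, default-inactive when no biophore is present and otherwise a local QSAR whose modulator terms adjust the biophore's contribution, can be sketched as follows. The linear form and the field names are illustrative assumptions, not MULTICASE's actual equations.

```python
# Minimal sketch of biophore-based prediction: no biophore -> inactive
# by default; otherwise the matching local QSAR combines a baseline
# contribution with modulator terms (physicochemical or quantum
# descriptors scaled by fitted coefficients).
from dataclasses import dataclass, field

@dataclass
class LocalQSAR:
    biophore: str
    baseline: float                                  # contribution of the biophore itself
    modulators: dict = field(default_factory=dict)   # descriptor name -> coefficient

def predict(molecule_fragments, molecule_params, models):
    """molecule_fragments: set of fragments found in the query molecule;
    molecule_params: dict of descriptor values for the molecule;
    models: list of LocalQSAR, one per biophore family."""
    for m in models:
        if m.biophore in molecule_fragments:
            potency = m.baseline + sum(
                coef * molecule_params.get(name, 0.0)
                for name, coef in m.modulators.items()
            )
            return {"active": True, "predicted_potency": potency}
    return {"active": False, "predicted_potency": None}  # default: inactive
```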
Application of the CASE and MULTI-CASE programs results in four submodels (17,18). These are two models to estimate potency and two to estimate probability of activity. Because each of them may reflect different facets of the toxicologic phenomena under study, they are combined to give an overall Bayesian probability of the toxicity for each chemical tested. A chemical is considered active if its Bayesian probability is > 0.6 and negative if it is < 0.4. SAR models. A number of validated and characterized SAR models (11) of toxicologic phenomena were used in the course of these studies. These included the induction of mutations in Salmonella. That database was developed under the aegis of the U.S. National Toxicology Program (NTP) (19)(20)(21)(22)(23). SAR models based on subsets of that database have been described (24)(25)(26). SAR models of the ability to induce error-prone DNA repair in Escherichia coli (SOS Chromotest; EBPI, Brampton, Ontario, Canada) (27,28), mutations in cultured mouse lymphoma cells (29), sister chromatid exchanges (SCEs), chromosomal aberrations in cultured Chinese hamster ovary (CHO) cells (30), and unscheduled DNA synthesis in primary rat hepatocytes (31) have been described previously, as have models of the potentials for inducing SCEs (32) and micronuclei in vivo (33). We used two rodent carcinogenicity databases: the Carcinogenic Potency Database (CPDB) assembled by Gold and associates (34)(35)(36)(37)(38); and the rodent carcinogenicity database generated under the auspices of the NTP (39,40). SAR models of these databases have also been described (41)(42)(43). We combined the individual projections derived from these different databases using Bayes' theorem, described previously (17,18) to yield a single prediction of carcinogenicity. The SAR model of cellular toxicity was based on assays using cultured BALB/c-3T3 cells (44). A chemical was considered cytotoxic if its IC 50 (concentration that inhibits 50% growth) value was ≤ 1 µM. The SAR model of lethality to minnows was derived from previously published data (45). The SAR model for lethality to rats (LD 50 ; 50% lethal dose) was based on data on 1,411 orally administered chemicals extracted from the Registry of Toxic Effects of Chemical Substances (46). In that SAR model, toxicity was defined as LD 50 ≤ 7.2 mmol/kg. The SAR model of α 2 u-globulin nephropathy in male rats (54) was based on data kindly supplied by L.D. Lehman-McKeenan from the Procter and Gamble Company (Cincinnati, OH). The predictive ability of each model was estimated by its ability to correctly predict the activity of chemicals not used to build the model but for which we knew the true experimental results. These values are listed in Table 1 as concordance (i.e., percent correct predictions over total predictions). These values were calculated based on pooling multiple 10-fold cross-validation results. Each learning set was divided 10 times into learning and validation sets. Each learning set was used to derive a model, and this model was then used to predict the activity of the chemicals left out in the validation set. Because the activity of the chemicals in the validation set was known, we could determine the number of correct predictions and estimate the concordance for each model. Results and Discussion The HPV chemicals can be considered to present an elevated toxicologic risk to humans and to the environment based solely on their large production volume and the consequent potential for exposure (55). 
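As a reference point for the group-level comparisons reported below, the model-combination and validation machinery described above can be sketched: submodel probabilities merged into one overall probability, a call using the 0.6/0.4 thresholds, and concordance scored on held-out predictions. The independence-style (naive Bayes) combination shown here is an assumption for illustration; the exact Bayesian update used by CASE/MULTICASE is given in the cited references (17,18).

```python
# Minimal sketch: combine submodel probabilities, apply the paper's
# decision thresholds, and score concordance (percent correct calls).
def combine_bayes(probs, prior=0.5):
    # odds-form naive Bayes over the four submodel probabilities
    odds = prior / (1 - prior)
    for p in probs:
        p = min(max(p, 1e-6), 1 - 1e-6)
        odds *= (p / (1 - p)) / (prior / (1 - prior))
    return odds / (1 + odds)

def classify(p):
    if p > 0.6:
        return "active"
    if p < 0.4:
        return "inactive"
    return "indeterminate"   # 0.4 <= p <= 0.6: no call made

def concordance(y_true, y_pred):
    # here scored over called predictions only -- an assumption, since the
    # paper defines concordance as percent correct over total predictions
    called = [(t, p) for t, p in zip(y_true, y_pred) if p != "indeterminate"]
    correct = sum((p == "active") == bool(t) for t, p in called)
    return 100.0 * correct / len(called)
```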
However, it would be of interest to know whether the HPV chemicals, as a group, are more or less toxic than "average" chemicals. To assess this, we compared the proportion of chemicals in the HPV sample predicted to be toxic to the proportion of chemicals predicted to be toxic in the reference set representing the universe of chemicals. These comparisons were done one toxic end point at a time. Unexpectedly, for all toxic effects assessed except one (the in vitro induction of SCEs), the proportion of chemicals predicted to be toxic among the HPV sample was significantly less than the proportion of chemicals predicted to be toxic in the reference set (Table 1). The question obviously arises as to the reason for this decrease in the number of potentially toxic HPV chemicals when compared to what would be expected from a random sample of chemicals. This is particularly relevant given that the underlying reason for the HPV Challenge program is that little is known about the toxicities of the HPV chemicals (55). From this reasoning, it can be assumed that hazardous chemicals were not excluded from production based on the results of toxicologic prescreens. A more detailed analysis of the mutagenic/genotoxic potentials indicates that with respect to the possibility for inducing mutations in Salmonella, the proportion of HPV chemicals predicted to be mutagens was significantly less than that for the reference set (19.5% vs. 31.5%, p = 0.0001; Table 1). Interestingly, it has recently been reported that of 46 HPV chemicals tested for Salmonella mutagenicity, 20% were mutagens (56). Moreover, this same report showed an increase in the proportion of mutagens when comparing HPV chemicals to all chemicals in commerce. This is in concordance with our predictions. Predictions based on other assays designed to assess mutagenic and genotoxic activity in prokaryotes or cultured cells (Table 1) showed the same pattern (i.e., the proportion of HPV chemicals predicted to induce these effects was lower than that for the chemicals in the reference set). The only exception to this is the proportion of chemicals predicted to induce SCEs in cultured CHO cells. However, the ability to induce SCEs in vitro is not restricted to genotoxicants and may, in fact, reflect cell toxicity (57). The Salmonella mutagenicity assay is usually the first screen used, but the results are frequently confirmed by an in vivo test for genotoxicity. The assay frequently used to confirm that in vitro assay is the mouse micronucleus assay (58). The proportion of chemicals predicted as in vivo micronucleus inducers among the HPV sample is also significantly less than that for the reference set (Table 1). The same is true for the other in vivo assay, the induction of SCEs in mice (Table 1). It should be noted that although the micronucleus assay is confirmatory when the Salmonella assay indicates the potential for mutagenicity, the micronucleus assay response can also be elicited by nongenotoxicants such as inhibitors of tubulin polymerization and of microtubular integrity, as well as by aneugens (59,60). This may explain the greater projected proportion of micronuclei inducers when compared to Salmonella mutagens.
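As a point of reference for the significance values quoted above, a comparison like the Salmonella one (19.5% of the 200 HPV chemicals vs. 31.5% of the 10,000 reference chemicals) can be checked with a standard two-proportion z-test. The paper does not state which test it used, so this is one conventional choice, not necessarily the authors' procedure.

```python
# Minimal sketch: two-proportion z-test for the Table 1 comparison.
from statsmodels.stats.proportion import proportions_ztest

counts = [round(0.195 * 200), round(0.315 * 10_000)]  # predicted mutagens per group
nobs = [200, 10_000]
stat, pvalue = proportions_ztest(counts, nobs)
# yields a p-value on the order of 1e-4, consistent in magnitude with
# the reported p = 0.0001
print(f"z = {stat:.2f}, p = {pvalue:.5f}")
```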
Based on predicted positive responses in both the Salmonella mutagenicity and the micronucleus assays, which define in vivo genotoxicants, we estimate that 8% of the chemicals in the HPV sample possess that potential, in contrast to 23% of chemicals in the reference set (Table 1), thus further suggesting that the HPV chemicals, as a group, represent less of a genotoxic risk than chemicals at large. The major function of many mutagenicity and genotoxicity assays is to help identify carcinogens that may pose a risk to humans (58). Based on predictions made by several SAR models derived from rodent carcinogenicity data, the HPV sample is estimated to be significantly less likely to induce cancers than the reference chemicals (16.5% vs. 33.5%, p < 0.00001; Table 1). However, these proportions are based on rodent cancer bioassays in which animals are exposed up to the maximum tolerated dose for their lifetime. It is doubtful that this is an apt model for human exposure. On the other hand, the majority of recognized human carcinogens are also mutagenic and/or genotoxic (61)(62)(63). To evaluate the prevalence of genotoxic carcinogens (39), we predicted the proportion of chemicals that would induce cancers in rodents and mutagenicity in Salmonella (i.e., genotoxic carcinogens). Again, there was a significant decrease in the proportion of chemicals predicted for these end points between the HPV sample and the reference set (4.5% vs. 16%, p < 0.00001; Table 1). The HPV sample is predicted to have a lower proportion of chemicals that are developmental toxicants for hamsters or humans (Table 1). That sample was also predicted to have a lower proportion of inducers of allergic contact dermatitis, sensory irritation, and eye irritation (Table 1). Finally, the HPV sample was predicted to have a much lower proportion of systemic toxicants than the reference set (Table 1). With respect to environmental effects, the HPV sample was predicted to contain significantly fewer aquatic toxicants than the reference set (Table 1). However, the estimated environmental biodegradability of the two groups was not significantly different. Conclusion In this study we predicted the occurrence of chemicals capable of inducing 10 separate toxicologic end points in a sample of HPV chemicals and compared these values to those from a reference set of 10,000 chemicals. Regardless of the nature of the toxicologic phenomenon, the subset of HPV chemicals was estimated to contain a significantly lower proportion of toxicants than the reference set. Although it can be expected that the potential for human contact with the HPV chemicals is great, the potential for individual members of the group to induce health effects is less than expected. The reason for this lower proportion of toxicants in the HPV sample and presumably in the entire HPV list is unknown. However, it may reflect chemical properties of this group that allow them to be used as chemical stocks. These would include greater stability and lower reactivity, two useful properties for storage and transport of chemicals. Thus the HPV chemicals, for utility and handling ease, are not typically reactive. Presumably these chemicals, during the processing to final products, are transformed into multiple reactive intermediates in less than HPV quantities. During the course of this study, Zeiger and Margolin (56) estimated the proportion of mutagens in a subset of HPV chemicals using preexisting data. 
Their results matched ours, leading us to conclude that our sample of HPV chemicals is representative of the group and that our predictions are in accord with experimental results.
2014-10-01T00:00:00.000Z
2001-09-01T00:00:00.000
{ "year": 2001, "sha1": "6531eb352798962da084b5b96708b7fe2f34aa88", "oa_license": "pd", "oa_url": "https://ehp.niehs.nih.gov/doi/pdf/10.1289/ehp.01109953", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "6531eb352798962da084b5b96708b7fe2f34aa88", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
237938940
pes2o/s2orc
v3-fos-license
Sleep as a Priority: 24-Hour Movement Guidelines and Mental Health of Chinese College Students during the COVID-19 Pandemic Research on the combined role of 24-hour movement behaviors (sleep, sedentary behavior [SB], and physical activity) in adult mental health, though important, is in its infancy. In the context of Canadian 24-hour movement guidelines integrating quantitative recommendations for sleep, SB, and moderate-to-vigorous physical activity (MVPA), this study aimed to examine the associations between meeting guidelines and mental health among college students. The study used a cross-sectional sample of 1846 Chinese college students surveyed online in August 2020. Through network analysis and multivariate analysis of covariance, the individual and combined associations between meeting 24-hour movement guidelines and the levels of depression and anxiety after adjusting sociodemographic factors were analyzed. Results indicated that meeting the sleep guideline had stronger associations with depression and anxiety than meeting the SB or MVPA guideline. Specifically, compared to meeting no guidelines, meeting the sleep guideline (alone or in combination with other guidelines) was associated with significantly lower levels of depression and anxiety; meeting both SB and MVPA guidelines was also associated with a significantly lower level of depression. Hence, meeting more guidelines, especially adhering to a healthy sleep routine, may play an important role in promoting the mental health of young adults. Introduction Since the outbreak of the novel coronavirus (COVID- 19), the pandemic has caused extraordinary life changes and stress. Concomitant to the unprecedented changes (e.g., quarantine measures and fear of being infected), aggravated public mental health problems have been reported in all age groups [1,2], especially college students [3,4]. Studies have reported that after the COVID-19 outbreak, the prevalence of depression and anxiety symptoms among Chinese college students was up to 56.8% and 41.1%, respectively [5,6]. Such a phenomenon may be related to home confinement widely adopted during the pandemic [7]. Considering that COVID-19 remains an unparalleled and ongoing crisis, it is essential to clarify the critical modifiable factors of mental health problems among college students to implement interventions appropriate to relieve and manage their mental burden. Sleep, sedentary behavior (SB), and physical activity (PA) have been identified as independent factors of health in college students. Daily hours of individuals are distributed among sleep, SB, and PA (also referred to as 24-hour movement behaviors). Hence, their individual or combined effects on health outcomes have received increasing attention from researchers. A large body of literature has illustrated the individual role of each 24-hour movement behavior in mental health among college students, even during the COVID-19 pandemic [3,8]. For example, college students with short sleep durations reported more depression symptoms than their counterparts [8]. A large-scale longitudinal study reported that less PA was a significant contributor to develop negative psychological symptoms among college students [3]. Relative to PA, SB was less studied but also considered to be an important factor affecting mental health during the pandemic [9]. 
Of note, in the field of time-use epidemiology, researchers have suggested that 24-hour movement behaviors are codependent given that the increase of one behavior would lead to the decrease of another behavior [10]. More importantly, time allocation may have important impacts on a series of health outcomes [10]. Hence, researchers have begun to adopt an integrated perspective to study the health implications of 24-hour movement behaviors across populations of all ages (e.g., children and adolescents) in recent years. Guided by this integrated movement paradigm and evidence from this field, Canada has first developed and released 24-hour movement guidelines for specific age groups, including quantitative recommendations for sleep, SB, and moderate-to-vigorous physical activity (MVPA) [11][12][13]. Canadian 24-hour movement guidelines acknowledge that focusing on a single behavior has limitations and suggest that a combination of sleep, SB, and PA matters for healthy development. Since the release of the guidelines, researchers have carried out studies to explore the association between guidelines adherence and some health-related outcomes (e.g., adiposity, fitness, quality of life) [14,15]. Because this is a relatively new topic, there are some research gaps to be addressed. First of all, compared to the abundant research on the association between meeting guidelines and better physical health [16], little is known about whether meeting guidelines is also related to better mental health. In addition, owing to the relatively late publication of the adult version of Canadian 24-hour movement guidelines, existing literature on this topic is mainly focused on children and adolescents [15] and findings from adults are relatively limited. Furthermore, less is clear about the relative importance of each recommendation in the guidelines, which would be particularly critical to design effective interventions. A recent study of Slovenian adults showed that associations between 24-hour movement guidelines and stress were mainly explained by meeting the sleep guideline, rather than the MVPA or SB guideline [17]. Nonetheless, more research is needed to establish the specific association between meeting guidelines and mental health among adults. Additionally, the guidelines are based on the research results before the COVID-19 pandemic [12], so it has yet to verify the desirable role of meeting guidelines in health during the pandemic. Moreover, the relationships between specific movement behavior and health outcomes may be affected by other movement behaviors [18,19] and sociodemographic factors [15,20]. To take these confounding variables into account synchronously, network analysis may be an applicable approach to reveal the relations between movement behaviors and health outcomes in an intuitive way. Network analysis aims to establish connections through multiple interactions between variables from graphical representations [21]. From a network perspective, we can better understand connections between variables in a complex system. Indeed, to facilitate a better understanding of the collective impact of 24-hour movement behaviors on health, researchers have begun to introduce network analysis into this field [22]. This study, therefore, aimed to investigate the associations between 24-hour movement behaviors (in isolation or combination) and mental health problems in Chinese college students during the COVID-19 pandemic. 
Participants and Procedure
The current cross-sectional study was conducted with an online survey in August (21st-31st) 2020. Due to the pandemic, Chinese college students spent the spring semester of 2020 in the form of online classes and confronted the uncertainty of academic and career development until September 2020, when they could return to educational settings for the fall semester. Therefore, at the time of our survey, most Chinese college students had stayed at home for more than six months. We adopted a convenience sampling method to recruit Chinese college students as participants via social media platforms (e.g., Wechat, QQ). Participants were asked to complete a questionnaire via "Wenjuanxing" (https://www.wjx.cn/, accessed on 3 September 2021), a Chinese online survey platform. Participants provided online consent before filling out the questionnaire. Those who had completed the questionnaire (approximately 15 min to finish) were given ten RMB (Chinese currency) via online payment as a gratuity for the time taken to respond. In total, 1942 students, from 30 provinces and autonomous regions (mainly from the Guangdong province), participated in the survey, and 1846 (response rate = 95.1%) provided valid answers and composed the final analytical sample.
Sociodemographic Factors
Participants reported sociodemographic information, including age (years), gender (male/female), body mass index (BMI), family structure (full/divorced/other), parental educational level (middle school or below/high school/college or university/master or above), number of siblings (none/one or more), number of friends (none/one to two/three to five/six or more), residence (urban/rural), and perceived family affluence. Perceived family affluence was assessed by the MacArthur Scale [23,24]; the total score of the scale ranges from 1 (bottom rung) to 10 (top rung), with higher scores indicating higher perceived family affluence.
24-Hour Movement Behaviors
PA and SB were assessed via the Chinese version of the International Physical Activity Questionnaire-Short Form (IPAQ-SF) [25]. The IPAQ-SF asks participants to recall aspects of their PA over the past seven days, including the time spent on sitting (SB), walking, moderate PA, and vigorous PA. In the current study, MVPA time was represented by the total weekly accumulation of minutes spent on vigorous and moderate PA. Sleep duration was measured by the question from the Chinese version of the Pittsburgh Sleep Quality Index (PSQI): "During the past month, how many hours of actual sleep did you get at night?" [26]. Chinese versions of IPAQ-SF and PSQI have been widely used among the Chinese population and have shown good psychometric properties [27][28][29][30]. According to the Canadian 24-Hour Movement Guidelines for Adults aged 18-64 years, college students should achieve sufficient sleep (7 to 9 h per night), minimize SB (accumulated 8 h or less per day), and be physically active (at least 150 min of moderate to vigorous PA [MVPA] per week) in a healthy 24 h, so participants who reached the criteria above were considered to meet the 24-hour movement guidelines.
Mental Health Problems
The level of depression was measured by the Chinese version of the 9-item Patient Health Questionnaire (PHQ-9) [31]. Each item was reported with a 4-point Likert scale, and a higher total score indicated a more severe level of depression symptoms. Total scores of 5, 10, 15, and 20 were the cut-off scores of mild, moderate, moderately severe, and severe levels of depression.
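Pulling these measurement rules together, the sketch below shows how guideline-adherence flags, the eight mutually exclusive adherence groups used in the analyses that follow, and PHQ-9 severity bands can be derived; variable names are illustrative, not the study's actual code.

```python
# Minimal sketch of guideline adherence and PHQ-9 scoring as described
# in the Methods (sleep 7-9 h/night, SB <= 8 h/day, MVPA >= 150 min/week).
def meets_guidelines(sleep_h_per_night, sb_h_per_day, mvpa_min_per_week):
    return {
        "Sleep": 7.0 <= sleep_h_per_night <= 9.0,   # 7-9 h per night
        "SB": sb_h_per_day <= 8.0,                  # <= 8 h sedentary per day
        "MVPA": mvpa_min_per_week >= 150.0,         # >= 150 min MVPA per week
    }

def adherence_group(flags):
    met = [k for k in ("Sleep", "SB", "MVPA") if flags[k]]
    if not met:
        return "None"
    if len(met) == 3:
        return "All three"
    return " + ".join(met) if len(met) == 2 else f"{met[0]} only"

def phq9_severity(total):
    # PHQ-9 cut-offs: 5 mild, 10 moderate, 15 moderately severe, 20 severe
    for cut, label in [(20, "severe"), (15, "moderately severe"),
                       (10, "moderate"), (5, "mild")]:
        if total >= cut:
            return label
    return "minimal"
```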
The level of anxiety was evaluated by the Chinese version of the Zung Self-rating Anxiety Scale (SAS) [32]. The SAS consists of 20 items, each rated by a 4-point Likert scale. After multiplying the raw score by 1.25, the integer part is retained to obtain the standard score, and a higher standard score suggests a more severe level of anxiety. Total scores of 50, 60, and 70 were the cut-off scores of mild, moderate, and severe levels of anxiety. The Chinese versions of the PHQ-9 and SAS are both reliable instruments and have been widely used among Chinese adults [30,[33][34][35]. The Cronbach alphas for the PHQ-9 and SAS in this study were 0.91 and 0.87, respectively.
Network Analysis
We specified two networks to model relations between 24-hour movement guideline adherence and mental problems separately for depression and anxiety. Each network also included sociodemographic factors as covariates. The "Fruchterman-Reingold" algorithm was applied to have data presented in the relative space, among which the variables with stronger relevance remained together, and the variables with weak relevance were mutually exclusive [36]. The pairwise Markov random field model was used to improve the accuracy of the partial correlation network estimated from L1 regularized neighborhood regression. The least absolute shrinkage and selection operator (LASSO) was used to obtain regularization and control the network sparsity [37]. The Extended Bayesian Information Criterion parameter was adjusted to 0.5 to create a network with greater parsimony and specificity [38]. The network analysis uses LASSO-regularized algorithms to obtain the precision matrix (weight matrix). To indicate the importance of each node (variable) in the network, centrality indexes (expected influence and closeness) were also calculated and provided in supplementary materials (Tables S2 and S4). We performed the above analyses using JASP software version 0.14.1 (JASP Team, Amsterdam, The Netherlands).
Multivariate Analysis of Covariance (MANCOVA)
MANCOVA was conducted to examine the associations of combinations of the 24-hour movement guidelines with depression and anxiety. According to participants' adherence to the 24-hour movement guidelines, they were classified into eight mutually exclusive groups: None, Sleep only, SB only, MVPA only, Sleep + SB, Sleep + MVPA, SB + MVPA, and All three. Pairwise post hoc comparisons (Bonferroni test) were then performed to examine the differences in depression and anxiety levels across these eight groups after adjusting for socio-demographic variables. MANCOVA was performed in SPSS for Windows, version 26.0 (IBM Corp, Armonk, NY, USA). Statistical significance was set at p < 0.05 (two-tailed) for interpreting the results.
Sample Characteristics
The final sample consisted of 1846 participants (mean age = 20.67 ± 1.61, 36.0% males). The proportion of participants meeting the sleep, SB, and MVPA guideline was 69.9%, 68.9%, and 48.5%, respectively. The proportion of combinations of these guidelines varied from 3.6% (MVPA only) to 27.0% (all three). On the PHQ-9 (M = 6.83, SD = 5.19), 63.5% of participants reported mild to severe depression symptoms. On the SAS (M = 41.79, SD = 9.82), 21.8% of participants reported mild to severe anxiety symptoms. Detailed characteristics are provided in Table 1. The weight matrices of the estimated networks are provided in the supplementary materials (Tables S1 and S3).
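For readers who want to reproduce the network step outside JASP, a minimal sketch of an L1-regularized partial-correlation network follows. JASP's EBICglasso (EBIC hyperparameter 0.5) comes from R's qgraph/bootnet ecosystem; the scikit-learn version below selects the penalty by cross-validation instead of EBIC, so it approximates rather than reproduces the paper's procedure.

```python
# Minimal sketch: sparse partial-correlation network via graphical lasso.
import numpy as np
from sklearn.covariance import GraphicalLassoCV

def partial_correlation_network(X):
    """X: (n_samples, n_variables) array holding guideline-adherence
    flags, depression/anxiety scores, and sociodemographic covariates;
    columns are assumed non-constant so standardization is well defined."""
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize variables
    model = GraphicalLassoCV().fit(X)
    prec = model.precision_                      # regularized precision matrix
    d = np.sqrt(np.diag(prec))
    pcorr = -prec / np.outer(d, d)               # precision -> partial correlations
    np.fill_diagonal(pcorr, 0.0)                 # zero out self-edges
    return pcorr                                 # sparse edge-weight matrix
```

The returned edge weights play the role of the weight matrices in Tables S1 and S3, and a node's expected influence is simply the sum of its column of the weight matrix.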
Figure 2 shows levels of depression and anxiety across combinations of 24-hour movement guidelines after adjusting for confounding variables. Compared with "None", depression levels were significantly lower in the following combinations: "Sleep only" (p = 0.043), "Sleep + SB" (p < 0.001), "Sleep + MVPA" (p = 0.003), "SB + MVPA" (p = 0.008), and "All three" (p < 0.001). Generally, as the number of guidelines met increased, the levels of depression showed a downward trend. Compared with "None", anxiety levels were significantly lower in the following combinations: "Sleep only" (p = 0.002), "Sleep + SB" (p < 0.001), "Sleep + MVPA" (p < 0.001), and "All three" (p < 0.001). Detailed results of pairwise post-hoc comparisons are provided in supplementary materials (Tables S5 and S6). In Figure 2, letters are denoted to mark the differences between groups: groups with the same letter are not significantly different, and groups that are significantly different get different letters. The error bars represent the 95% confidence intervals. Results are adjusted for age, gender, body mass index, family structure, parents' educational level, number of siblings, number of friends, residence, and perceived family affluence. SB = meet the sedentary guideline; MVPA = meet the moderate-to-vigorous physical activity guideline.
Discussion
This study investigated the associations between adherence to 24-hour movement guidelines and levels of depression and anxiety by integrating network analysis in the context of the COVID-19 pandemic. We found that meeting the sleep guideline had stronger associations with depression and anxiety than meeting the SB or MVPA guidelines. Compared to meeting none of the guidelines, meeting the sleep guideline only, meeting sleep + SB guidelines, meeting sleep + MVPA guidelines, and meeting all three guidelines were associated with significantly lower levels of depression and anxiety; meeting SB + MVPA guidelines was also associated with a significantly lower level of depression. These findings deepen our understanding of the role of 24-hour movement behaviors in mental health. Interpretations for the main findings are as follows.
We identified that sleep guideline adherence was significantly related to lower levels of depression and anxiety. This finding corroborates those of previous literature [39,40]. For instance, a recent study on an adult sample (mean age = 48 ± 14 years) reported that participants meeting the sleep guideline were about twice as likely to have less stress than those who failed to meet it [17]. Additionally, Tang et al. found that Chinese college students who slept less than 6 h per night (i.e., not meeting the sleep guideline) reported more depression symptoms than others during the COVID-19 outbreak [8]. Longitudinal data from 29,251 healthy Korean adults also showed that the efficacious sleep duration to reduce future anxiety symptoms was 7-9 h a day [40], which is exactly the recommended sleep duration in the 24-hour movement guidelines [12]. Meeting the sleep guideline means that the sleep duration per day falls within the range considered healthy (e.g., 7-9 h per night recommended for adults aged 18-64 years), while not meeting the sleep guideline includes two possible conditions: insufficient sleep or excessive sleep. A recent study on 28,202 Chinese adults found a U-shaped dose-response relationship between night sleep duration and depression symptoms [41]. Similarly, a meta-analysis of prospective studies reported that both short and long sleep durations were significantly associated with an increased risk of depression in adults [39]. Inadequate sleep can directly lead to daytime sleepiness and unsuccessful emotion regulation strategies, resulting in more negative and less positive emotions, all of which are risks for psychiatric conditions (e.g., depression and anxiety) [42]. Furthermore, abnormal sleep duration may result from poor sleep quality, while there is robust evidence to support an association between poor sleep quality and depression [43]. All of these may partly account for the associations between the sleep guideline and mental health indicators in the current study. Moreover, the sleep guideline yielded a more robust association with depression and anxiety than the SB and MVPA guideline. This finding coincides with a recently published study, which reported that participants meeting the sleep guideline only, any combination of two guidelines, or all three guidelines reported less stress than those meeting none of the guidelines, except for those who met the MVPA or SB guideline only [17]. Our findings also support previous results based on samples of other age groups. For example, a recent longitudinal study on youth concluded that adherence to the sleep guideline, rather than the MVPA or SB guideline, was the most consistent predictor of depression symptoms [44]. A systematic review on children and adolescents also suggested that meeting the sleep guideline appeared to be correlated with more mental health benefits than meeting the PA guideline [45]. These findings highlight the relative importance of the sleep guideline in the 24-hour movement guidelines.
Meanwhile, considering that sleep occupies a large proportion of time among the three 24-hour movement behaviors, we call for giving priority to the sleep guideline when implementing the 24-hour movement guidelines on campus and encourage those who meet none of the guidelines to start by cultivating a good sleep routine, so as to obtain greater mental health benefits. A somewhat surprising finding was that meeting the SB or MVPA guideline alone was not associated with significantly lower levels of depression and anxiety, given the substantial evidence illustrating the desirable impact of limited SB and increased PA on aspects of mental well-being [46,47]. A possible explanation for our results is the use of a self-reported measure of SB and PA, which might have lacked the accuracy needed to measure the duration spent on SB and MVPA. Although meeting the SB or MVPA guidelines showed a relatively weak association with depression and anxiety in this study, meeting SB and MVPA guidelines concurrently was associated with a significantly lower level of depression. Moreover, we found that adherence to more 24-hour movement guidelines was associated with lower risks of depression in a dose-response manner: the depression level decreased as more guidelines were achieved and was the lowest when all three guidelines were met. The same or similar dose-response relationship trend was also reported in previous literature examining the implications of the 24-hour movement guidelines [14,17,48], although their outcomes and population of interest differed from the current study. These findings imply that the benefits of healthy behaviors may be cumulative. Therefore, in addition to meeting the sleep guideline, meeting the SB and MVPA guidelines should not be overlooked due to the proven multiple health benefits derived from moving more and sitting less [28,49]. Nevertheless, when the outcome was anxiety, the dose-response pattern did not emerge. Owing to the scarcity of studies investigating 24-hour movement guidelines and mental health among adults, the existing evidence is too limited to draw a definite conclusion. Further investigations are warranted for a clear explanation of these results.
Strengths and Practical Implications
The novelties of this study include the use of a network perspective to understand the associations between adherence to 24-hour movement guidelines and mental health problems among college students. Network analysis presents the innate complexity of these relationships, intuitively, in a graphical format, so we were able to discern unique relationships between 24-hour movement behaviors and mental health problems. More importantly, this study is one of the first to assess the associations in young adults, adding evidence to the implications of the Canadian 24-hour movement guideline in a Chinese adult sample. Considering that the Canadian 24-hour movement guidelines have cultural adaptations to different young populations, this study adds to the evidence on the updates of the guidelines in the future. This study also provides some practical implications for protecting mental health in the context of the current pandemic situation. Results of the present study respond to the recent call for adopting healthy movement behaviors to facilitate positive trajectories of mental health following the COVID-19 pandemic [50]. In a global survey during COVID-19-related home confinement, Trabelsi et al.
asserted that health education and support for sleep and PA need to be promoted in order to maintain health during the pandemic [7]. In particular, this study underlines the important role of proper sleep duration in coping psychologically with ongoing special events such as this COVID-19 pandemic. However, research indicated that sleep deteriorated in response to the pandemic, and the prevalence of short sleep and long sleep (not meeting the sleep guideline) was higher than that before the pandemic [51,52]. Therefore, it is essential to keep the general population well informed about the importance of sleep and healthy sleep habits through public health education. Based on the existing literature, some recommendations are feasible and effective for individuals to take action and improve sleep [53][54][55]. First, keep in mind that it is normal to perceive lifestyle disruption due to the pandemic, and believe that it is possible to find a balance again. Second, keep a regular sleep-wake schedule and reserve 7-9 h per night for sleep. Third, avoid entertainment or work activities in bed, and reduce screen use before sleep. Fourth, carry out some physical activities, preferably in the daytime. Fifth, try to get natural daylight during the day and have dim light during the evening.
Limitations and Future Directions
Nevertheless, several limitations should be considered in the interpretation of our findings. First, the cross-sectional design precludes confirmation of causality between movement behaviors and depression and anxiety. Physical inactivity, prolonged sitting time, and abnormal sleep duration can also be the consequences of depression or anxiety [56,57], or reciprocal associations exist between these movement behaviors and mental health, as some literature has proposed [58,59]. Second, the self-reported data of movement behaviors and mental health problems, despite the PHQ-9 and SAS being psychometrically valid, might have been prone to inaccuracy due to potential recall biases and social desirability. Since accurate assessment of movement behaviors is crucial for defining recommendations for health promotion at a population level, objective measures of movement behaviors, such as pedometers and accelerometers, are preferable in future studies to improve the accuracy of health-related data. Feasible, consumer-grade products (e.g., mobile applications, wearable devices) also make it possible to collect and analyze movement behaviors in a large-scale study [60,61]. Future research is needed to confirm and build upon this study with objective measures of movement behaviors. Third, our study recruited samples using a convenience sampling procedure, so the representativeness of the study sample cannot be guaranteed. Fourth, the Canadian 24-hour movement guidelines also include a quantitative recommendation on the time spent on recreational screen-based SB (≤3 h), which we did not measure in this study. Finally, although we included a series of confounding variables in our study, some other important correlates were not considered, such as dietary behaviors, which are known to be correlated with mental health problems [62]. Accordingly, future research should account for these limitations when drafting a study plan to investigate 24-hour movement guideline adherence and mental health.
Conclusions
Adherence to the 24-hour movement guidelines was related to lower levels of depression and anxiety among Chinese college students. Greater benefits could be seen when all three guidelines were met.
Notably, the benefits were mainly attributed to meeting the sleep guideline. Therefore, promoting adherence to the 24-hour movement guidelines, particularly prioritizing a healthy sleep routine, should be encouraged among college students for better mental health. Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/healthcare9091166/s1, Table S1: Weight matrix of the network for depression, Table S2: Centrality measures per variable in the network for depression, Table S3: Weight matrix of the network for anxiety, Table S4: Centrality measures per variable in the network for anxiety, Table S5: Results of pairwise post-hoc comparisons (Depression), Table S6: Results of pairwise post-hoc comparisons (Anxiety). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: The data analyzed during the current study are available from the corresponding author on reasonable request. Conflicts of Interest: The authors declare no conflict of interest.
2021-09-09T13:11:21.441Z
2021-09-01T00:00:00.000
{ "year": 2021, "sha1": "8527777f9b8d2a85ce24fb7a787b5f18ae96620a", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc8468601?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "4733839b92077f013787d5186a7496d4f3ae3cbe", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
248093776
pes2o/s2orc
v3-fos-license
Activity of Benzimidazole Derivatives and their N-Heterocyclic Carbene Silver Complexes Against Leishmania major Promastigotes and Amastigotes
Little progress has been made for many years in the discovery of new efficient antileishmanial drugs. Hence, the disease has meanwhile become a global health problem. Benzimidazole derivatives and heavy metal complexes have shown potent antiparasitic activities. The present work is intended to evaluate fourteen synthetic benzimidazolium salts and N-heterocyclic silver carbene complexes against Leishmania major. Promastigotes and amastigotes of L. major were cultured in vitro to evaluate compound-induced inhibitory effects, and isolated mouse macrophages were used for cytotoxicity evaluation. Reactive oxygen species (ROS) formation was detected for all compounds as a possible mode of action. The silver complexes 3d and 3e revealed significant activity against L. major promastigotes with IC50 values of 6.4 and 5.5 µg mL-1, and SI of 1.77 and 2.02, respectively. Both complexes showed higher ROS production in promastigotes than in macrophages. Further in vivo and enzyme inhibition studies are recommended to evaluate the potential of these compounds as new antileishmanial agents.
Introduction
Leishmaniasis is a serious parasitic protozoal insect-borne disease with severe morbidity and mortality and commonly occurs in tropical and subtropical countries [1]. The causative agents are flagellated protozoans from the genus Leishmania comprising more than twenty species [2]. Two clinical forms dominate. The first is cutaneous leishmaniasis (CL), the most common form. It has appeared as a serious international health problem based on its devastating skin effects, which affect the daily life of many patients. Furthermore, a distinct morbidity increase becomes evident because of malnutrition, infections, and chronic stress. CL has globally reached endemic proportions in about 90 countries of five continents, with more than 700,000 reported cases annually [3,4]. Skin ulcers are major disease lesions, which often lead to marked disfigurement. More than 90% of all CL cases are reported from eight countries: Saudi Arabia, Afghanistan, Pakistan, Syria, Peru, Iran, Algeria, and Sudan [5]. The absence of a vaccine aggravates the situation [6,7]. Visceral leishmaniasis (VL) is the second form of leishmaniasis, dominant in rural areas of India, Bangladesh, Brazil, Sudan, and Nepal [8]. L. major and L. tropica are the species that are responsible for most CL cases worldwide. The majority of cases based on the infection with L. major in the arid regions of Saudi Arabia and other Arabian countries can be attributed to the presence of the sand fly Phlebotomus papatasi in these regions [9][10][11]. The estimated annual incidence in Saudi Arabia is higher than 4,000, and the zoonotic form of CL has emerged due to the spread of leishmaniasis during the 20th century [10,11]. Because of the wide distribution of desert rodents (reservoir animals) and sand flies (vector of leishmaniasis), CL is endemic in many provinces of Saudi Arabia [12][13][14]. A recent evaluation of the prevalence of leishmaniasis in the Qassim province in Central Saudi Arabia showed that 50% of the cases were based on infection with L. major, 29% with L. tropica, and only 4% with the minor species L. infantum/donovani [2]. Since the 1960s, pentavalent antimonials have been the basic treatment regime for all forms of leishmaniasis.
Meglumine antimoniate (Glucantime®) and sodium stibogluconate (Pentostam®) are commonly applied drugs. These drugs are injected intravenously or via the intramuscular route. Still, many severe side effects were observed, such as accumulation of antimony in the pancreas, serum aminotransferase elevations, and electrocardiographic disorders [15]. However, the treatment with antimonials is not sufficient anymore because of the emergence of antimony resistance [16,17]. Amphotericin B (AmB), its desoxycholate (Fungizone®), and its liposomal formulation (AmBisome®) can be applied as second-line therapy for VL patients [18]. Amphotericin B application is recommended for patients who do not respond to antimonial therapy anymore. Still, there are considerable side effects such as pain in the bones, fever, and renal toxicity. Moreover, the high cost of amphotericin B therapy limited its use. Therefore, there is an urgent demand to discover and introduce new drugs for the safe and efficient treatment of leishmaniasis. Metal-based drugs are very efficient in treating many human clinical disorders, including cancer and infectious diseases [19]. Among the transition metal NHC (N-heterocyclic carbene) complexes, NHC-silver complexes have been widely studied for various medicinal applications due to their simple synthesis, high air and moisture stability, and biological properties. Their properties comprise antibacterial, anticancer, anti-inflammatory, and antiseptic activities [20]. NHC-silver complexes with significant antimicrobial and anticancer properties were disclosed, which were more effective than compounds of other transition metals while showing only low toxicity for humans. Silver-NHCs were also able to overcome drug resistance and tackle antibiotic-resistant bacteria, fungi, and parasites [21,22]. Sporadic studies of the cellular toxicity mechanisms of silver(I) compounds suggest that Ag+ ions kill organisms by various mechanisms [23][24][25]. In the present work, 14 benzimidazolium salts and their NHC-silver complexes were investigated for their in vitro antileishmanial activities against L. major amastigotes and promastigotes. They were also evaluated for their toxicity against mice macrophages in vitro as well as for their ability to induce reactive oxygen species (ROS) production in L. major promastigotes. The animal experiments followed the Committee of Bioethics guidelines and were approved by the Committee of Research Ethics, Deanship of Scientific Research, Qassim University, Saudi Arabia (20-03-02/30, September 2020). The method described by Osorio et al. [27] was used to maintain the parasite virulence by injection of 1 × 10^6 stationary-phase promastigotes into the hind footpads of BALB/c mice (passing maintenance). Eight weeks after inoculation, L. major amastigotes were collected. For the transformation of amastigotes to promastigotes, Schneider's medium supplemented with fetal bovine serum (FBS) 10% and antibiotics was used to incubate the parasites in culture flasks at 26°C. Amastigote-derived promastigotes of less than three passages were used for infection [28]. Activity of compounds against L. major promastigotes. Logarithmic-phase promastigotes were cultured in a complete RPMI 1640 medium (Invitrogene, USA) supplied with FBS at a concentration of 10%. After dispatching, parasites were placed at concentrations of 10^6 cells mL-1 into 96-well plates (the final volume was completed to 200 µL/well). The test compounds were tested at different concentrations (25, 8.3, 2.9, and 0.93 µg mL-1).
Amphotericin B (reference compound) at concentrations of 25, 8.3, 2.9, and 0.93 µg mL -1 was used as a positive control. Plates were incubated for 3 days at 26°C for the evaluation of the antiproliferative effects. Spectrophotometric techniques by applying the MTT (tetrazolium salt of (3-(4,5-dimethylthiazole-2-yl)-2,5-diphenyl tetrazolium bromide) assay were used for assessing the number of viable promastigotes. Result data was generated with an ELISA reader (spectrophotometer) at 570 nm. Three data sets were obtained from three independent experiments. The results were expressed as IC50 values (inhibitory concentration killing 50% of the parasites) [28]. Activity of compounds against L. major intramacrophage amastigotes. Drug evaluation against L. major intramacrophage amastigotes was carried out according to the method described previously by Calvo-Álvarez et al. [29]. Briefly, peritoneal macrophages were obtained as previously mentioned by Dos Santos et al. [30]. Then, 96-wells ELISA plates were used for harvesting 5 x 10 4 cells/well with phenol red-free RPMI 1640 medium supplied by 10% FBS, and kept for 4 h at 37°C and 4% CO2. Thereupon, a pipette was used for media discarding and washing with PBS 150 µL. Then, L. major promastigotes were added to each well (at a ratio of 1 macrophage in phenol red-free RPMI 1640 medium with 10% FBS per 10 promastigotes). The plates were incubated for 24 h at 37°C in humidified 5% CO2 atmosphere to increase the rate of amastigote infection and differentiation, followed by washing with PBS three times in order to remove free promastigotes. The cells were overlaid with free phenol red RPMI 1640 medium with or without test compound, which was added at concentrations of 25, 8.3, 2.9, and 0.93 µg mL -1 , and the cells were incubated at 37°C in humidified 5% CO2 atmosphere for 72 h. Amphotericin B was used as a positive control at concentrations of 25, 8.3, 2.9, and 0.93 µg mL -1 . In order to evaluate the infected macrophage percentage, microscopy was used after removing the medium and washing, fixation, and staining of the cells with Giemsa dye. The assay was performed three times. The results were expressed as IC50 values [31]. Toxicological evaluation of compounds by MTT assay. Macrophages were collected peritoneally from mice, as previously reported by Dos Santos et al. [30]. Complete phenol red-free RPMI 1640 medium supplied with 10% FBS was used for the cultures incubated at 37°C in 5% humidified CO2. The test compounds were added at different concentrations (25, 8.3, 2.9, and 0.93 µg mL -1 ) to 96-well plates containing viable macrophages at a concentration of 5×10 3 cells/well. After 72 h incubation, the cultures were washed with PBS, then 100 µL MTT was added to each well at a concentration of 1 mg mL -1 . The cells were incubated for 4 h whereupon the supernatant was removed, and 150 μL DMSO was added to each well. A spectrophotometer was used for the colorimetric evaluation of the cells at 540 nm. DMSO (1%) without compounds was applied as a negative control. The results were visualized as the concentration of the test compound that caused 50% cell growth inhibition (CC50) [31]. ROS formation assay. Reactive oxygen species (ROS) were evaluated spectrophotometrically using the nitroblue tetrazolium (NBT) dye. The slightly modified method previously described by Pramanik et al. was used for the test with the intracellular amastigotes [32]. 
After incubation of infected and non-infected macrophages with test compounds (concentrations of 8.3 and 2.9 µg mL⁻¹) for 2 days in 96-well plates, the cells were washed with PBS. Then 100 µL of NBT (0.5 µg mL⁻¹) was added to each well and incubated for 20 min in a CO2 incubator. Thereupon, 2 M KOH (120 µL) was added to each well, followed by incubation for 5 min. Then, DMSO (140 µL) was added to each well, and the plates were placed on a shaker for 10 min. Finally, the optical density was read at 620 nm using a spectrophotometer. Lipopolysaccharide (LPS) at a concentration of 1 µg/mL was used as a positive control. For the evaluation of ROS in L. major promastigotes, the procedure of Tunc et al. was applied with slight modifications [33]. Promastigotes were cultured in complete RPMI 1640 medium (Invitrogene, USA) supplemented with 10% FBS. Dispensed parasites were used at a concentration of 10⁶ cells mL⁻¹ in 96-well plates. Test compounds were added at concentrations of 8.3 and 2.9 µg mL⁻¹, while NBT was added at a concentration of 0.5 µg mL⁻¹, followed by incubation at 26°C for 24 h. 2 M KOH (120 µL) was added to each well, followed by incubation for 5 min. Then, DMSO (140 µL) was added to each well. After that, the plates were placed on a shaker for 10 min. Finally, the optical density was read at 620 nm with a spectrophotometer. Lipopolysaccharide (LPS, 1 µg/mL) was used as a positive control. Statistical analysis. The averages from three experiments are reported as mean ± SD (standard deviation). Mean differences were calculated with ANOVA, and significant differences between the groups were analyzed by LSD. Differences were considered significant at p ≤ 0.05 and p ≤ 0.001. SPSS 21.0 software was used to perform the above methods. IC50 and CC50 values were calculated by Boltzmann's dose-response analysis employing a sigmoidal curve fit, using Origin 8.1 software. The SI (selectivity index) was calculated by dividing CC50 by IC50. Antileishmanial activity. Compounds 2a-g and 3a-g were initially tested for their activity against L. major promastigotes. Four compounds (2d, 2e, 3d, and 3e) revealed activity at doses of 25 µg mL⁻¹, with EC50 values of 25 (2d), 22 (2e), 6.4 (3d) and 5.5 µg mL⁻¹ (3e), respectively (Figure 1, Table 1). The other compounds were inactive against L. major promastigotes. The methyl groups of the benzimidazolium scaffold and the diisopropylamine side chain of compounds 2d and 2e seem to confer improved activities against promastigotes compared with the benzimidazolium systems of the other derivatives of the compound 2 series. In addition, a significant increase in the intrinsic activity of 2d and 2e against promastigotes was achieved by conversion of these benzimidazolium compounds into the corresponding chloride-Ag(I)-NHC complexes 3d and 3e. Unlike the results from the experiments with L. major promastigotes, all test compounds 2a-g and 3a-g showed moderate activities against L. major amastigotes, with EC50 values in the range of 13.7 to 17.8 µg mL⁻¹ (Figure 2, Table 1). There was no significant activity difference between the benzimidazolium compounds 2 and the silver complexes 3. Indeed, the benzimidazolium derivative 2a was the most active compound against the L. major amastigotes. Except for the silver complexes 3d and 3e, all test compounds were more active against amastigotes than against promastigotes.
This is an interesting discovery since effects on amastigotes are considered to be more relevant for antileishmanial drug design than effects on promastigotes [3]. [Figure legend: * p < 0.05, ** p < 0.01; amphotericin B (AmB) was used as positive control.] Cytotoxic activity against macrophages. Next, all test compounds were investigated for their antiproliferative activity against macrophages to determine whether there is a reasonable selectivity of the test compounds for the parasite cells. All compounds possess a dose-dependent cytotoxic activity against the isolated macrophages (Table 1). [Figure: inhibition (%) of macrophages by compounds 2a-g and 3a-g at the tested concentrations.] The CC50 values of all test compounds are in the range of 7.5 to 14.9 µg mL⁻¹, which is in the activity range of the reference drug AmB (Figure 3, Table 1). Silver complex 3a was the most toxic compound (CC50 = 7.5 µg mL⁻¹), while its close silver analog 3b was the least toxic one (CC50 = 14.9 µg mL⁻¹). Both compounds differ only by one methyl substituent at the N-benzyl moiety. Thus, already slight modifications of this moiety have the potential to modulate the toxicity to macrophages distinctly. The SI values of all test compounds for the amastigotes were less than 1, indicating no selectivity here. In the case of the promastigotes, silver complexes 3d and 3e displayed a slight selectivity, with SI values of 1.8 and 2.0, respectively (Table 1). ROS production. Leishmania parasites causing cutaneous leishmaniasis can be especially sensitive to ROS, and, thus, the design of new compounds that induce ROS formation in parasite cells and infected macrophages is a promising strategy to obtain new antileishmanial drug candidates [35]. Hence, compounds 2a-g and 3a-g were evaluated for their ability to induce ROS formation in L. major promastigotes and macrophages. All test compounds showed dose-dependent ROS formation. The compounds 2d, 3d, and 3e exhibited significantly increased ROS production (p < 0.01) in promastigotes at a concentration of 8.3 µg mL⁻¹, which is in line with their inhibitory effects on promastigotes shown in Table 1 (Figure 4a). ROS formation by 3d and 3e reached or exceeded the ROS induction by the positive control LPS. Hence, 3d and 3e can be considered potent ROS inducers in L. major promastigotes. In infected macrophages, only the silver complexes 3d and 3e showed a significant ROS increase at a concentration of 8.3 µg mL⁻¹ (Figure 4b). Similar results were observed for non-infected macrophages (Figure 4c). Thus, there is no difference between infected and non-infected macrophages in terms of compound-induced ROS formation. However, ROS formation by 3d and 3e in macrophages was lower than in promastigotes and lower than ROS formation by LPS in macrophages. The differing ROS-inducing activities of 3d and 3e depending on the cell line can explain the higher inhibitory activities of both complexes against promastigotes when compared to their activities against macrophages (Table 1). A number of antileishmanial drugs have shown ROS-inducing properties in promastigotes correlated with parasite cell death, and the 8-aminoquinoline derivative sitamaquine caused oxidative stress in L. donovani promastigotes by targeting succinate dehydrogenase [36]. In terms of silver compounds, silver nanoparticles induced ROS formation in L. tropica promastigotes and amastigotes, which was increased distinctly upon irradiation with UV light [37].
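For readers who want to reproduce the IC50/CC50 estimation described under Statistical analysis without Origin, a four-parameter logistic (Boltzmann-type) dose-response fit can be written in a few lines. The sketch below is a minimal illustration only: the inhibition values and the CC50 are hypothetical, and scipy's curve_fit stands in for Origin's sigmoidal curve fit.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic (Boltzmann-type) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** (-hill))

# Hypothetical inhibition data (%) for one compound at the tested doses.
conc = np.array([0.93, 2.9, 8.3, 25.0])        # µg/mL
inhibition = np.array([8.0, 22.0, 55.0, 88.0])  # % of parasites killed

params, _ = curve_fit(four_pl, conc, inhibition, p0=[0, 100, 5, 1], maxfev=10000)
ic50 = params[2]
print(f"estimated IC50 = {ic50:.1f} µg/mL")

# The selectivity index is then simply CC50 (macrophages) divided by IC50 (parasites).
cc50 = 12.0  # hypothetical CC50 from the same fit applied to macrophage data
print(f"SI = {cc50 / ic50:.1f}")
```

With only four tested concentrations the fit is under-determined, which is why fixing the bottom and top plateaus (here via the initial guess) is a common design choice in dose-response fitting.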
Conclusions The antileishmanial activities of the compounds presented in this study deserve further research efforts for two reasons. On the one hand, the high activity of the silver carbene complexes 3d and 3e against promastigotes depends strictly on the silver atom and the benzimidazolylidene ligands. The distinct increase of ROS levels in promastigotes treated with 3d and 3e provides evidence for an efficient parasite-eliminating mechanism of these complexes. Further in vivo studies with compounds 3d and 3e as prophylaxis for L. major infection seem promising, as do studies concerning structure-activity relationships and the inhibition of parasite enzymes. On the other hand, the general activities of the test compounds 2a-g and 3a-g against L. major amastigotes, which are independent of the presence of silver and of benzimidazolium modifications, suggest a different mode of action in these intramacrophage cells. Except for 3d and 3e, all compounds of the tested series showed higher activity against amastigotes than against promastigotes. Hence, based on the lead structures of the described compounds, the design of further new drug candidates with improved and selective activity against intracellular amastigotes appears possible in the future.
2022-04-12T15:03:23.354Z
2022-03-25T00:00:00.000
{ "year": 2022, "sha1": "ee7f85bc56fc1c4e0fd3ca32cc22d4bf1a1b0578", "oa_license": null, "oa_url": "https://doi.org/10.33263/briac132.135", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "55100dfdbe78e48de1671b8a1a2bba56cd2225d9", "s2fieldsofstudy": [ "Medicine", "Chemistry" ], "extfieldsofstudy": [] }
232017968
pes2o/s2orc
v3-fos-license
Mental fatigue prediction during eye-typing Mental fatigue is a common problem associated with neurological disorders. Until now, there has not been a method to assess mental fatigue on a continuous scale. Camera-based eye-typing is commonly used for communication by people with severe neurological disorders. We designed a working memory-based eye-typing experiment with 18 healthy participants, and obtained eye-tracking and typing performance data in addition to their subjective scores on perceived effort for every sentence typed and on mental fatigue, to create a model of mental fatigue for eye-typing. The features of the model were the eye-based blink frequency, eye height and baseline-related pupil diameter. We predicted subjective ratings of mental fatigue on a six-point Likert scale, using random forest regression, with a 22% lower mean absolute error than obtained with simulations. When additionally including task difficulty (i.e. the difficulty of the sentences typed) as a feature, the variance explained by the model increased by 9%. This indicates that task difficulty plays an important role in modelling mental fatigue. The results demonstrate the feasibility of objective and non-intrusive measurement of fatigue on a continuous scale. Introduction Acute mental fatigue has emerged as a critical issue across the general working population, as work shifts from being physically to mentally challenging [1,2]. Acute mental fatigue is caused by sustained cognitive processing over a period of time [3]. We use acute mental fatigue interchangeably with mental fatigue for the rest of this paper. Fatigue, physical as well as mental, is a relevant problem especially for people with neurological disorders like Amyotrophic Lateral Sclerosis (ALS), Cerebral Palsy (CP), or Multiple Sclerosis (MS) [4][5][6], as a result of increased fatigability [7]. People with neurological disorders, having restricted use of limbs and reduced oral abilities, are increasingly using augmentative and alternative communication systems with eye-tracking to work and communicate. Fatigue, which we consider to incorporate mental fatigue, can cause reduced quality and quantity of communication [5]. In the current study, we tested the feasibility of mental fatigue prediction on a continuous scale with healthy volunteers during an eye-typing task. Several studies have explored mental fatigue detection, caused by a prolonged cognitive task, from features measured using eye-tracking, such as pupil diameter, blinks and saccades. Tonic changes in pupil diameter are linked to mental fatigue and arousal via neural activity in the locus coeruleus, and as mental fatigue increases, baseline-related pupil diameter is expected to decrease [8]. Blink features such as blink frequency, blink duration and blink interval have been shown to be sensitive to increasing time-on-task [9,10]. Bursts of blinks are another phenomenon studied, where increasing mental fatigue is accompanied by an increase in blink bursts [11]. Eye movement features derived from saccades-rapid eye movements between gaze positions-have also been associated with mental fatigue. Saccades have been found to get shorter and faster as individuals get more fatigued [12,13]. Most experimental setups investigating mental fatigue manipulate the task duration to induce fatigue and analyse variation in fatigue with time-on-task [8,[14][15][16][17][18].
Borragán has shown that while time-on-task plays a role in generating mental fatigue during continuous cognitive processing over an extended period of time, cognitive load, or the demand for allocation of mental resources to the task, is also an important factor [19]. Previous research has explored variations of the theory on mental fatigue caused by cognitive load, emphasising that mental fatigue arises from the individual perception of high task demands, rather than from high cognitive load per se [20,21]. Pattyn et al. have extended this theory and created theoretical models that assign an important role to the perception of effort and its effect on mental fatigue [22]. However, the influence of cognitive load on fatigue measurement using eye-tracking features has not been explored. Eye-tracking based psycho-physiological signals have been used to classify mental fatigue in healthy individuals [17,18]. These papers classify fatigue into two mental states-fatigued and alert. However, we hypothesise that fatigue assessment has more levels than just the binary states. The bases for this hypothesis are that (1) mental fatigue increases in an accumulative process [23] and (2) mental fatigue questionnaires that have been used reliably in the medical field use non-binary scales to determine fatigue, rather than specifying a threshold to classify the user as fatigued or alert [24][25][26]. Moreover, methods to counteract fatigue, such as taking a break [27,28], or to monitor health [18] could be improved further and personalised to the level of fatigue. Tracking mental fatigue with a higher granularity can be useful to systematically explore other ways to counter the problem of fatigue. Furthermore, with the ubiquitous and non-intrusive nature of eye-tracking, mental fatigue detection could help to improve the quality of life for people with neurological disorders as well as the general working population. In the present study, cognitive processing during an eye-typing task was used to induce mental fatigue, which was classified into six increasing levels of self-evaluated mental fatigue. Eye-typing is a known eye-based interactive task. The most common method for eye-typing is to fixate on each key on an on-screen keyboard for a certain amount of time (known as dwell-time), until the key is selected [29]. The cognitive processing in the eye-typing task of the present experiment was induced by asking the participants to memorise sentences of varying difficulty and eye-type them from memory, thus eliciting cognitive load on the participants. We identified the eye-based features most useful for the assessment of mental fatigue. Since the participants were not restricted in their movement, we decided to also study their posture and its relation to fatigue, based on known relations of increased postural variation during low-arousal periods in tasks [30], and on observations of participants lowering themselves in the chair as the experiment progressed. Finally, we also studied performance measures commonly obtained in eye-typing: typing speed, error rate, attended but not selected rate (ANSR) for keys, and read text events ratio (RTE) [31]. ANSR and RTE are associated with the error rate during typing and the accuracy of the gaze-typing system [32].
Since most of the above physiological measures are also commonly investigated when studying cognitive load [33][34][35][36][37], and mental fatigue is affected by cognitive load, we will attempt in this paper to explain the impact of the relationship between cognitive load and fatigue on the features studied. Participants Nineteen healthy volunteers (nine males, ten females; age: 25.5 ± 2.38 years), all university students, participated in the study. None of the participants had photosensitive epileptic seizures or a history of a brain disorder. The Scientific Ethics Committee for the Capital Region in Denmark approved the study protocol (approval number H-18052072). All participants provided written and informed consent to participating in the study, and they received a gift card worth 500 DKK on finishing the experiment. One participant did not complete the study and was omitted from the analysis. Experimental design Each participant performed the experiment on four different days. Each day, two sessions were performed in one sitting. Each session was composed of five typing-from-memory trials, which involved reading and memorising a sentence, and typing it from memory using eye-typing (Fig 1). The source of the sentences was the Leipzig corpus [38], and the readability score-Läsbarhetsindex (LIX) [39]-was used to define the level of difficulty. For simplicity, two levels of difficulty were established based on the LIX score: easy, with a LIX score of less than 30, and difficult, with a LIX score of more than 60. During an easy session, the typing-from-memory trials involved five easy sentences, and five difficult sentences were applied during the difficult session. The order of easy and difficult sessions was balanced for each participant. Between trials, 5 s of break time was provided to the participants, to allow the phasic arousal to return to the baseline [15]. On the first day, after the participants signed the consent form, they read the instructions on the experiment and the typing procedure. This was followed by a practice session. The experiment was performed on the on-screen keyboard Optikey [40] using the eye-tracker Tobii Eye Tracker 4C. The experiment room had lighting of 25-60 lux at the computer screen. At the end of every trial, the participants answered the effort question from the NASA Task Load Index (NASA-TLX) questionnaire on a seven-point Likert scale, by selecting a number using eye-tracking on the on-screen keyboard in response to the question on the screen, thereby reporting the perceived effort during the trial. Before starting the experiment each day and after every session, a question on the subjective level of fatigue on a seven-point Likert scale was answered orally by the participants [16]. The experiment design is shown in Fig 2. Native Danish speakers performed the test in Danish, and everyone else performed it in English. Ten participants performed it in English. Features The features computed were divided into three groups: performance-based features, eye-based features and self-reported measures. They are listed in Table 1, with descriptions of each feature. Eye-tracking data, obtained using the Tobii Pro software development kit, was filtered by removing invalid data (data points from the Tobii Eye Tracker 4C that remained constant in all the data fields) and interpolating spontaneous blinks, defined as missing data for a continuous duration in the range 0.075-0.500 s.
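This blink handling can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it assumes gaze samples arrive at a fixed rate (the Tobii 4C reports roughly 90 Hz) with NaN marking data loss, and it fills only gaps whose duration falls in the spontaneous-blink range.

```python
import numpy as np
import pandas as pd

FS = 90.0  # assumed sampling rate in Hz

def interpolate_blinks(signal, fs=FS, min_s=0.075, max_s=0.500):
    """Linearly fill blink-like gaps (0.075-0.500 s); longer losses stay missing."""
    s = pd.Series(signal, dtype=float)
    missing = s.isna()
    # Give every contiguous run of samples (missing or valid) a unique id.
    run_id = (missing != missing.shift()).cumsum()
    filled = s.interpolate(method="linear", limit_area="inside")
    for _, run in s[missing].groupby(run_id[missing]):
        if not (min_s <= len(run) / fs <= max_s):
            # Not a spontaneous blink: restore the gap (e.g. tracking loss).
            filled[run.index] = np.nan
    return filled.to_numpy()

# Example use on a hypothetical raw signal: clean_x = interpolate_blinks(raw_gaze_x)
```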
The pupil data was filtered by removing 0.200 s of pupil data before and after each blink and replacing it with a linear interpolation of the pupil diameter. This was followed by the application of a Hampel filter [41], removing outliers larger than 3 standard deviations of the data averaged over 5 samples around the current data sample. The pupil diameters from the right and left eye were combined using a weighted average, with weights computed from the inverse of the standard deviation of the 25 samples preceding the current data sample. A Hidden Markov Model was used to label saccades, fixations and noise [42,43]. Fixations of duration less than 0.100 s and saccades of amplitude less than 0.5° were labelled as noise. Furthermore, successive fixations separated by less than or equal to 0.075 s, and whose centroids were less than 0.5° apart, were merged. Blink bursts were identified as two or more blinks occurring within a span of 2 s. The feature eye height was computed from the vertical position of the eye. The difference between the eye height during the trial and at the beginning of the day was used to define the feature. The Pearson correlation between right and left pupil diameter was used to determine the quality of the data. Sessions with a correlation value lower than a threshold of 0.75 were removed from the data analysis of the features. Self-reported measures were used for subjective evaluation of the cognitive load and mental fatigue. The effort question from NASA-TLX was selected to focus on the perception of the effort applied by the participants to the tasks. [Fig 1 caption: The blue text during the reading task displays the sentence to be typed. When the typing task starts, the sentence to be memorised disappears. https://doi.org/10.1371/journal.pone.0246739.g001] A single-item measure using the word tired was used to define the fatigue level [16,44]. The experiment required cognitive processing, and did not involve any physical activity. Moreover, the participants were not restricted in their movement, and thus the fatigue level was assumed to reflect mental fatigue. Data analysis Analysis of self-reported measures. Two types of manipulation check were implemented, based on the perceived effort of each trial and the fatigue level obtained after every five trials. The perceived effort was examined for the effects of the objective task difficulty, session number, day number and language using linear mixed models (LMM). The fatigue level was examined to find out whether performing the cognitive tasks had an effect. The initial fatigue level was recorded before the experiment started, the intermediate fatigue level after session 1 (after five trials) and the terminal fatigue level after session 2 (after ten trials). Wilcoxon rank-sum tests were performed in the three following sections, analysing (1) the difference between the intermediate and initial fatigue level, (2) the difference between the terminal and intermediate fatigue level and (3) the difference between the terminal and initial fatigue level, and whether they varied from 0. Additionally, the fatigue level was analysed for the effect of the objective task difficulty, day number and time of evaluation.
To preserve the independence of the fixed variables objective task difficulty and time of evaluation, the analysis was performed in 3 separate sections, analysing (1) the difference between the intermediate and initial fatigue level for the effect of the objective task difficulty of session 1, day number and language, (2) the difference between the terminal and intermediate fatigue level for the effect of the objective task difficulty of session 2, day number and language, and (3) the difference between the terminal and initial fatigue level, to examine the effect of the order of the objective task difficulty (comparing an easy followed by a difficult session to a difficult followed by an easy session), day number and language. [Fig 2 caption: Design of the experiment during one day consisted of two sessions and a total of ten trials. Data analysis was divided into two parts-prediction of the fatigue level, and correlation analysis and linear mixed models of the data for effects of time-on-task and perceived effort. Trials 1, 5 and 10 were used as representative of the users' state before and after sessions 1 and 2, respectively, and employed for the prediction and correlation analyses. All trials were used for the linear model.] An LMM was fitted to the difference in fatigue level for each of the above cases. Machine learning analysis: Prediction of the fatigue level. Four models were tested for the prediction of the fatigue level-an AdaBoost regressor with regression trees (RT), random forest regression (RFR), partial least squares regression (PLS) and support vector regression with bagging (SVR). The machine learning methods were implemented using the Scikit-learn library (version 0.22.1) in Python (version 3.6.10). Hyperparameters for all four models were optimised using grid search and 5 repetitions of 5-fold cross-validation in the Scikit-learn library. The training and testing data were normalised to unit Euclidean length. The mean absolute error (MAE) from 5 repetitions of 5-fold cross-validation was used as the primary metric, with 80% of the data as training data, to compare the performance of the models. We compared against a random predictor based on Monte Carlo simulations of the target variable, where the simulated target variable had the same distribution as the fatigue level data collected in the study; the MAE computed using this simulated data was used to establish the baseline prediction performance for the fatigue levels. To identify the features that generated the best performance of the models, feature selection through recursive feature elimination was performed and compared to the models generated with all the features. The model with the lowest MAE was chosen as the final model. To the feature combination selected in this step, objective task difficulty was added as a feature, and the model results on MAE and explained variance were compared to those of the original model without the objective task difficulty. Feature importance was further computed to explain the importance of the various selected features. Up to this point, the models were applied in a subject-independent cross-validation setting, where the data from all subjects was pooled together to train the model, and the testing data was composed of all subjects. As a last step, the final model was applied in a cross-subject setting and MAE results from leave-one-subject-out cross-validation (LOSOXV) were discussed. These results hint at the robustness of the model and show whether the inter-subject differences in the selected features are greater than the intra-subject differences.
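A minimal sketch of this evaluation pipeline is given below. It is an illustration rather than the study's exact code: the feature matrix and labels are random placeholders, the hyperparameters are defaults rather than the grid-searched values, and recursive feature elimination is wired into a Scikit-learn pipeline for brevity.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import RFE
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import Normalizer

rng = np.random.default_rng(0)
X = rng.normal(size=(183, 8))                   # placeholder: 183 trials x 8 features
y = rng.integers(1, 7, size=183).astype(float)  # placeholder: 6-point fatigue levels

model = make_pipeline(
    Normalizer(),                               # normalise samples to unit Euclidean length
    RFE(RandomForestRegressor(random_state=0), n_features_to_select=3),
    RandomForestRegressor(random_state=0),
)
cv = RepeatedKFold(n_splits=5, n_repeats=5, random_state=0)
mae = -cross_val_score(model, X, y, cv=cv, scoring="neg_mean_absolute_error").mean()

# Monte Carlo baseline: a random predictor drawing from the empirical label distribution.
sim = rng.choice(y, size=(1000, len(y)), replace=True)
baseline_mae = np.abs(sim - y).mean()
print(f"model MAE: {mae:.3f}, baseline MAE: {baseline_mae:.3f}")
```

Putting the feature selector inside the pipeline ensures it is re-fitted within every cross-validation fold, which avoids leaking information from the test folds into the feature choice.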
Statistical analysis: Effect of time-on-task and perceived effort on the features. To better understand the working of the machine learning models, the effect of perceived effort and time-on-task on the features was analysed using linear mixed models. The entire data analysis is depicted, along with the experiment design, in Fig 2. The fixed effects used were of two types: a factor, which was language (Danish/English), and numerical variables comprising perceived effort, day number (with four increasing levels) and time-on-task (with 10 increasing levels). Perceived effort (with seven subjectively defined levels) replaced the objective task difficulty (with two objectively defined levels) as a fixed effect, as the features were expected to be more sensitive to the perceived effort. Random intercepts were used to model the random effects of the within-subject variability, and random slopes for the perceived effort were added to the model when found to be significant using the step function from the lmerTest package and when the final model converged. Significance was set at 0.05. The packages lmerTest (version 3.1.2) [45] and lme4 (version 1.1.23) [46] in R (version 4.0.2) [47] were used to implement the models, and effect sizes were computed using the package r2glmm (version 0.1.2) [48], which uses the Nakagawa and Schielzeth approach [49]. The p-values were computed using the Satterthwaite degrees of freedom. Additional post-hoc analysis was performed using the package multcomp (version 1.4.13) [50] and Bonferroni correction of the p-values. Statistical analysis: Correlation between fatigue level and the features. To assess the role of subjective reports of the fatigue level in explaining the machine learning models, Pearson correlations between the fatigue levels and the features from trial numbers 1, 5 and 10 were computed. Significance was set at 0.05. Results Each of the 18 participants performed the experiment on four days in total, with 10 trials on each day. This resulted in a total of 720 trials. Due to a deviation in settings, seven extra trials were performed; they were removed from analysis if no self-reported measure was obtained for the trial. The self-reported measure perceived effort was obtained for 704 trials, which were all used for its analysis. The correlation between the right and left pupil diameter was below 0.75 in 15 sessions, which were removed due to increased noise. Furthermore, trials where data from any feature was missing were also removed. This resulted in a final selection of 623 trials, such that each participant had at least 10 trials. The data for each trial consisted of the performance and eye-based features computed from Table 1 and the perceived effort reported by the participant. The data from trials 1, 5 and 10 on the four days for 18 participants amounted to 216 trials. Fatigue level data was obtained for 209 of these trials, which were all used for the manipulation check. Trials from noisy sessions and with missing data were removed, such that each participant had at least six trials and each fatigue level had at least five data points. One participant with two trials, and the trial with fatigue level 7, which had only one data point, were removed, resulting in 183 remaining trials. The data from these trials and the fatigue levels were used for machine learning, to predict the fatigue level on a six-point Likert scale, and for correlation analysis.
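For readers who prefer Python over R, the mixed models described above can be approximated with statsmodels; note that statsmodels does not provide Satterthwaite degrees of freedom, so p-values will differ slightly from lmerTest. The column names below are hypothetical placeholders for the study's variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per trial; hypothetical column names used purely for illustration.
df = pd.read_csv("trials.csv")  # columns: feature, effort, time_on_task, day, language, subject

model = smf.mixedlm(
    "feature ~ effort + time_on_task + day + language",  # fixed effects
    data=df,
    groups=df["subject"],   # random intercept per participant
    re_formula="~effort",   # random slope for perceived effort
)
fit = model.fit(method="lbfgs")
print(fit.summary())        # fixed-effect coefficients play the role of the betas in Table 4
```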
Analysis of self-reported measures The perceived effort was examined to determine whether it showed an effect of the objective task difficulty in the experiment. The marginal mean of the perceived effort differed with the objective task difficulty (easy: 2.95, 95% CI [2.57, 3.33]; difficult: 4.72, 95% CI [4.34, 5.10]). The perceived effort decreased each day by 0.339 (SE = 0.045) and increased during the second session by 0.415 (SE = 0.142). Using linear mixed models, we found that the objective task difficulty had an effect on the perceived effort (χ²(2) = 262.88, p < 0.001, η² = 0.012). The effect of the day number was significant (χ²(1) = 54.849, p < 0.001, η² = 0.060) and so was the session number (χ²(2) = 8.651, p < 0.05, η² = 0.010). There was an interaction between the session number and the objective task difficulty, and the perceived effort for the easy session was reduced during the second session by 0.469 (SE = 0.200), but the effect was not significant after correction for multiple comparisons. The fatigue level was investigated in 3 parts. Machine learning analysis: Prediction of the fatigue level Monte Carlo simulations resulted in a baseline MAE of 1.487. All the machine learning models were compared to this baseline error. The cross-validation results on MAE computed from the 80% training data and 20% testing data, for models generated on all features and for features selected using recursive feature elimination, are given in Tables 2 and 3. Both RFR models explained a high proportion of the variance in the data while resulting in a low MAE. Based on the consistent results, RFR with recursive feature elimination was selected as the best performing model over the RFR model using all features (20% testing data MAE = 1.157), due to the use of fewer features. This model was a 22% improvement on the baseline performance. On adding objective task difficulty as a feature to the RFR model, the 80% training data MAE decreased from 0.939 to 0.912 and the 20% testing data MAE increased from 1.157 to 1.179, but the resulting model explained 29.347% of the variance in the data, higher by 9% than the model without objective task difficulty. The variables most important for the prediction were (in descending order of importance): blink frequency, eye height, objective task difficulty and baseline-related pupil diameter. Finally, LOSOXV was performed. The resulting testing MAE was 1.057, with a minimum testing error of 0.609 and a maximum testing error of 1.894. Statistical analysis: Effect of time-on-task and perceived effort on the features To observe the impact of the relationship of mental fatigue with the perceived effort and increasing time on the features, all features were analysed using LMM, with perceived effort, time-on-task and day number as the fixed effects. The models were reduced to the optimised model for each feature, which resulted in the elimination of some of the fixed effects in the end. The results of the main effects are shown in Table 4. The model coefficient β and its standard error are depicted in the table. A positive β indicates that the dependent variable increases with increasing independent variable, and a negative β indicates that the dependent variable decreases with increasing independent variable. The language of the experiment did not affect any of the features. The optimised model for saccade duration did not contain any fixed effect, and so this feature is omitted from the table.
The variation in the features with respect to time-on-task is shown in the corresponding figures. Discussion In the present study, we modelled mental fatigue for healthy individuals performing cognitively demanding eye-typing tasks. Cognitive load of varying degree was generated using a working memory task of memorising 10 sentences of two levels of task difficulty-easy and difficult. The fatigue level showed a significant increase after each session, composed of five trials, and the terminal fatigue level was higher when the second session was difficult. The prediction of the fatigue level on a six-point Likert scale using RFR resulted in a 22% improvement on the baseline MAE. On addition of objective task difficulty as a feature, the explained variance of the model increased by 9% in comparison to the model without the feature objective task difficulty. The features selected by the final model-in decreasing order of importance-were blink frequency, eye height, objective task difficulty and baseline-related pupil diameter. As expected, the increase in fatigue level was significant after both sessions, but only in the second session did the task difficulty have an effect on the fatigue level. Moreover, the difference between the terminal and the initial level did not depend on the order of the difficulty levels of the sessions. This indicates that there may be a non-linear relationship between task difficulty, time-on-task and mental fatigue. The increase in the subjective fatigue level was higher after the second session (0.714) compared to after the first session (0.299). We know from the literature that evaluation of the fatigue experienced can lead to re-evaluating the effort on the task and the performance generated from the effort applied [21]. This is observed in the data: the participants evaluated their fatigue level after the first session, which may have prompted them to invest more effort in the second session, regardless of the task difficulty in the second session, resulting in the perceived effort being higher in the second session, as observed during the manipulation check. This in turn may have resulted in an increase in the fatigue level after the second session. At the same time, performance features such as typing speed and ANSR improved with time-on-task, as seen in Fig 4, depicting the application of higher effort. The ability to apply sustained effort on a task to achieve maximum performance has been termed conation [51]. This concept can help to explain a non-linear relationship between mental fatigue, task difficulty and time-on-task. Conation provides a divergence from the resource-based theory of fatigue, which delineates a limited capacity of mental resources available for tasks, where applying effort on a task consumes some of this capacity, leaving reduced resources available for subsequent tasks. The Framework for Understanding Effortful Listening (FUEL) is a model based on Kahneman's attention model [52], and can potentially be extended to mental fatigue. The model bridges the concepts of effort, motivation level and task demands, and claims that an increase in task demands or motivation can result in an increase in the effort applied to the task. In this study, the re-evaluation of fatigue after the first session and conation, along with the link between effort and motivation under increased task demands from the FUEL model, could explain the observed increase in perceived effort during the second session.
Prediction of the fatigue level using eye-based data has previously been performed as a binary classification [17,18]. However, mental fatigue classification on a continuous scale has more uses in real-life fatigue management [18]. In this study, an RFR model of the fatigue level on a six-point Likert scale predicted the 20% testing data with an MAE of 1.179. If the fatigue level on the six-point Likert scale had been collapsed into two classes, a misclassification would have corresponded to a mean absolute error of 1.51. In comparison, our regression model resulted in a lower prediction error. While this could have direct applications for non-intrusively classifying mental fatigue in people with neurological disorders, who use eye-typing in daily life, the suggested model still needs to be re-evaluated for the target population. The addition of task difficulty to the list of features also improved the variance in the data explained by the model by 9%, compared to the model without task difficulty. Although the MAE did not improve with the inclusion of the task difficulty, it was shown to be the third-most important feature in determining mental fatigue. This suggests the importance of modelling task difficulty and cognitive load in determining mental fatigue. For future applications, recognising the difficulty level of the task could improve the prediction accuracy of mental fatigue. The best performing machine learning algorithm was RFR, a non-linear model with four features, including task difficulty. Statistical analysis methods were undertaken to understand the working of the machine learning model. Two of the four features-baseline-related pupil diameter and eye height-showed a linear effect of time-on-task and correlated with the subjective fatigue level with an absolute correlation value greater than 0.1. The third, and the most important, feature selected-blink frequency-not only showed effects of time-on-task and correlated with the fatigue level, but also showed strong effects of the perceived effort. The feature blink burst ratio showed conflicting effects of time-on-task and correlation with the fatigue level, compared to blink frequency. While blink frequency decreased with time-on-task, blink burst ratio did not show any effect of time-on-task. Although both features showed a positive correlation with the fatigue level, only blink frequency was selected by the RFR model. The only other difference between the features was that blink frequency also showed effects of the perceived effort. Blink frequency was selected as the most important feature by the model. This working of the model indicates that the fatigue level might be controlled by both time-on-task and perceived effort. The generally low variance explained by the machine learning model (30%) can be attributed to the complex nature of the relations between mental fatigue, time, cognitive load and possibly other related variables such as motivation, circadian rhythm and food and caffeine intake [1,10,53], which were not controlled for or included in the scope of this study. The LMMs showing the effect of perceived effort, time-on-task and day number on the features suggest that several features did not behave as expected with respect to time-on-task. As per the model coefficient values (β), blink frequency decreased with increasing time-on-task while saccade amplitude and saccade peak velocity increased.
A possible explanation for the blink frequency could be the increase in the effort applied, indicated by the perceived effort, during the second session, as the participants attempted to concentrate more while performing the second session and thereby blinked less often. Another explanation could be that the increase in the effort applied was accompanied by an increase in the arousal level, resulting in an increase in saccade peak velocity [54], and thereby increased saccade amplitude as time-on-task increased. The features examined here have not previously been studied in combination with an interactive eye-based task, and the effect of such an interaction may have affected the behaviour of the features in response to cognitive load and time-on-task. All the features analysed using LMM showed a stronger effect of the perceived effort than of time-on-task. The balancing of the order of the difficulty levels on different days could have reduced the average effect of time-on-task over each day. There are additional limitations in this study. No objective measurement of fatigue was conducted, using e.g. attention tests [55], which could have confirmed the subjective fatigue level. Although the participants performed two trials for practice, the eye-typing task was not a common task in the participants' everyday lives, and there was a large learning effect over the days. Finally, all the features examined within the study, with the exception of baseline-related pupil diameter and eye height, are known to be affected by cognitive load, which may further have resulted in unexpected variations in the features, such as the improvement in performance features with time-on-task. Conclusion Results from the current study indicate that mental fatigue prediction as a regression problem has a feasible solution. Moreover, mental fatigue, perceived effort and time-on-task are interlinked in a complex manner, and the modelling of mental fatigue depends on both time-on-task and perceived effort. We were able to make reasonable predictions of the fatigue level during an eye-typing task using three eye-based features: blink frequency, eye height and baseline-related pupil diameter. On including task difficulty as an additional feature to predict the fatigue level, the variance explained by the machine learning RFR model improved. These results are a step towards a better understanding of the cognitive state of mental fatigue. Finally, this work contributes to the development of a non-intrusive method for continuous mental fatigue detection that could benefit both people with neurological diseases and the general working population.
2021-02-24T06:16:39.926Z
2021-02-22T00:00:00.000
{ "year": 2021, "sha1": "9e20b2ce459663438feeeea85f979f9dfbe34408", "oa_license": "CCBY", "oa_url": "https://doi.org/10.1371/journal.pone.0246739", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "16214a15f15cb0d8b64198fe7dfbe4f32a9ee77e", "s2fieldsofstudy": [ "Psychology" ], "extfieldsofstudy": [ "Medicine" ] }
263669561
pes2o/s2orc
v3-fos-license
Frequency and burden of disease for SARS‐CoV‐2 and other viral respiratory tract infections in children under the age of 2 months To evaluate the frequency and burden of disease of SARS‐CoV‐2 and other respiratory viruses in children under the age of 2 months. Although viral respiratory tract infections are frequent in early life, the coronavirus disease 2019 (COVID-19) pandemic might have changed the burden of respiratory viruses in children.SARS-CoV-2, responsible for the COVID-19 pandemic in January 2020, 1 has had a dramatic impact worldwide in terms of mortality, health system saturation, lockdowns, social distancing measures, and school/work closures. 2,38][9] Successive epidemic waves of COVID-19 caused by different SARS-CoV-2 variants have occurred in 2021 and 2022 10,11 : the Alpha in February 2021, Delta in July 2021, and Omicron variants from November 2021 until at least the winter of 2022-23.The emergence of new variants has increased children's susceptibility to SARS-CoV-2 infections. 10,11However, the true burden of COVID-19 (relative to diseases caused by other viruses affecting the respiratory tract) in young children has yet to be clearly determined. The nonpharmacological measures employed to contain COVID-19 epidemics have had an impact on the circulation of these viruses in children, with a decrease in viral disease during the lockdown periods of 2020 and early 2021-even in very young babies. 12,135][16] Since 2020, the use of respiratory virus panels by multiplex polymerase chain reactions (PCRs) to screen nasopharyngeal swabs has increased 17,18 ; this technique provides a better assessment of the impact of viral infections.However, there are few published data on the burden of COVID-19 relative to other respiratory tract diseases caused by viruses in children under the age of 2 months. 19e primary objective of the present study was to determine the frequency of SARS-CoV-2 infection in children under the age of 2 months during a 1-year epidemic period of COVID-19.The secondary objectives were to (i) determine the frequency of infection by variants of COVID-19 in children under the age of 2 months, (ii) study the frequency of positive multiplex-PCR tests for other respiratory viruses, during the same epidemic period, and (iii) compare the burden of disease for SARS-CoV-2 versus the other respiratory viruses. | Study design and inclusion criteria This retrospective, cross-sectional, single-center study was conducted at Lille University Hospital between March 1, 2021, and February 28, 2022.We included all children under the age of 2 months who had been admitted to our institution during the study period and then tested for SARS-CoV-2 using a reverse transcriptase (RT)-PCR assay.In contrast, we excluded children aged 2 months or over, repeat samples (from children having been tested several times during a single hospital stay), and duplicate patient files. | Definitions and outcomes The positive detection of SARS-CoV-2 and of other respiratory viruses using multiplex RT-PCR in a nasopharyngeal swab was defined according to each assay's manufacturer's recommendations.Analyzer (both from Qiagen ® ). 16The testing panels' respective target viruses are listed in Supplementary Material 1. 
| Study procedures, data collected, and ethical aspects Clinical data of potential value in the study's analyses were collected from each infant's electronic health records: demo- | Statistical analysis Statistical analyses were performed using Epi-Info 6.04fr software (Centers for Disease Control and Prevention).Quantitative variables were expressed as the mean (standard deviation).Categorical variables were expressed as the frequency (percentage), with 95% confidence intervals (CIs) quoted when appropriate.Fisher's exact test was used to compare values of categorical variables, and the Mann-Whitney test was used to compare values of quantitative variables, when appropriate.The threshold for statistical significance was set to p < 0.05. | RESULTS During the study period, a total of 727 children under the age of 2 months with data on an RT-PCR test for SARS-CoV-2 performed at the Lille University Hospital were included in the study (mean [standard deviation] age: 0.9 [0.6] months; boys: 56.8%) (Table 1). Of these, 514 (71%) were tested in the ED and 213 (29%) during a hospital stay (Figure 1).Of the 514 tested in the ED, 499 (97%) had symptoms of infection.Of these 514, 61 were tested for SARS-CoV-2 only (12%) without differences between each period of circulation of each variant, mainly before surgery (n = 29) or the presence of symptoms (39%) and a pre-op check-up (35%), followed by transfer to another unit (16%), the mother's SARS-CoV-2 infection during pregnancy (3%), and contact with an infected person during the hospital stay (2%).The SARS-CoV-2 One or more siblings, n (%) 267/400 (67) 3).Among the children admitted to the ED, The low positivity rate during the Alpha period might have been due to a nationwide, 28-day period of lockdown (from April 3, 2021, to May 3, 2021) and an increase in social distancing measures. 20However, it was more probably due to a lower tropism of the initially circulating SARS-CoV-2 variants toward the respiratory tract and low immune system stimulation in babies and children. 3,22Yuan et al. found that there were differences in immune response between children and adults with COVID-19, which may be associated with the low positivity rate during the alpha period. 3This study showed that the production of the pro-inflammatory cytokines IL-2, IL-4, and IL-6 was greater in adults than in children, and is associated with a greater risk of .01 Abbreviations: LRTI, lower respiratory tract infection; SD, standard deviation; UTI, urinary tract infection.a Comparison between children with a positive SARS-CoV-2 test and children with a positive test for another respiratory virus. lung damage.Studies conducted during the same time period but outside lockdown found a similar, low prevalence of SARS-CoV-2 in children, e.g., 2.7% of the children (whether symptomatic or not) tested at Cologne University Medical Center (Germany). 23In a US network with a prospective surveillance of acute respiratory illnesses at seven pediatric medical centers, a SARS-CoV-2 infection was detected in 5.9% of children (65% of whom were under 5 years of age) tested in the ED between March 2020 and August 2021. 
24though many studies have focused on the symptoms and impact of COVID-19, few have compared the data on SARS-CoV-2 with those on other respiratory viruses.In the present study, two signs were significantly more frequent in children with a positive SARS-CoV-2 test than in children with another respiratory virus: fever (71.6%, and particularly isolated fever [20%]) and diarrhea (13.3%).These signs were even more frequent in the French National Pandor cohort of SARS-CoV-2-positive patients under the age of 90 days: 92% had fever and 24% had diarrhea.| 107 In contrast to the present study, the Pandor cohort included only hospitalized children. 25In a meta-analysis of data on 7780 SARS-CoV-2-positive children from 26 countries performed between January and May 2020, 20% were asymptomatic, 59.1% had fever, and 6.5% had diarrhea. 4The high proportion of Omicron variants (61%) in our study might explain the high frequency of diarrhea in the SARS-CoV-2-positive patients.In a South African study of children hospitalized for a SARS-CoV-2 infection, the prevalence of diarrhea was also high (20%) for the Omicron variant. 11A retrospective cohort study in the United States compared Delta and Omicron variant SARS-CoV-2 infections in a group of 79,592 children under the age of 5 years. 26The burden of SARS-CoV-2 disease was significantly lower during the Omicron period, with fewer ED visits (18% vs. 26% during the Delta period) and fewer hospital admissions (1% vs. 3%, respectively).In our study, the burden of disease of SARS-CoV-2 infections in children under 2 months of age appeared to be moderate, with a median LOS of 3 days (in a conventional or post-ED ward in 96% of cases) and a requirement for respiratory support in 20% of cases. Despite the implementation of drastic social distancing measures during the study period, the frequency of infection with respiratory viruses other than SARS-CoV-2 was high (58%).Enterovirus/ rhinovirus (28%) and RSV (27%) infections were frequent, whereas influenza virus (0.5%) and adenovirus infections (0.9%) were rare.An Austrian study performed in a pediatric hospital during the winter of 2020-2021 among children under 2 years old with acute respiratory symptoms (n = 449) found similar results. 13though influenza virus and SARS-CoV-2 infection rates are known to fall significantly when non-pharmaceutical measures are taken, the latter have only a transient effect on RSV infections, 27 and no effect on rhinovirus infections. 28However, the widespread use of nasopharyngeal swabs in the EDs might have increased the detection of respiratory viruses even in asymptomatic children. 16,17In our study, SARS-CoV-2 and another respiratory virus were detected concomitantly in 3% of cases overall; this value is in line with the literature data. 29 characteristics were those usually encountered, for example, a comorbidity rate (including prematurity) of 39%. 26The hospital admission rate was high, as is usual for a population of children under 2 months of age seen in the ED. 7,19,25ere are several possible explanations for the relatively low COVID-19 burden of disease in children. 30First, immune crossreactivity may result from contact with the other coronaviruses that predominate in younger children. 31Second, the innate immune system is frequently stimulated and trained in young children via contact with viruses and live vaccines; this might enhance immune defenses against SARS-CoV-2. 
32Third, SARS-CoV-2's receptor (pulmonary angiotensin-converting enzyme 2) is poorly expressed in the respiratory tract of young children, which might explain the low rates and low severity of infections in this population. 22Finally, the absence of high-risk factors for severe COVID-19 (i.e., obesity, smoking, and diabetes) in children may have a role. 33 | CONCLUSIONS These tests are described in more detail below.Each result was validated by a medical biologist.The epidemic periods linked to the dissemination of particular SARS-CoV-2 variants were defined with regard to the French Public Health Agency's guidelines 20 : the "Alpha period" ran from February 1 (the start of data collection) to June 28, 2021, the "Delta period" from June 29th to December 27, 2021, and the "Omicron period" from December 28, 2021 to February 28, 2022 (the end of data collection).The Alpha period was marked by the compulsory wearing of masks in public from the age of 6 and a nationwide, 28-day period of lockdown in April 2021, interregional travel restrictions, closure of indoor public places, and a curfew ending in June 2021.During the Delta and Omicron periods, masks were mandatory indoors, and the number of people allowed in public places was limited.Schools remained open throughout the study period.The endpoint to the primary objective was positivity for SARS-CoV-2 among children under the age of 2 months tested in our center during the study period.The endpoint to the secondary objectives was (i) positivity for newly circulating variants of SARS-CoV-2 in each defined period, (ii) a positive multiplex rt-PCR test for other respiratory viruses, (iii) the hospital admission rates, length of stay (LOS) in hospital, intensive care unit admission, and the requirement for respiratory support for SARS-CoV-2 and other respiratory viruses. All SARS-CoV-2 RT-PCR test results were recorded in an Excel ® file (Microsoft Corporation) at the medical center's virology laboratory.Each SARS-CoV-2 and/or multiplex RT-PCR test performed in a child under the age of 2 months counted as a single inclusion.At that time, indications of these RT-PCR were fever and/or respiratory symptoms and/or digestive symptoms and/or oro-pharyngeal symptoms on admission.The only two situations in which SARS-CoV-2 was tested alone were contact with a person positive for SARS-CoV-2 and before surgery at the ED, or before another procedure during the hospital stay.When considering children with more than one PCR test during a given hospital stay, only the first test was included in our analysis.Two real-time SARS-CoV-2 RT-PCR tests were used: the Simplexa™ COVID-19 Direct (Diasorin), with S gene and ORF1 gene targets, and the Xpert ® Xpress SARS-CoV-2 (GenXpert, Cepheid), with E, N2, and SPC targets. 
21Two syndromic testing panels were used: the BioFire ® Respiratory Panel 2.1 plus on the FILMARRAY multiplex RT-PCR system (both from bioMérieux Lyon) and the QIAstat-Dx Respiratory SARS-CoV-2 Panel on the QIAstat-Dx graphic and administrative data (age, sex, siblings, and hospital admission date), the child's medical history (previous hospital admissions, notified contact with an infected person, current symptoms, and comorbidities), virologic test results, care provision (respiratory support, antimicrobial therapy, intravenous fluid support, enteral nutrition, vascular filling, and administration of vasoactive amines), and the disposition (hospital admission through the emergency department [ED], a conventional ward or the intensive care unit, transfer to another hospital, LOS, death).For children tested before surgery or during a hospital stay unrelated to a viral infection (e.g., preterm children), we did not analyze the care provision or the LOS because they might not have been representative of the burden of the infectious disease with regard to the LOS and the care provided.The reason for performing an rt-PCR test for SARS-CoV-2 in these patients was documented: a pre-op check-up, the presence of COVID-19 symptoms, an infection in the child's mother, contact with an infected person, transfer to a hospital ward, or unknown.The children's parents or legal representatives were provided with information about the study's objectives and procedures.In line with the French legislation on non-interventional, retrospective analyses of routine medical practice, consent to participation was not required.However, the parents or legal representatives could object to inclusion of their child's data.In strict compliance with France's MR-004 reference methodology (established by the French National Data Protection Commission), the study was approved by Lille University Medical Center's data protection commission (reference: DEC21-364). T A B L E 3 Clinical data and medical care on children under the age of 2 months with a rt-PCR SARS-CoV-2 performed in the emergency department between March 1, 2021, and February 28, 2022. 81% were subsequently admitted to a hospital unit (76% of the SARS-CoV-2-positive children and 82% of children with another respiratory virus detected).Children with a positive SARS-CoV-2 test were less likely to have required oxygen therapy, respiratory support, enteral nutrition, or intensive care admission than children with other respiratory viruses.The duration of oxygen therapy, of high-flow oxygen support, and the LOS were shorter in SARS-CoV-2-positive children than in children with another respiratory virus ( T A B L E 4 Characteristics of SARS-CoV-2 positive patients versus RSV-positive patients admitted between March 1, 2021, and February 28, 2022. 
However, 16 (26%) of the SARS-CoV-2-positive patients had a viral co-detection (particularly enterovirus/rhinovirus and RSV). The present study had the limitations of a retrospective, single-center study. Although the study's design ensured that the proportion of missing data was very low, we lacked data on the medical care given to children transferred to other hospitals (14.8% of the total). Twenty-two percent of children had SARS-CoV-2 testing alone, without a multiplex RT-PCR test (only 10% at the ED, mainly before surgery); this may constitute a differential screening bias. To avoid selection bias, we decided that data from patients tested when already in hospital should be excluded from our secondary analyses. Furthermore, we did not consider LOS data for children admitted for reasons other than a respiratory tract infection (e.g., prematurity, surgery, heart disease, etc.); in many cases, these children had a long hospital stay that was only transiently modified by an intercurrent viral infection. To avoid a patient effect, we also chose to exclude the small proportion of children with more than one tested nasopharyngeal sample (none of which were positive for SARS-CoV-2). The present study also had a number of strengths. It is one of the first to have generated comparative data on the frequency of infection by SARS-CoV-2 and other respiratory viruses in very young children. Moreover, we were able to analyze a large body of data, thanks to systematic collection over a 1-year period in a large university hospital.

In children under the age of 2 months, SARS-CoV-2 infections appear to have at most an equal or lesser burden of disease than infections by RSV or other respiratory viruses. This study should guide practitioners toward a management no different from that of other respiratory viral infections, without over-treating or over-medicalizing young children with a SARS-CoV-2 infection. Systematic viral detection with rapid results could be introduced to reduce the proportion of antibiotic prescriptions in children with viral infections.

Table 1. Characteristics of all children under the age of 2 months with a SARS-CoV-2 RT-PCR test in the emergency department or in a hospital ward between March 1, 2021, and February 28, 2022. a Born before 37 weeks of gestational age.

Figure 1. Flow chart of the patients and positive tests for SARS-CoV-2 and other respiratory viruses.

Table 2. Results of SARS-CoV-2 RT-PCR tests and multiplex RT-PCR tests for other respiratory viruses among children under 2 months of age between March 1, 2021, and February 28, 2022. Abbreviation: ED, emergency department. a 15 SARS-CoV-2 variants not screened.

Relative to children with a positive RSV test, children with a positive SARS-CoV-2 test were significantly more likely to have fever and diarrhea and significantly less likely to have bronchiolitis (Table 3). They were also less likely to require respiratory support and enteral nutrition and had a shorter LOS relative to children with RSV (Table 4). Relative to children with a viral respiratory infection other than RSV, children with a positive SARS-CoV-2 test were more likely to have had fever and less likely to have signs of bronchiolitis (Supplementary Material 2). Relative to children with a negative SARS-CoV-2 test, children with a positive SARS-CoV-2 test were significantly older, had fewer comorbidities (including prematurity), were more likely to have had fever, and had a significantly shorter LOS (Supplementary Material 3).
Nutritional Status and the Influence of the Vegan Diet on the Gut Microbiota and Human Health

The human gut microbiota is considered a well-known complex ecosystem composed of distinct microbial populations, playing a significant role in most aspects of human health and wellness. Several factors such as infant transitions, dietary habits, age, consumption of probiotics and prebiotics, use of antibiotics, intestinal comorbidities, and even metabolic diseases may continuously alter microbiota diversity and function. The study of vegan diet–microbiota interactions is a rapidly evolving field, since plenty of research has been focused on the potential effects of plant-based dietary patterns on the human gut microbiota. It has been reported that well-planned vegan diets and their associated components affect both the bacterial composition and metabolic pathways of the gut microbiota. Certain benefits associated with medical disorders but also limitations (including nutritional deficiencies) have been documented. Although the vegan diet may be inadequate in calorific value, it is rich in dietary fiber, polyphenols, and antioxidant vitamins. The aim of the present study was to provide an update of the existing knowledge on the nutritional status of vegan diets and the influence of their food components on the human gut microbiota and health.

Introduction

During the last decades, plant-based and vegetarian eating patterns, proven to be associated with several beneficial health outcomes, have been adopted by an increasing proportion of individuals in Western societies [1,2]. Vegetarianism is characterized by a diversity and heterogeneity of dietary practices [3], with the exclusion of certain food groups such as meat, poultry, and similar products, and a focus mainly on fruits, vegetables, grains, pulses, nuts, seeds, and honey. The diet may potentially include seafood (pescetarianism), or eggs (ovo-vegetarianism), dairy products (lacto-vegetarianism), or both (ovo-lacto-vegetarianism) [4,5]. In contrast, the most strictly regimented form of vegetarianism (veganism) is characterized by complete abstinence from the consumption of meat and food of animal origin, such as dairy, eggs, and honey [4], with a diet consisting solely of plant foods like grains, vegetables, fruits, legumes, nuts, seeds, and vegetable fats and oils [6]. Veganism is usually adopted as the result of ethical principles related to animal rights and welfare, since the way these products are acquired is considered violent and barbaric [4,7], but also due to spiritual, moral, and religious values [3,8], socioeconomic considerations [9], and environmental concerns as well, focusing on the energy and natural resource savings in food production [5,10]. Thus, disparities in the prevalence rates of both vegetarianism and veganism have been observed in data reported across several countries, but also between different territories within the same country. Although the percentage of vegans has increased by 350% during the last decade [4], today only 0.1% to 1% of the adult population in Germany claims to follow a vegan diet [11,12]. In contrast, the constantly increasing prevalence of vegetarianism and veganism within North Americans has been reported to be as high as 5% and 2%, respectively [8], whereas the prevalence of veganism among US individuals with specific religious beliefs was reported to be 7.6% [13].
Other studies involving countries worldwide have yielded variable prevalence rates of vegetarianism within the general population: 0.77% in China [14], 0.79% in Italy [15], 1.5% in Spain [16], 3.3% in Germany [11], 3.8% in Norway [17], 4.1% in Finland [18], from 3% to 5% in Latvia [19], up to 11.2% in Australia [20], 33% in South Asia [21], and from 4.8% to 15.6% in Sweden [17]. In addition, vegetarian diets have also become popular due to potential health benefits among adolescents and young adults, especially females; in a recent report, prevalence rates from 8% to 37% and from 1% to 12% were quoted for female and male Australian teenagers, respectively [22]. Much research has been focused on the potential effects of veganism on health and wellness. Certain benefits associated with multiple medical comorbidities, but also limitations such as nutritional deficiencies with respect to vitamins, minerals, and proteins, have been reported [1,3,4,18,20,23,24]. Veganism has been widely accepted as the prototype of a healthy diet in relation to the gut microbiota [25][26][27], cardiovascular disease [24], diabetes, cancer, chronic kidney disease, cataracts, obesity, normal pregnancy outcomes [1,3,8,9,[28][29][30], metabolic syndrome, the brain [2], bone health [31], and more. It has been reported that vegans demonstrated a risk reduction of 75% for hypertension, 47%-78% for type 2 diabetes mellitus, and 14% for total cancer incidence [8]. In addition, several studies have demonstrated that plant-based diets have been associated with reductions in mortality rates [13], although much research is needed on the long-term health of consumers [32]. In addition, many people interested in vegan diets also adopt generally healthy lifestyle habits including regular physical activity, abstinence from smoking and alcohol [3,33], consequential social interactions, emotional regulation, and cognitive and behavioral investments [34]. Nowadays, successful social media campaigns focus on the visibility and acceptance of veganism among athletes and people working in health- and fitness-related fields [35]. The aim of the present study was to provide an update of the existing knowledge on nutritional status in vegan diets and their influence on the human gut microbiota and health.

Vegan Nutrition

The vegan diet does not include products made out of animals; thus, most of the nutrient intake is based on the lower levels of the food pyramid. This kind of nutrition includes a high intake of fruits and vegetables and a low intake of both sodium and saturated fat [20]. Apart from the nutrients, plants contain numerous phytochemicals, including carotenoids and polyphenols. Examples of such compounds are the polyphenols found in grapes, berries, and nuts; indole-3-carbinol in cruciferous vegetables such as sprouts, cabbage, and cauliflower; isoflavones found in legumes, including clover, soy, and lupine; and lycopene in tomatoes. In general, these substances, which are referred to as food ingredients, have no additional nutritional value, but they can affect various metabolic pathways of the body, providing multiple health benefits [36][37][38][39]. However, if a vegan diet is not appropriately planned, a reduction of caloric intake and nutritional deficiency of fatty acids, proteins, vitamins, and minerals may appear [1].

Macronutrients

Carbohydrates may be subdivided into digestible and indigestible compounds.
Plant-based diets composed of fiber-rich foods supply indigestible carbohydrates, also called "dietary fiber", including non-starch polysaccharides, lignin, resistant starch, and non-digestible oligosaccharides [40]. These macronutrients, which are intrinsic and intact in plants [41], are resistant to digestion in the small intestine and pass into the large intestine, where they are fermented and produce specific bacterial metabolites, such as short-chain fatty acids (SCFAs), associated with beneficial effects [40]. Plant foods that are rich in fiber include whole grains, vegetables, fruits, and legumes. Dietary fiber appears to confer benefits to various aspects of human health: cardiovascular disease, body weight management, immunity, and intestinal health including colorectal cancer prevention, laxation, regularity, and appetite control (satiation, satiety) [41]. In particular, prebiotics such as oligosaccharides of natural (e.g., human milk oligosaccharides) or synthetic origin (e.g., galacto-oligosaccharides, fructo-oligosaccharides), phytochemicals, polyphenols and derivatives, carotenoids, and thiosulphates exert several beneficial effects [42]. These effects include increases in bifidobacteria, lactobacilli, and calcium absorption, decreases in other bacterial populations and protein fermentation, improvement in gut immunity, production of beneficial metabolites, and effects on gut barrier permeability [43]. The content of fatty acids and saturated fats is particularly low in a plant diet, leading to weight loss, an improved lipid profile, and reduced blood pressure, associated with the prevention of coronary heart disease and other chronic diseases [44][45][46]. Plant foods contain just small amounts of monounsaturated and polyunsaturated fatty acids, mainly α-linolenic acid (ALA); omega-3 polyunsaturated fatty acids can be obtained from most vegetable oils, cereals, walnuts, chia seed, rapeseed, linseed, camelina, canola, and hemp [4,45,47,48]. Microalgae supplements containing docosahexaenoic acid (DHA), as well as DHA-fortified foods, regular supplies of ALA foods, and supplements are also good sources of essential fatty acids [1]. One of the major concerns about the vegan diet is the low protein intake, providing the lowest energy for body functions when compared to vegetarians and meat consumers [49]. The quality of a protein is determined by its digestive efficiency and its content of essential amino acids. High digestibility is provided by purified or concentrated vegetable proteins such as soy and gluten, while the majority of vegetable products are characterized by low digestibility. It has been well documented that the presence of plant cell walls and antinutritional agents (enzyme inhibitors, tannins, phytates, glucosinolates, isothiocyanates), as well as food processing and heat treatment, may be inhibitory factors in protein digestibility [4,50]. In general, if certain plant foods are consumed in appropriate combinations, they can provide all the essential amino acids for human nutrition, although some of them may be absent in certain plants, including lysine in cereals, rice, and corn, and methionine in legumes [4]. Vegans usually include sufficient amounts of legumes in their diet, a protein source that has been reported as a potential preventive factor against stomach, prostate, and colon cancer [1].
In addition, consumption of legumes may demonstrate a cardioprotective effect by decreasing the levels of circulating serum lipids and lipoproteins, including total cholesterol, low-density lipoprotein (LDL), and triglycerides [51].

Micronutrients

Although the vegan diet may have inadequate calorific value, it is rich in antioxidant vitamins and phytochemicals. A minimal amount of vitamins is usually required for metabolic and homeostatic functions [20]. Plant foods clearly supply vitamins to this kind of diet, including vitamin C (L-ascorbic acid) and carotenoids. Carotenoids are precursors of vitamin A, such as β-carotene or provitamin A, which is found in abundance in carrots. Polyunsaturated vegetable oils contain significant amounts of liposoluble vitamin E. Selenium, a trace element which is very important for the production of glutathione peroxidase, is also found in many plant foods [52,53]. Vitamins also appear to have a protective role in various neoplastic diseases such as hematological (vitamin C), glioma, lung (vitamin A), prostate, breast, colorectal (vitamin E and selenium), oropharyngeal, bladder, skin, uterine, and ovarian cancers (selenium) [53]. In contrast, there are significant deficiencies concerning other vitamins, including vitamin B12 and vitamin D. Vitamin B12 is a water-soluble vitamin that is found predominantly in products of animal origin, playing a vital role in hematopoiesis and the nervous system; a severe deficiency may result from either impaired absorption or nutritional insufficiency [23,54,55], leading to several comorbidities such as megaloblastic anemia, stroke, Alzheimer's and Parkinson's diseases, vascular dementia, cognitive impairment, and more [2]. In order to prevent vitamin deficiency due to inadequate dietary intake, there is an urgent need for vegans to incorporate reliable vitamin B12 sources, including vitamin B12-fortified foods such as fortified soy and rice beverages, certain breakfast cereals, or vitamin B12 dietary supplements, which usually provide high absorption capacities [1,4,18,24,31,50]. Other sources of vitamin B12 include vegetables like broccoli, asparagus, and bean sprouts, specific types of nutritional mushrooms, tea leaves, tempeh, edible algae including dried green laver (Enteromorpha spp.) and purple laver (Porphyra spp.), other microalgae (klamath, Chlorella), and cyanobacteria (spirulina, Nostoc). However, the vitamin content may vary among these products, since many of them contain only traces of vitamin B12 and should not be considered an adequate source for the daily intake [6,15,20]. High prevalence rates of vitamin B12 deficiency (up to 80%) have been reported among Hong Kong and Indian populations, where vegans rarely include fortified foods or supplements in their diets [24]. Vitamin D, related to both calcium absorption and bone mineralization, plays an essential role in bone health [31]. Its levels depend predominantly on adequate sun exposure, and thus supplementation might not be necessary, especially among individuals living in low-latitude regions. Low serum 25-hydroxyvitamin D concentrations have been documented in vegan societies, especially in winter or spring, or in those living at high latitudes [6,43,56]. Vitamin D3 (cholecalciferol) can originate from plants or animals, whereas vitamin D2 (ergocalciferol) is produced by the action of ultraviolet radiation. Mushrooms treated under ultraviolet light can be an important source of vitamin D [31,45].
Alternative vitamin D sources are breakfast cereals and nondairy substitutes for milk other than soy, like oat, almond, and rice drinks [6]. If sun exposure and intake of fortified foods are insufficient to meet the nutrient requirements, vitamin D supplements are recommended, both for children and adults [18,57]. Deficiencies in minerals such as iodine, calcium, and zinc may also occur. Iodine deficiency is very common among vegans, often leading to acquired hypothyroidism [58]. Vegan sources of iodine include iodized salt and sea vegetables containing various amounts of the mineral [45]. There are abundant plant-based sources of calcium; however, calcium bioavailability is inversely proportional to the amounts of oxalate, and to a lesser extent, to the phytate and fiber found in vegetables [45,50]. High-calcium foods include several green leafy vegetables, tofu, tahini [1], and fortified foods such as cereals, soy, rice, almond and coconut beverages, orange and apple juices, and to a lesser extent unsweetened cranberry and low-sodium tomato juices [59]. Nevertheless, the best absorption is provided by low-oxalate vegetables, including broccoli, kale, turnip greens, Chinese cabbage, and bok choy [60]. Vegans have the opportunity to consume as much iron as non-vegans daily. However, both iron and ferritin levels in the blood are lower in vegans than in non-vegans. The absorption of iron derived from heme is significantly higher compared to non-heme iron intake from plant foods. This can be counteracted by consuming ascorbic acid (citrus, strawberries, kiwi), a component necessary for the absorption of non-heme iron [1,50]. Legumes, beans, whole grains, integral cereals, dark-green leafy vegetables, fruits, seeds, and nuts can be used as sources of iron [16,30,61]. Zinc acts as a catalyst in iron metabolism and is not as easily absorbed from plant sources as it is from animal products, which usually supply half of the zinc intake [4]. In vegans, low plasma zinc levels can lead to iron deficiency anemia. Zinc-rich plant foods are wholemeal bread, peas, corn, nuts, carrots, whole grains, wheat germ, soybeans, cabbage, radish, watercress, and legumes [4,30,62].

Gut Microbiota Composition and Functional Aspects

The microbial composition of the human gut microbiota consists of several taxa of microorganisms, such as bacteria, viruses, protozoa, and fungi [63]. It is estimated that the human gastrointestinal tract harbours approximately 100 trillion microorganisms, comprising more than 1000 bacterial species [63,64]. Bacteroidetes, Firmicutes, Actinobacteria, Proteobacteria, Fusobacteria, and Verrucomicrobia are primarily found as part of the normal gut flora, where Bacteroidetes and Firmicutes represent 90% of the total bacterial phyla, and Actinobacteria, Proteobacteria, and Verrucomicrobia are represented to a lesser extent [64][65][66]. A brief summary of commonly encountered bacteria in the gut microbiota is given in Table 1. Several factors such as infant transitions (birth gestational age, type of delivery, milk-feeding practices, infant weaning), dietary habits, age, ethnicity, cultural and lifestyle habits (exercise, alcohol consumption), geographic and environmental factors, stress, obesity, consumption of probiotics and prebiotics, use of antibiotics, intestinal comorbidities, and metabolic diseases may continuously alter bacterial composition and diversity [26,27,63,67,68].
In vivo studies have shown that the gut microbiota composition exerts an important role in maintaining the function of the intestinal barrier. Indeed, low-fiber, high-protein, and high-fat diets have been documented to increase both intestinal inflammation and permeability by altering the translocation of bacterial populations and metabolites that modulate inflammation [69]. In addition, metabolites derived from the gut microbiota, including bacteriocins, SCFAs, microbial amino acids, and vitamins, seem to play a vital role in activating the intestinal immune response, thus defending against external pathogens [70]. Recently, the term "metabolic endotoxemia" was introduced to describe a significant increase in bacterial lipopolysaccharide (LPS) plasma levels observed both in animals and humans on high-fat diets [71,72]. Under such conditions, the increase in LPS plasma levels, caused by an imbalance in the homeostasis of the microbiota, induces a low-intensity systemic inflammation which has been shown to be associated with obesity, diabetes, and insulin resistance [73]. Although there is a variety of functional competencies among different intestinal microbial communities, the normal gut microbiota, which is considered the largest organ and the most complex system of microorganisms [74], plays a crucial role in most aspects of human health and well-being, including digestion of foods, metabolic breakdown of drugs and toxins [75], nutrient metabolism [76,77], antimicrobial protection [26], development and homeostasis of immunity [75,78], the gut-brain axis [79,80], the gut-liver axis [74,81], and gastrointestinal and cardiovascular health [26,82,83]. Today, high-throughput microbiome sequencing technologies, including 16S rRNA gene sequencing, whole-genome metagenomics, metatranscriptomics, metaproteomics, and metabolomics, offer the most considerable insight into the gut microbiota ecosystem and its metabolic functions [68]. An imbalance or alteration in microbial composition and activity, also called "gut microbiota dysbiosis", has been associated with several clinical manifestations, although it is not yet clear if dysbiotic patterns are the cause or the consequence of the disease [84]. These disorders include obesity, type-2 diabetes mellitus, neurological and neuropsychiatric comorbidities (Alzheimer's and Parkinson's diseases, hepatic encephalopathy, autism spectrum disorder, depression, amyotrophic lateral sclerosis), allergy, carcinogenesis, autoimmune diseases (celiac disease, systemic lupus erythematosus, rheumatoid arthritis, psoriasis, atopic dermatitis), infectious diseases (Clostridium difficile infection), cardiovascular disease, and chronic kidney, hepatic, and gastrointestinal diseases [63,66,74,75,81,[85][86][87][88]. Among the most common disorders of the gastrointestinal tract related to gut microbiota dysbiosis are the two major types of inflammatory bowel disease, ulcerative colitis and Crohn's disease [75]. Irritable bowel syndrome, diverticular disease, and colorectal cancer have also been reported [74] (Figure 1).

Impact of Vegan Food Components on the Human Gut Microbiota

It has been well documented that long-term dietary patterns can alter both the diversity and function of the gut microbiota, while it is not well known how the short-term consumption of different diets may alter the gut microbiota composition and functionality [82,89].
Food polymers, including fibers, polyphenols, fats, and proteins, are commonly involved in the main gut microbiota metabolic pathways [79]. Omnivore, ovo-lacto vegetarian, and vegan diets are sources of nutrients for microorganisms and also have their own microbiota, conferring heterogeneous effects on both the abundance and diversity of the gut microbiota [67]. Vegan and vegetarian gut microbiota profiles may not differ, and both include a greater profusion of beneficial bacteria when compared to that of omnivores. In contrast, the human gut microbiota appears to be altered to a greater extent in omnivores than in vegans and is composed of bile-tolerant, potentially harmful microorganisms, since animal-based diets are usually characterized by increased levels of fecal bile acids [25]. Bile acids, which are cholesterol-derived compounds synthesized in hepatocytes, enable the emulsification of dietary fats and the intestinal absorption of lipids and lipophilic vitamins, act in several metabolic and inflammatory pathways, and alter the composition of the gut microbiota through the farnesoid X receptor and G protein-coupled membrane receptor 5, directly and indirectly [90,91]. Moreover, since the occurrence and abundance of antimicrobial resistance genes have been found to be significantly lower in the gut microbial communities of vegans than in those of omnivores, animal-based diets may be involved in the spread of antimicrobial resistance within the gut microbiota environment [92]. Characterization of human gut bacterial diversity is usually determined by enterotyping, interpreted as a bacterial Prevotella to Bacteroides ratio (P/B) [93]. Although the gut microbiota structure in strict vegans has not been precisely specified, and several environmental, cultural, and genetic factors have been associated with Western to non-Western gut community differentiation, it has been reported that the P/B ratio was higher in persons with a natural fiber and starch intake than in individuals following a Western-type diet [94,95]. Thus, the gut microbiota is dominated by Prevotella species in persons with plant-based dietary habits, such as populations living in African, Asian, and South American societies, while a Bacteroides-driven enterotype is predominant in individuals living in Western societies who consume diets rich in animal protein, amino acids, and saturated fats [75,93]. Interestingly, Prevotella spp. have been found to provide effective anti-inflammatory properties in certain diseases [26], including inflammatory arthritis [96] and multiple sclerosis [97], whereas Bacteroides spp. are usually involved in several infections, providing antimicrobial resistance to a variety of antibiotics, but may also act as useful commensals to the human host [98]. Dietary fiber may influence the gut microbial community in terms of the type, number, and consistency of bacterial species. Thus, indigestible carbohydrate diets rich in whole grain and wheat bran are associated with an increase of Bifidobacterium spp. and Lactobacillus spp., whereas resistant starch and whole grain barley may also increase lactic acid bacteria including Ruminococcus spp., Eubacterium rectale, and Roseburia spp. The same cannot be stated for other members of the Firmicutes phylum such as Clostridium and Enterococcus species, which are both reduced [26,87].
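Computationally, the P/B enterotyping mentioned above is just a ratio of relative abundances. A purely illustrative sketch follows; the abundance values and dominance labels are ours, not from the cited studies.

```python
def pb_ratio(prevotella: float, bacteroides: float) -> float:
    """Prevotella/Bacteroides ratio from relative abundances (0-1)."""
    if bacteroides == 0:
        return float("inf")
    return prevotella / bacteroides

# Fiber/starch-rich diets are reported to push this ratio up,
# Western-type diets to push it down; cut-offs vary between studies.
print(pb_ratio(0.30, 0.10))  # -> 3.0, a Prevotella-dominated profile
print(pb_ratio(0.05, 0.25))  # -> 0.2, a Bacteroides-driven profile
```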
Both bifidobacteria and lactobacilli demonstrate an exclusive potential for saccharolytic metabolism and have been considered to play a protective role in the human gut barrier by inhibiting the invasion and growth of bacterial pathogens [26,41] (Figure 2). Akkermansia muciniphila, a mucin-degrading bacterium of the intestinal microbiota which may represent 3%-5% of the total microbial community in healthy subjects, has also been related to the enhancement of gut barrier function, prevention of gut bacterial translocation, inflammation, obesity, intestinal homeostasis, and metabolism [99]. Animal studies have shown that prebiotic supplementation may consistently promote the abundance of this bacterium in the gut [100]. Further studies in animals have shown that a purified membrane protein from A. muciniphila, or the pasteurized bacterium, improves metabolism in obese and diabetic mice [101]. Recently, an exploratory study in obese and overweight human volunteers showed a beneficial effect associated with A. muciniphila supplementation [102]. In addition, studies in mice that compared the intake of a crude fraction of wheat bran, the same fraction with reduced particle size, and wheat bran-derived arabinoxylan oligosaccharides showed that only the crude fraction of wheat bran was followed by an increase in Akkermansia spp. in the gut microbiota, thus providing beneficial effects in the context of obesity [103]. Apart from the different impact on gut microbiota composition, the intake of the crude fraction of wheat bran with reduced particle size also led to the observation of hepatic anti-inflammatory effects [104]. Fermentable dietary fiber has been shown to serve as a substrate for intestinal bacterial metabolism. The end products of this bacterial metabolism include certain metabolites such as SCFAs [65]. The main SCFAs include acetate and propionate (used as substrates for lipid, glucose, and cholesterol metabolism), and butyrate (which plays a key role in immunoregulation and the maintenance of tissue barrier function), serving as energy substrates for the gut epithelial cells [40,83]. They probably provide anti-inflammatory effects in the intestine [87]. They are also involved in several other important physiological functions, including a decrease in colonic pH and circulating cholesterol, improvement of glucose tolerance and insulin sensitivity, growth inhibition of emerging Enterobacteriaceae pathogens (Salmonella spp., adherent-invasive Escherichia coli), stimulation of water and sodium absorption, energy provision to the colonic epithelial cells, inhibition of cancer cell proliferation by interfering with multiple mechanisms, and prevention of high-fat diet-induced obesity by stimulating fat oxidation [66,68,84,105]. Therefore, SCFAs improve blood lipid profiles, glucose homeostasis, and body composition, reduce body weight [93], strengthen the mucosal barrier [87], and act protectively against several disorders including type 2 diabetes mellitus, inflammatory bowel disease, and immune diseases [26] (Figure 2). Apart from fibers, polyphenols, which are also abundant in vegan diets, increase both Bifidobacterium spp. and Lactobacillus spp., providing cardiovascular protection as well as antibacterial and anti-inflammatory effects [26]. Most of these compounds exhibit structural diversity and consist of flavonoids, phenolic acids, stilbenes, lignans, and secoiridoids.
They pass into the colon and are metabolized by colonic bacteria, which influence their bioactivity, while a tiny proportion is possibly absorbed in the small intestine [77]. Fruits such as grape, blueberry, sweetsop, mango, and citrus, as well as vegetables, medicinal plants, microalgae, herbs, seeds, cereals, and beverages including coffee, tea, cocoa, and red wine are good sources of polyphenols [40]. Beneficial interactions between tea or soy isoflavones and the intestinal microbiota have been reported, whereas wild blueberries, a good source of polyphenols, have been shown to increase Bifidobacterium and Lactobacillus species [106]. A decrease in pathogenic Clostridium perfringens and Clostridium histolyticum is probably attributable to the consumption of fruit, seed, tea, and wine polyphenols [87]. It has also been reported that a proanthocyanidin-rich extract from grape seeds significantly increased the number of Bifidobacterium spp., while genera of the Enterobacteriaceae family were decreased [107]. In another study, the consumption of red wine was associated with an increase of bifidobacteria and species of the Enterococcus, Bacteroides, and Prevotella genera, whereas nonbeneficial bacteria such as Clostridium spp. were inhibited, suggesting possible prebiotic benefits of red wine polyphenols and resulting in the reduction of both cholesterol and C-reactive protein (CRP) [108]. A significant increase in high-density lipoproteins and a decrease in CRP and triglyceride serum levels have also been reported after consumption of cocoa-derived polyphenols [87] (Figure 2). Fats are considered to be an efficient source of energy, and on the basis of current data, both the quality and quantity of dietary fat intake may influence the gut microbiota composition [65]. Vegan diets are low-fat diets containing monounsaturated and polyunsaturated fats, altering the microbial intestinal composition by increasing the Bacteroidetes to Firmicutes ratio. On the contrary, animal saturated fats increase genera of Proteobacteria and Firmicutes and also decrease Bifidobacterium spp., which may provoke inflammation, leading gradually to metabolic derangements [26] (Figure 2). There is strong and consistent evidence that the consumption of animal-fat diets can be a major driving factor in cardiovascular disease pathogenesis through the increase of both total serum cholesterol and LDL levels [87]. The protein-energy status of vegans has been reported to be lower than that of omnivores [109]. Studies examining the impact of dietary proteins on the microbiota confirmed that both Bifidobacterium and Lactobacillus species as well as intestinal SCFA levels were increased after the consumption of pea protein, while both pathogenic C. perfringens and Bacteroides fragilis were decreased [87]. The beneficial effect of the consumption of walnuts on the gut microbiota composition, by increasing Ruminococcus spp. and Bifidobacterium spp. and decreasing Clostridium spp., has also been reported [26] (Figure 2). In contrast, animal protein intake appears to have a significant role in the pathogenesis of inflammatory bowel disease, since it may alter the gut microbiota composition by increasing Bacteroides spp., Alistipes spp., and Bilophila spp., and decreasing beneficial Lactobacillus spp., Roseburia spp., and E. rectale [87].
In addition, diets with high animal protein intake are associated with cardiovascular disease, since the consumption of red meat has been shown to alter the gut microbiota composition, resulting in the production of a proatherogenic metabolite (trimethylamine-N-oxide) in mice [110]. Among micronutrients, certain vitamins, including vitamin K and B-complex vitamins (biotin, cobalamin, folate, nicotinic acid, pantothenic acid, pyridoxine, riboflavin, thiamin), all involved in bacterial metabolism, can be synthesized by the gut microbiota [77]. On the other hand, studies performed in human volunteers showed that carotenoids such as blackcurrant lutein affect the microbiota composition by increasing Bifidobacterium spp. and Lactobacillus spp. and reducing Bacteroides spp. and Clostridium spp. [111].

Conclusions

Vegan diets have been gaining in popularity among Western societies in recent years, as several clinical disorders and malignancies caused by the consumption of animal-based products still occur frequently in developed countries. The evaluation of such comorbidities may reveal further modes of pathogenesis, including the consumption of such diets. The often-claimed effect of plant-based diets on human health is attributed to the activity of major nutritional components that confer health benefits to the host. Such components include dietary fiber, monounsaturated and polyunsaturated fats, proteins, polyphenols, and micronutrients. Nevertheless, one of the major concerns about the vegan diet is the restriction of certain nutrients like proteins and fats. Thus, vegans should always follow a comprehensive diet plan in order to avoid a lack of essential nutrients. It has been well documented that different factors may contribute to gut microbiota composition and variation. The gut microbiota is composed of highly diverse microbial communities which interact and compete for such nutrients, producing metabolites associated with most aspects of human health and well-being. Vegan diets and their main components affect both the bacterial composition and metabolic pathways of the gut microbiota by increasing beneficial microorganisms. However, more studies are needed to determine the impact of these diets on the gut microbiota. Further, a better understanding of the individualized nature and diversity of the gut microbiota may help explain disease susceptibility and will lead to new approaches in the medical field.
Hidden Hypocalcemia as a Risk Factor for Cardiovascular Events and All-Cause Mortality among Patients Undergoing Incident Hemodialysis

Lower corrected calcium (cCa) levels are associated with a better prognosis among incident dialysis patients. However, cCa frequently overestimates ionized calcium (iCa) levels. The prognostic importance of the true calcium status defined by iCa remains to be revealed. We conducted a retrospective cohort study of incident hemodialysis patients. We collected data on iCa levels immediately before the first dialysis. We divided patients into three categories: apparent hypocalcemia (low iCa, <1.15 mmol/L, and low cCa, <8.4 mg/dL), hidden hypocalcemia (low iCa despite normal or high cCa), and normocalcemia (normal iCa). The primary outcome was the composite of all-cause death and cardiovascular diseases after hospital discharge. Among the 332 enrolled patients, 75% showed true hypocalcemia, defined as iCa <1.15 mmol/L, 61% of whom showed hidden hypocalcemia. In multivariate Cox models including other potential risk factors, true hypocalcemia was a significant risk factor (hazard ratio [HR], 2.34; 95% confidence interval [CI], 1.03–5.34), whereas hypocalcemia defined as corrected calcium <8.4 mg/dL was not. Furthermore, hidden hypocalcemia was significantly associated with an increased risk of the outcome compared with normocalcemia (HR, 2.56; 95% CI, 1.11–5.94), while apparent hypocalcemia was not. Patients with hidden hypocalcemia were less likely to receive interventions to correct hypocalcemia, such as increased doses of active vitamin D or administration of calcium carbonate, than patients with apparent hypocalcemia (odds ratio, 0.45; 95% CI, 0.23–0.89). Hidden hypocalcemia was a strong predictor of death and cardiovascular events, suggesting the importance of measuring iCa.

hypocalcemia was a risk factor for mortality in hospitalized patients with heart failure and CKD.11 All these studies defined calcium status by corrected calcium levels. No clinical studies have investigated the prognostic implication of ionized calcium except for a study involving patients undergoing hemodialysis.6 Thus, we conducted a retrospective cohort study of patients just before the initiation of dialysis, in whom calcium misclassifications easily occur. The effect of pre-dialysis care of patients with CKD on the prognosis after the initiation of dialysis has received considerable attention in recent years.8,12-15 The current study aimed to examine 1) the prevalence of hidden hypocalcemia just before the initiation of dialysis and 2) its prognostic implications after the initiation of dialysis.

Methods

Study design and populations. In this retrospective cohort study, we enrolled patients undergoing incident hemodialysis with ionized calcium measured between January 2008 and December 2016 at Osaka University Hospital. We excluded patients aged ≤20 years, patients who started dialysis in the intensive care unit, and patients without data concerning albumin levels. This study was performed in accordance with the Helsinki Declaration. The Ethics Committee of Osaka University Hospital approved the study and waived informed consent based on the retrospective study design (approval number: 18026-2). We provided patients with the option to opt out of participation.

Data collection and laboratory measurements. We collected the latest data just before the initiation of dialysis.
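As an aside, the eligibility rules above amount to a simple row filter over a patient table. A hypothetical sketch follows; the DataFrame and column names are placeholders, not the authors' data structures.

```python
import pandas as pd

def build_cohort(patients: pd.DataFrame) -> pd.DataFrame:
    """Apply the study's exclusion criteria to a raw patient table."""
    return patients[
        (patients["age_years"] > 20)                 # exclude age <= 20 years
        & (~patients["started_dialysis_in_icu"])     # exclude ICU starts
        & (patients["albumin_g_dl"].notna())         # require albumin data
    ]
```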
In the analysis, we used the data from within 3 months prior to the initiation of dialysis. We measured ionized calcium in whole blood samples by using a blood gas analyzer (Siemens 348 and Radiometer ABL800 FLEX before and after November 2015, respectively) immediately after drawing the samples. We measured serum total calcium and albumin levels by the methyl xylenol blue method and the bromocresol purple method, respectively. We corrected serum total calcium levels for serum albumin levels if albumin levels were <4.0 g/dL: corrected calcium (mg/dL) = calcium (mg/dL) + 0.8 × (4.0 − albumin [g/dL]).16

Outcomes. We followed up patients until August 2017. We obtained prognostic information from medical records or unified questionnaires from dialysis facilities. The primary outcome was all-cause mortality and hospitalization for cardiovascular disease (CVD). CVD included myocardial infarction, unstable angina, heart failure, arrhythmia, hemorrhagic and non-hemorrhagic stroke, peripheral vascular events (including amputation), aneurysm dissection, or rupture. Patients were censored at the date of death or kidney transplant, or when they were lost to follow-up.

Statistical analyses. Corrected and ionized calcium were categorized as low (<8.4 mg/dL and <1.15 mmol/L, respectively), normal (8.4-10.0 mg/dL and 1.15-1.29 mmol/L, respectively), and high (>10.0 mg/dL and >1.29 mmol/L, respectively).17,18 We examined the prevalence of hypocalcemia defined by corrected calcium levels and ionized calcium levels and the risk factors for low ionized calcium levels using stepwise backward logistic regression models. Most of the patients belonged to one of the following three categories: apparent hypocalcemia (low ionized calcium and low corrected calcium), hidden hypocalcemia (low ionized calcium despite normal or high corrected calcium), and normocalcemia (normal ionized calcium). We summarized the baseline characteristics of these three groups. Data were presented as the number (percent) for categorical variables and as the mean (SD) for continuous variables with a normal distribution or the median (interquartile range) for those with a skewed distribution. We compared baseline characteristics between apparent and hidden hypocalcemia. The significance of differences in continuous variables between groups was tested using Student's t test or the Mann-Whitney test, as appropriate. The difference in the distribution of categorical variables was tested using Fisher's exact test. We compared baseline characteristics between the three groups using analyses of variance, Kruskal-Wallis tests, or Fisher's exact tests. We examined the prognostic impact of hypocalcemia, defined by either ionized or corrected calcium, using the log-rank test, Kaplan-Meier curves, and Cox proportional hazards models. We also used these methods to examine whether the three calcium statuses (apparent hypocalcemia, hidden hypocalcemia, and normocalcemia) predicted the primary outcome after hospital discharge.
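The albumin correction and the cut-offs above fully determine each patient's category, so they translate directly into code. A minimal sketch, assuming only the definitions given in this section; the function names and example values are ours.

```python
def corrected_calcium(total_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Albumin-corrected calcium; correction applied only if albumin < 4.0 g/dL."""
    if albumin_g_dl < 4.0:
        return total_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)
    return total_ca_mg_dl

def calcium_status(ica_mmol_l: float, cca_mg_dl: float) -> str:
    """Classify a patient into the three categories used in the study."""
    low_ica = ica_mmol_l < 1.15
    low_cca = cca_mg_dl < 8.4
    if low_ica and low_cca:
        return "apparent hypocalcemia"
    if low_ica:
        return "hidden hypocalcemia"       # low iCa despite normal/high cCa
    if 1.15 <= ica_mmol_l <= 1.29:
        return "normocalcemia"
    return "other"                          # e.g., high ionized calcium

# Example: total Ca 8.6 mg/dL with albumin 3.0 g/dL gives cCa 9.4 (normal),
# but iCa 1.05 mmol/L is low -> hidden hypocalcemia.
print(calcium_status(1.05, corrected_calcium(8.6, 3.0)))
```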
We constructed several multivariable models: Model 1 adjusted for age, sex, eGFR, and history of diabetes; Model 2 adjusted for the covariates in Model 1 plus covariates related to a past history of CVD (percutaneous coronary intervention [PCI] or coronary artery bypass grafting [CABG], heart failure, pacemaker implantation, peripheral artery disease, and cerebrovascular infarction); Model 3 adjusted for the covariates in Model 2 plus chronic kidney disease-mineral and bone disorder (CKD-MBD) parameters (phosphate, alkaline phosphatase [ALP], and intact parathyroid hormone [iPTH]) and pH; and Model 4 adjusted for the covariates in Model 3 plus nutrition or inflammation parameters (body mass index [BMI], albumin, and C-reactive protein [CRP]). In addition, we examined the adjusted hazard ratios of each category defined by ionized and corrected calcium status using Model 4, with the normal ionized and corrected calcium group as the reference. Furthermore, we examined associations between intervention for hypocalcemia and the primary outcome among patients with low ionized calcium levels using Cox proportional hazards models. A multivariate model was adjusted for age, sex, eGFR, and CKD-MBD parameters. Intervention for hypocalcemia was defined as increased doses of vitamin D receptor activator (VDRA) during hospitalization, administration of calcium carbonate at discharge, or both. Increased doses of VDRA included a switch from oral to intravenous VDRA. We compared the percentage of patients receiving intervention for hypocalcemia between the apparent hypocalcemia and hidden hypocalcemia groups using Fisher's exact test and a logistic regression model. We also compared the percentage of patients receiving both increased doses of VDRA and administration of calcium carbonate at discharge. In these prognosis analyses, we excluded those who reached the outcome during hospitalization for dialysis initiation.

… reached the primary outcome. Thirty-four patients died and 48 were hospitalized for CVD. The Kaplan-Meier analysis showed that patients with low ionized calcium demonstrated a significantly higher likelihood of developing the primary outcome (log-rank P = 0.01) (Fig. 3A), while no significant difference was observed between patients with low and normal corrected calcium levels (Fig. 3B). The Cox proportional hazards model showed that low ionized calcium levels were a significant risk factor for the primary outcome as compared to normal ionized calcium levels, regardless of the model employed (Table 2A). Even in Model 4, with all covariates included, the hazard ratio (HR) was 2.34 (95% confidence interval [CI], 1.03-5.34; P = 0.04). However, this was not the case when we used corrected calcium levels instead of ionized calcium levels to define hypocalcemia. The Kaplan-Meier analysis of the three groups (apparent hypocalcemia, hidden hypocalcemia, and normocalcemia) demonstrated that the prognosis of patients with hidden hypocalcemia was the worst in terms of the primary outcome (log-rank P = 0.002) (Fig. 3C). The univariate Cox proportional hazards model showed that hidden hypocalcemia was significantly associated with developing the primary outcome as compared to normocalcemia (HR, 2.51; 95% CI, 1.41-4.47; P = 0.002) (Table 2B). After adjustment for other covariates, hidden hypocalcemia remained associated with a higher likelihood of developing the primary outcome in Model 4 (adjusted HR, 2.56; 95% CI, 1.11-5.94; P = 0.03) (Table 2B).
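As a rough illustration of the nested adjustment scheme above (Models 1-4), the sketch below uses the lifelines library; the DataFrame and every column name are hypothetical placeholders, not the authors' code.

```python
import pandas as pd
from lifelines import CoxPHFitter  # pip install lifelines

# Hypothetical column names for the covariate sets described in the text.
BASE = ["hidden_hypocalcemia", "age", "male", "egfr", "diabetes"]          # Model 1
CVD_HISTORY = ["pci_cabg", "heart_failure", "pacemaker", "pad", "stroke"]  # + Model 2
CKD_MBD = ["phosphate", "alp", "ipth", "ph"]                               # + Model 3
NUTRITION = ["bmi", "albumin", "crp"]                                      # + Model 4

MODELS = {
    "Model 1": BASE,
    "Model 2": BASE + CVD_HISTORY,
    "Model 3": BASE + CVD_HISTORY + CKD_MBD,
    "Model 4": BASE + CVD_HISTORY + CKD_MBD + NUTRITION,
}

def fit_cox_models(df: pd.DataFrame) -> dict:
    """Fit one Cox proportional hazards model per covariate set."""
    fitted = {}
    for name, covariates in MODELS.items():
        cph = CoxPHFitter()
        cph.fit(df[covariates + ["time_days", "event"]],
                duration_col="time_days", event_col="event")
        fitted[name] = cph
    return fitted

# fitted["Model 4"].hazard_ratios_["hidden_hypocalcemia"] would then hold the
# fully adjusted HR for hidden hypocalcemia.
```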
Figure 1. Flow diagram. In total, 341 patients were screened and 332 patients were enrolled (9 excluded). Among the enrolled patients, 18 patients reached the primary outcome and 9 patients withdrew from hemodialysis before leaving the hospital. Follow-up data after discharge were not obtained for 11 patients. These 38 patients were excluded from analyses of the association between calcium status before the initiation of dialysis and the primary outcome after leaving the hospital.

Apparent hypocalcemia was not a significant risk factor in univariate or multivariate Cox proportional hazards models (Table 2B). These results were similar in the analyses with normal ionized and corrected calcium as the reference (Table 3).

Figure 2. Scatter plot of ionized and corrected calcium at the initiation of dialysis. Apparent hypocalcemia, hidden hypocalcemia, and normocalcemia accounted for 29%, 46%, and 23% of the enrolled patients, respectively. Most of the patients belonged to one of these three groups.

Intervention for hypocalcemia. Patients with hidden hypocalcemia were less likely to receive an intervention for hypocalcemia, such as an increased dose of VDRA or the use of calcium carbonate, than patients with apparent hypocalcemia (69% vs. 83%; Fisher's exact test, P = 0.03) (Fig. 4). Moreover, the percentage of patients with both interventions was higher in the apparent hypocalcemia group than in the hidden hypocalcemia group (39% vs. 13%; Fisher's exact test, P < 0.001). The Cox proportional hazards model restricted to the patients with low ionized calcium levels showed that any intervention for hypocalcemia was associated with a lower risk for the primary outcome in the univariate model (HR, 0.40; 95% CI, 0.24-0.65; P < 0.001) and in the multivariate model (adjusted HR, 0.54; 95% CI, 0.29-0.99; P = 0.047).

Discussion

In this retrospective cohort study, we revealed that true hypocalcemia (ionized calcium <1.15 mmol/L) was found in 75% of the patients undergoing incident hemodialysis. Furthermore, 61% of the true hypocalcemia cases were found to be hidden hypocalcemia. We showed that true hypocalcemia was a risk factor for the composite of all-cause mortality and cardiovascular events, while hypocalcemia defined by corrected calcium was not. Moreover, hidden hypocalcemia proved to be a strong risk factor. To the best of our knowledge, this study is the first to demonstrate the high prevalence of true hypocalcemia and hidden hypocalcemia in pre-dialysis CKD stage 5. Regarding the prevalence of hypocalcemia defined by corrected calcium, our results were consistent with our previous study, which showed a prevalence of approximately 30% among patients in CKD stage 5 in a large CKD cohort.19 The prevalence of hypocalcemia defined by ionized and corrected calcium levels in our cohort (75% and 30%, respectively) was higher than that in the previous study (32% and 16%, respectively).6 Furthermore, in our cohort, few patients showed true hypercalcemia, although 8.9% of the patients showed true hypercalcemia in the previous study.6 The reason for this discrepancy might reside in the different timings of blood sampling; calcium levels were measured just before the first dialysis in this study, while they were measured during the first 91 days of dialysis in the previous study.
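For the 69% vs. 83% intervention comparison above, Fisher's exact test operates on a 2x2 contingency table. The sketch below uses SciPy with assumed counts, chosen only to match the reported percentages; the true group sizes are not given here.

```python
from scipy import stats

# Rows: hidden vs. apparent hypocalcemia; columns: intervention yes / no.
# Assumed counts for illustration (104/151 ~ 69%, 80/96 ~ 83%).
table = [[104, 47],   # hidden hypocalcemia
         [80, 16]]    # apparent hypocalcemia

odds_ratio, p_value = stats.fisher_exact(table)
print(f"OR = {odds_ratio:.2f}, p = {p_value:.3f}")  # OR near the reported 0.45
```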
Corrected calcium levels increased after dialysis initiation among patients with low corrected calcium levels,8 likely because of a positive calcium balance during dialysis,20 active vitamin D treatment, and/or calcium-based phosphate binders. Despite the discrepancy, approximately 60% of the patients with true hypocalcemia were incorrectly categorized as normocalcemic using corrected calcium, both in our cohort and in the previous study. True hypocalcemia defined by low ionized calcium is a significant risk factor for mortality or CVD morbidity. Our results agree with those of a previous report from the Dialysis Outcomes and Practice Patterns Study (DOPPS) showing a worse prognosis for hypocalcemic patients.21 In this previous study, both uncorrected calcium and corrected calcium levels of 7.5 mg/dL or less were associated with a greater mortality in patients with albumin levels higher than 3.8 g/dL, i.e., a subpopulation in whom the correction of calcium was almost unnecessary to infer ionized calcium. A causal relationship between low ionized calcium and the outcome, if any, could involve the following mechanisms. Hypocalcemia leads to heart failure22,23 and arrhythmia.24 Moreover, in patients undergoing hemodialysis, hypocalcemia is more likely to be associated with a positive net balance of calcium during dialysis.19 A positive calcium balance is a risk factor for myocardial infarction, especially in diabetic patients with low PTH.25 This might be explained by exacerbation of vascular calcification. Another possibility is residual confounding by poor nutritional status, since patients with low ionized calcium tended to be older and to have lower BMI, lower albumin levels, and diabetes. In fact, we adjusted for these covariates when studying the association between low ionized calcium levels and the outcome. However, malnutrition cannot be sufficiently explained only by these traditional markers. Prior studies reported an association between higher corrected calcium and poor prognosis.8,9 This association might be partly explained by overestimation of calcium status among patients with hypoalbuminemia, who are at high risk of mortality.26 Since albumin is an important carrier for calcium,1 uncorrected calcium levels are intrinsically dependent on serum albumin levels. In fact, in the aforementioned study,21 levels of 7.5 mg/dL or less were not a high risk for mortality in hypoalbuminemic patients (less than 3.8 g/dL). In other words, serum albumin levels modify the association between uncorrected calcium and mortality. However, when discussing the association between calcium status and prognosis, we should not forget that the equation for corrected calcium cannot be used to infer ionized calcium levels just before dialysis initiation.

Table 3. Associations of calcium status stratified by ionized and corrected calcium with the primary outcome in the final model. Adjusted hazard ratios were estimated using Model 4 with the normal corrected and ionized calcium group as the reference. Low ionized calcium despite normal corrected calcium (hidden hypocalcemia) was a significant risk factor for the primary outcome, while low ionized calcium with low corrected calcium (apparent hypocalcemia) was not. Since there was only one patient with normal ionized and low corrected calcium, the adjusted hazard ratio for this group is not shown.
Although true hypocalcemia defined by low ionized calcium was a significant risk factor, hypocalcemia defined by low corrected calcium was not. Misclassification of low calcium status possibly results in an overestimation of the prognosis. In other words, physicians may overlook the worse prognosis of patients with hidden hypocalcemia on the grounds of their normal corrected calcium levels. Notably, hidden hypocalcemia, and not apparent hypocalcemia, was associated with a higher likelihood of developing the primary outcome, although ionized calcium levels were much lower in the apparent hypocalcemia group than in the hidden hypocalcemia group. Patients with hidden hypocalcemia were more likely to have a history of diabetes, PCI/CABG, and heart failure than patients with apparent hypocalcemia. Additionally, patients with hidden hypocalcemia had higher serum bicarbonate levels (pH) than patients with other calcium statuses, suggesting lower protein intake in this population.27 Low protein intake accelerates body weight loss.28 In fact, patients with hidden hypocalcemia had lower BMI and serum albumin levels than patients with apparent hypocalcemia. Since the definition of cachexia includes weight loss and low serum albumin,28 patients with hidden hypocalcemia might have suffered from cachexia, a complex metabolic syndrome associated with underlying chronic illnesses such as CKD or chronic heart failure.28 Patients with hidden hypocalcemia were likely to have abnormally high ALP despite a lower iPTH than patients with apparent hypocalcemia. High ALP levels relative to PTH are characteristic of osteomalacia,29 possibly derived from vitamin D deficiency.30 Weight loss and vitamin D deficiency, which suggest malnutrition, are risk factors for mortality in patients undergoing hemodialysis.31-33 However, hidden hypocalcemia remained a significant risk factor for the primary outcome after adjustment for a history of diabetes, PCI/CABG and heart failure, ALP, iPTH, pH, BMI, and albumin. Therefore, hidden hypocalcemia might reflect a malnutrition status that cannot be sufficiently explained by these standard nutritional parameters. Patients with hidden hypocalcemia were less likely to receive VDRA or calcium carbonate than patients with apparent hypocalcemia, because physicians may miss true hypocalcemia. Physicians can readily recognize hypocalcemia in patients with apparent hypocalcemia but cannot recognize hidden hypocalcemia unless they check ionized calcium levels. In our study, the ionized calcium data were extracted from the blood gas test, which was performed to evaluate pH or bicarbonate (tCO2). Since the ionized calcium data were "byproducts," physicians may not have checked them. Moreover, patients with apparent hypocalcemia were more likely to receive both calcium carbonate and an increased dose of VDRA than patients with hidden hypocalcemia. This practice pattern suggests a strong intention of physicians to increase serum calcium levels in hypocalcemic patients. Intervention to treat hypocalcemia, such as the administration of VDRA and calcium carbonate, improves hypocalcemic cardiomyopathy.34,35 We found that VDRA or calcium carbonate prescription was associated with lower CVD morbidity and mortality among hypocalcemic patients. Previous observational studies showed that the use of VDRA was associated with a better prognosis among dialysis patients.33,36-39 Furthermore, Inaguma et al.
reported that the use of calcium carbonate before the initiation of dialysis was associated with a better prognosis after the initiation of dialysis 9 . In this context, undertreatment of hypocalcemia might explain the observed higher risk in patients with hidden hypocalcemia. Our observation raises a question about the revised Kidney Disease: Improving Global Outcomes guidelines on CKD-MBD 40 , which argue that asymptomatic or mild hypocalcemia does not need to be corrected, considering the unproven benefits of intervention for hypocalcemia and the potential harm of a positive calcium balance 41,42 . Our study has several strengths. First, we measured the ionized calcium levels immediately after drawing the samples, in sharp contrast to the previous study 6 . Ionized calcium levels vary easily due to CO 2 changes in the samples 43,44 , so fresh samples are required to measure these levels accurately. Second, this is the first study to measure ionized calcium levels immediately before the initiation of dialysis, partly reflecting a patient's nutritional status and pre-dialysis care not influenced by hemodialysis. Our study also has several limitations. First, its single-center design limits generalizability to other populations. Second, the outcomes were not adjudicated, and indications for hospitalization possibly vary across physicians. Further multicenter studies with larger numbers of patients are needed to validate the association between calcium status and hard outcomes.

Conclusion
Hidden hypocalcemia at the initiation of dialysis was a strong risk factor for the composite outcome of all-cause mortality and CVD morbidity. This suggests the importance of measuring ionized calcium levels in patients initiating hemodialysis.
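Adjusted hazard ratios such as those in Table 3 are typically estimated with a Cox proportional hazards model. The following is a minimal sketch with the lifelines library; the column names, toy data, and reduced covariate set are hypothetical, and the paper's actual model 4 covariates are described in its Methods.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical analysis table: follow-up time, event indicator
# (death or CVD morbidity), calcium-status dummy, and two covariates.
df = pd.DataFrame({
    "time_months":          [12, 34, 5, 48, 22, 9, 40, 15],
    "event":                [1, 0, 1, 0, 1, 1, 0, 1],
    "hidden_hypocalcemia":  [1, 0, 1, 0, 0, 1, 0, 1],
    "age":                  [72, 65, 80, 58, 69, 77, 61, 74],
    "albumin":              [3.1, 3.9, 2.8, 4.0, 3.5, 3.0, 3.8, 2.9],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="time_months", event_col="event")
cph.print_summary()  # the exp(coef) column gives adjusted hazard ratios
```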
2020-03-10T15:02:50.072Z
2020-03-10T00:00:00.000
{ "year": 2020, "sha1": "7a1cb26638426996501b7f7659ba454dda1e83ca", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-020-61459-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "7a1cb26638426996501b7f7659ba454dda1e83ca", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
259871403
pes2o/s2orc
v3-fos-license
Investigation of the Physiologic Effect of Prolonged Consumption of Raphia Hookeri Fruit Pulp Aqueous Extract on Renal Functions of Male Wistar Rats The aim of this study is to investigate the physiologic effect of prolonged consumption of Raphia hookeri fruit pulp aqueous extract on the renal functions of male Wistar rats. A total of 32 male Wistar rats weighing 130 g to 200 g were used. The extract was administered orally to each of the test groups, but not the control (group 1), for twenty-eight (28) days. Group 1 rats were given animal feed and water only, group 2 were given 500 mg/kg body weight of the extract, group 3 were given 1000 mg/kg body weight of the extract, and group 4 were given 2000 mg/kg body weight of the extract. Statistical analysis was done using means and standard deviations, with a P value of ≤ 0.05 taken as significant. The results showed that sodium ion levels in all test groups were marginally raised compared to group 1; potassium ion levels showed only non-significant variations across the control and test groups, though levels in the test groups were slightly reduced with respect to group 1. Groups 2, 3 and 4 showed elevated bicarbonate ion levels, and the elevation was greatest, and statistically significant, in group 4 compared to group 1. Chloride ion showed non-uniform and non-significant changes across both the control and test groups. Creatinine showed non-significant effects of the extract compared to group 1. The increases observed in urea and creatinine indicate that kidney function would deteriorate with prolonged consumption, negatively altering the renal physiology of the male Wistar rats.

INTRODUCTION The ripe boiled Raphia hookeri fruit pulp is locally called "Ogbusi" by the Abua people in Abua/Odual local government area of Rivers State, Niger Delta region, Southern Nigeria. The Ogbusi (Raphia hookeri fruit pulp) is usually soaked in water or stored in a refrigerator to preserve its nutrients and is commonly consumed with tapioca. The inhabitants of Emoh village in Abua and the people of Abua/Odual LGA hold that the Ogbusi boosts immunity, lowers plasma glucose, reduces blood pressure, reduces fat and boosts hematopoiesis, among other effects. The Ogbusi is frequently consumed in Abua because the plant is abundant there. The Raphia hookeri plant is also found in other West African countries. It is a palm belonging to the family Palmae (Palmaceae) that originated in tropical West Africa; raphia palms thrive predominantly in swampy areas which are mostly hydromorphic (Imogie AE, 2008; Mphoweh et al., 2009). The raphia palm grows as a monocarpic crop with a terminal inflorescence that undergoes hapaxanthic flowering. At the vegetative stage, the crop is characterized by continuous stem elongation (Adeneyi AA, Akpabio UD, 2011), with an inflorescence that is the sink for the photosynthate which is tapped and referred to as "palm wine". Other parts of the raphia palm tree, such as the leaves, roots, branches and seed, are also exploited for craft work and traditional medicines (Mphoweh et al., 2009; Obahiagbon FI, 2009; Ezeagu et al., 2003). The inflorescence emerges from the base of the fan-like leaves, and it bears the male and female flowers that will develop to form the fruits and the seeds.
All parts of the plant are well utilized by locals for various purposes, ranging from building materials such as twine and rope, to personalized items like baskets, placemats, hats and shoes, to consumables like oil, wine and food (Akinola et al., 2010; Afolayan et al., 2014). Raphia fruit pulp is a good source of phytochemicals and some micronutrients and is locally consumed as a snack (Tatianan et al., 2023). Its fruit is large and cone-shaped with a single hard nut having an outer layer of overlapping reddish-brown scales; between the outer layer of scales and the hard seed is a yellow, mealy, oil-bearing mesocarp or pulp (Mbaka et al., 2012). Similarly, Ndon BA (2003) described the Raphia hookeri fruit as large and cone-shaped with a hard nut having an outer layer of rhomboid-triangular, overlapping reddish-brown scales; between the outer layer and the seed is a yellow, oil-bearing mesocarp or pulp (Ndon BA, 2003). The pulp extract of Raphia hookeri has been shown to contain vitamins C and E, carotenes, niacin, alkaloids, saponins, flavonoids and phenols, which explains its antioxidant activity (Edem et al., 1984; Akpan and Usoh, 2004; Dada et al., 2017). Flavonoids and tannins, as phenolic compounds in plants, are a major group of compounds that act as primary antioxidants by scavenging free radicals (Polterait, 1997). The pulp has been reported to contain useful and therapeutic nutrients and chemicals. It is hard and often boiled before consumption. Given its hard and relatively dry nature, attributed to its high fiber content, it could be conveniently processed into flour as an alternative form for consumption, or added to pastries that are less diversified in nutrients. The pulp is known by locals as an appetizer and aphrodisiac (Mphoweh et al., 2009). Many locals use it for medicinal purposes, and it has been reported to contain phytochemicals with antimicrobial properties (Ogbuagu MN, 2008). Consumption of the Ogbusi (fruit pulp) of the Raphia hookeri plant is abysmally low, and this may be due to little or no knowledge of its medicinal benefits to renal function, in spite of its reported high fiber, mineral, vitamin, and phytochemical contents (Ogbuagu MN, 2008). Hence, the present study is aimed at evaluating the effect of Raphia hookeri fruit pulp aqueous extract on renal functions in a male Wistar rat model.

ANIMAL PREPARATION A total of thirty-two (32) healthy male Wistar rats weighing 130 g to 200 g were used for this study. The rats were all housed in the preclinical animal house, Faculty of Basic Medical Sciences, University of Port Harcourt, Nigeria. The animals were maintained in a well-ventilated animal house under optimum conditions of humidity, temperature and a natural light-dark cycle, and were allowed free access to food and water. The experimental protocols and procedures used conform to the international guidelines on the care and use of animals in research and teaching (American Physiological Society, 2002).

Acclimatization of the Animals After identification, the animals were weighed using a weighing balance and housed in clean plastic cages with a 12-hour light-darkness cycle for four weeks, to acclimatize them to the environmental conditions of the University of Port Harcourt. The study was generally conducted in accordance with recommendations from the 1983 Declaration of Helsinki on guiding principles in the care and use of animals.

Experimental Extract The aqueous extract of Raphia hookeri mesocarp (fruit pulp) was used for the experiment.
Preparation of the Aqueous Extract of Raphia Hookeri Mesocarp (Fruit Pulp) The maceration method was used. The mesocarp (fruit pulp) was air-dried so as not to destroy the active ingredients, then crushed and soaked in a maceration jar: about 1000 g of the material was dissolved in 2000 ml of water and allowed to stand for 72 hours with continuous agitation to enable a good yield, after which it was filtered. The filtrate was mounted on a water bath to evaporate the liquid content at a temperature of 65 degrees Celsius; after evaporation, the weight of the extract was taken and it was stored for use.

Study Design A total of thirty-two (32) healthy male Wistar rats weighing 130 g to 200 g were used for this study. The animals were divided into a control group and dose-dependent groups; the dose-dependent animals were further divided into three subgroups, two (2), three (3) and four (4). Each subgroup contained eight animals in its cage compartment.

Mode of Administration of Extract In the course of oral administration of the aqueous extract, the following doses were administered to each group, except the control group, for twenty-eight (28) days. The lethal dose (LD 50) of the aqueous extract of Raphia hookeri fruit, calculated using Lorke's method, was 5000 mg/kg body weight; therefore the male Wistar rats were not given extract beyond 5000 mg/kg body weight.
Group 1: Given animal feed and water.
Group 2: Given 500 mg/kg body weight of the extract.
Group 3: Given 1000 mg/kg body weight of the extract.
Group 4: Given 2000 mg/kg body weight of the extract.

EXTRACT ADMINISTRATION The Raphia hookeri aqueous extract was administered orally, daily for 28 days, at a dose of 500 mg/kg body weight to dose-dependent group 2, 1000 mg/kg body weight to dose-dependent group 3, and 2000 mg/kg body weight to dose-dependent group 4.

COLLECTION OF SAMPLE The Raphia hookeri fruits from which the aqueous extract was prepared were purchased from Emoh community in Abua/Odual LGA, Rivers State, Niger Delta region, southern Nigeria, and the rats were sacrificed after 28 days of treatment. The rats were anaesthetized with chloroform one at a time and sacrificed while still under anaesthesia. Each rat was dissected, the liver and kidney of each animal were excised, and blood samples were collected through cardiac puncture for evaluation of renal indices and electrolytes.

LABORATORY TESTS AND ANALYSIS The following laboratory tests were carried out. Sodium (Maruna and Trinder method; mmol/L). Principle: the method is based on the reaction of sodium with a selective chromogen, producing a chromophore whose absorbance varies directly with the concentration of sodium in the test sample. Procedure: the test tubes were labelled as standard, blank, and test; 10 ml of the reagent was pipetted into all test tubes; 0.0 ml of sample was added into the appropriate tubes; the samples were mixed and incubated for 5 min at 25 °C; the absorbance at 630 nm was read and recorded.

Potassium (Tietz N.W. method; mmol/L). Principle: the amount of potassium is determined by the use of sodium tetraphenylboron in a specifically prepared mixture to produce a colloidal suspension, the turbidity of which is proportional to the potassium concentration in the sample. Procedure: the test tubes were labelled as standard, blank and test; 1 ml of the reagent was pipetted into all the tubes.
10 ml of the sample was added into the appropriate tubes; the samples were mixed and left for 3 min at 25 °C; the spectrophotometer was zeroed using the blank at 500 nm; the absorbance was read and recorded.

Chloride (Levinson S.S. method; mmol/L). Principle: the quantitative displacement of thiocyanate by chloride from mercuric thiocyanate, and the subsequent formation of a red ferric thiocyanate complex, is measured colorimetrically. Procedure: the tubes were labelled as test, standard, and blank; 1.0 ml of the reagent was pipetted into the tubes; 10 µl of the samples was added into the appropriate tubes; the samples were mixed and incubated for 5 min at 25 °C; the absorbance at 480 nm was read and recorded.

Bicarbonate (HCO 3 ) (Back Titration Method). Principle: serum HCO 3 was reacted with excess standard HCl, and the remaining HCl was back-titrated with standard NaOH using phenol red as indicator. Procedure: into a 50 ml conical flask were added 250 µl of CO 2 -free distilled water, 200 µl of sample and 1 ml of standard HCl; the mixture was mixed well and 3 drops of phenol red were added. The flask was swirled to release the CO 2 . The resulting solution was titrated with standard NaOH until the initial light yellow colour faded to a light purple at the endpoint. The volume of NaOH that did not take part in the reaction was read, and the reading obtained was divided by two; this gives the concentration of HCO 3 in the sample (mmol/L).

Creatinine (Direct End-Point Method; µmol/L). Principle: creatinine reacts with picric acid in alkaline solution to form a coloured complex, and the amount of complex formed is directly proportional to the creatinine concentration. Procedure: the tubes were labelled as test, standard and blank; 2.0 ml of reagent was pipetted into all the tubes; 0.1 ml of the sample, standard and distilled water was added into the respective tubes. After mixing, the absorbance of the standard and sample was read after 30 seconds (A 1 ) and again exactly 2 min later (A 2 ); the change in absorbance, A = A 2 − A 1 , was calculated for both standard and sample.

Statistical Analysis The data obtained from the present study were subjected to statistical analyses using the Statistical Package for the Social Sciences (SPSS), version 21.0. Statistical significance was determined using one-way analysis of variance (ANOVA) followed by a post-hoc multiple comparison test, and p < 0.05 was considered statistically significant. Values are expressed as mean ± standard error of the mean (SEM).

Ethical Considerations This study was approved by the centre for research ethics.

Table: Effect of aqueous fruit extract of Raphia hookeri (AFERH) on sodium ion (Na + ) in male Wistar rats. Values represent mean ± SEM, n = 4; a significant at p < 0.05 compared to group 1; b significant at p < 0.05 compared to group 2; c significant at p < 0.05 compared to group 3.
Table: Effect of AFERH on potassium ion (K + ) in male Wistar rats (values and footnotes as above).
Table: Effect of AFERH on bicarbonate ion (HCO 3 − ) in male Wistar rats (values and footnotes as above).
Table: Effect of AFERH on chloride ion (Cl − ) in male Wistar rats (values and footnotes as above).
Table: Effect of AFERH on creatinine in male Wistar rats (values and footnotes as above).
Table: Effect of AFERH on urea in male Wistar rats (values and footnotes as above).

DISCUSSION OF RESULTS The analysis of renal physiology revealed, at a p-value ≤ 0.05, a statistically significant decrease in potassium levels across all groups treated with the aqueous fruit extract of Raphia hookeri compared to the control. Hypokalaemia results from potassium depletion and leads to several important disturbances of renal function. Potassium depletion causes tubulointerstitial fibrosis that is generally greatest in the outer medulla; although usually reversible, it may result in renal failure. Its effects include reduced medullary blood flow and increased renal vascular resistance, which may predispose to hypertension, tubulointerstitial and cystic changes, alterations in acid-base balance, and impairment of renal concentrating mechanisms. The analysis also revealed a significant increase in sodium throughout the groups treated with the aqueous fruit extract of Raphia hookeri compared with the control. In renal epithelial cells, a rise in sodium uptake across the apical membrane increases the intracellular sodium concentration, which in turn stimulates the turnover rate of Na+-K+-ATPase and thereby enhances sodium efflux across the basolateral membrane; a sustained elevation of blood sodium is called hypernatremia. A prolonged increase in sodium (hypernatremia) causes dramatic hypertrophy and hyperplasia and a rise in the quantity of Na+-K+-ATPase in the basolateral membrane. This result is in agreement with Armstrong et al. (2002), who stated that an increase of sodium induces an acute diuretic effect. Chloride showed a decrease in groups 3 and 4 and an increase in group 2; a prolonged decrease of chloride causes hypochloremia, an electrolyte imbalance that occurs when there is a low amount of chloride in the body. Metabolic alkalosis is directly associated with hypochloremia, as sodium bicarbonate reabsorption in the proximal convoluted tubule increases in hypovolemic settings with increased levels of angiotensin II (Akoum et al., 2021). The analysis also revealed an increase of bicarbonate throughout the groups treated with the aqueous fruit extract of Raphia hookeri. In metabolic alkalosis there is an excess of bicarbonate in the body fluids, which can occur in a variety of conditions.
It may be due to digestive issues, like repeated vomiting, that disrupt the blood's acid-base balance. It can also be due to complications of conditions affecting the heart, liver and kidneys (Cleveland Clinic, 2023).

CONCLUSION Prolonged consumption of Raphia hookeri fruit pulp has an effect on the renal physiology of Wistar rats. The kidneys eliminate, among other products, urea, uric acid, and creatinine, in addition to metabolizing and eliminating drugs and toxins (Baynes et al., 2006). A decrease in urinary volume therefore causes an increase in the passive reabsorption of urea and a decrease in its elimination, which depends on protein intake and catabolism (Castaño-Bilbao et al., 2009). The results showed a statistically significant increase in creatinine levels in groups 2 and 4 and a significant decrease in group 3 compared with the control group, and decreased urea levels in groups 2 and 3 with a significant increase in group 4. The increases observed in urea and creatinine indicate that kidney function would deteriorate with prolonged consumption, negatively altering the renal physiology of the male Wistar rats. Hence, the findings of this study further confirm the effects of the aqueous fruit extract of Raphia hookeri on the renal functions of Wistar rats.
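To make the statistical analysis described above concrete, the following is a minimal sketch of a one-way ANOVA followed by a simple post-hoc comparison. The group values are hypothetical, and the Bonferroni-corrected pairwise t-tests are an assumption, since the paper does not name its post-hoc test (the study itself used SPSS 21.0).

```python
import numpy as np
from scipy import stats

# Hypothetical serum creatinine values (umol/L) for the four groups (n = 4).
group1 = np.array([55.2, 58.1, 60.3, 57.4])   # control
group2 = np.array([63.5, 66.0, 61.8, 65.2])   # 500 mg/kg AFERH
group3 = np.array([52.1, 50.8, 54.0, 53.3])   # 1000 mg/kg AFERH
group4 = np.array([70.4, 72.9, 68.7, 71.5])   # 2000 mg/kg AFERH

f_stat, p_value = stats.f_oneway(group1, group2, group3, group4)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Post-hoc: pairwise t-tests against the control with Bonferroni correction
# across the three comparisons (an assumed choice of post-hoc procedure).
for name, g in [("group2", group2), ("group3", group3), ("group4", group4)]:
    t, p = stats.ttest_ind(group1, g)
    print(f"{name} vs control: p = {min(p * 3, 1.0):.4f} (Bonferroni-adjusted)")

# Values reported as mean ± SEM, matching the paper's table format.
for name, g in [("group1", group1), ("group4", group4)]:
    print(f"{name}: {g.mean():.1f} ± {stats.sem(g):.1f}")
```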
2023-07-15T15:16:31.213Z
2023-06-04T00:00:00.000
{ "year": 2023, "sha1": "3d65365731e730bfc78391bbf477522169ae8fa7", "oa_license": null, "oa_url": "https://doi.org/10.36348/sijap.2023.v06i07.001", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "068397e044ca9f09ebadddf12a81e1cf3c0bdf59", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
3752489
pes2o/s2orc
v3-fos-license
Female Breast Cancer Mortality Clusters in Shandong Province, China: A Spatial Analysis This study aimed to detect the spatial distribution and high-risk clusters of female breast cancer mortality for the years 2011 to 2013 in Shandong Province, China. Urban-rural differences in the spatial distribution and clusters of disease mortality were also examined. Breast cancer mortality data were obtained from the Shandong Death Registration System (SDRS) for 2011 to 2013 and were adjusted for the underreporting rate. The purely spatial scan statistics method was performed using a discrete Poisson model. Seven significant spatial clusters of high female breast cancer mortality were detected in Shandong Province at the county level; these clusters were mainly located in the eastern, southern, southwestern, central and northern regions. The spatial distributions differed significantly between urban and rural populations, and population ageing influenced the distribution of breast cancer clusters for urban eastern residents. This study provided evidence for the presence of clusters of breast cancer mortality in Shandong, China and found urban-rural differences in the clusters, which is helpful for developing effective strategies to control breast cancer in different areas.

Analyses based on large administrative units can mask clustering among different administrative units in rural or urban areas, which results in a failure to clearly depict real regional disparities 14 . Thus, some developed countries (i.e., the USA) have already attempted to monitor cancer-related data based on smaller units such as cities, counties, towns, and even villages [15][16][17][18] . Monitoring cancer-related data in smaller units can both provide detailed information about the incidence or mortality of the cancer and identify potential clusters more accurately 19 . The spatial statistical analysis developed by Kulldorff and Nagarwalla has been demonstrated to be an effective method for discerning geographical distributions and detecting spatial clusters based on smaller units 20,21 . Previous studies have demonstrated a rural-urban difference in breast cancer; these studies were generally based on samples from surveillance sites or retrospective sampling surveys of death causes in some specific regions of China. However, no studies have explored the clusters of breast cancer mortality based on an entire province, particularly a province with a large population of nearly 100 million (8% of China's population). In addition, no studies, to the best of our knowledge, have examined the difference in spatial distributions between rural and urban areas. To remedy this situation, this study aimed to detect the geographic distribution and high-risk areas of breast cancer mortality based on an entire population at the county level in Shandong, China and to examine the difference in spatial distribution between urban and rural areas.

Methods
Data collection. Shandong Province is located to the east of the Taihang Mountains and adjacent to the Bohai Sea and the Yellow Sea; it is the second-largest province, population-wise, in China (Fig. 1). Data on female breast cancer deaths have been collected through official surveillance by the Shandong Death Registration System (SDRS), which was established in 2006. The SDRS, which initially covered only the population of the Disease Surveillance Points (DSPs), has collected data from the entire Shandong population since 2010.
In this study, a case was defined as a death caused by malignant neoplasm of the breast (C50), according to the International Classification of Diseases, 10th Revision (ICD-10), in women residing in Shandong during 2011 to 2013 22 . These death data were adjusted based on the population-based underreporting investigation conducted in Shandong during 2011 to 2013 by the capture-mark-recapture (CMR) method 23 . CMR is an epidemiological method used to estimate the size of a targeted population with a specific characteristic 24 . Suppose M independent individuals in a randomly acquired sample from a targeted population of N individuals are marked and released back into the original population. Then, another random sample of n independent individuals is acquired from the same targeted population to identify the number of marked individuals (m). An unbiased formula is used to estimate the size of the targeted population from the two independent samplings. The population information was obtained from the Shandong statistical yearbook, and the age-specific population numbers for female permanent residents were acquired for the 142 counties/districts in Shandong during 2011 to 2013.

Urban-rural classification. There are six levels in the Chinese administrative system: national, provincial, prefectural, county, township, and village. In Shandong Province, there are 17 prefectures administering 142 county-level units (counties/districts). Each county-level unit (county or district) usually includes two types of township-level units: townships (in Chinese, "Xiangzhen") and subdistricts (in Chinese, "Jiedao"). The former consists of a town centre and dozens of surrounding villages; the latter comprises urban communities or suburbs. We defined the rural population as people living in townships and the urban population as people living in subdistricts. This classification is virtually identical to the methods recommended by the National Statistical Bureau, which were used in the recent census 25 . There are no rural townships in 17 urban districts, and thus only an urban population was defined in those districts. For 5 counties with predominantly rural populations, data on the subdistrict populations were unavailable, and we treated the population of these entire counties as rural. All in all, we defined 262 sub-county-level units for assessment: 137 urban units and 125 rural units. Of these, 5 units with small populations (4 urban and 1 rural) had no breast cancer deaths during the study period, and we treated their mortalities as zero.

Statistical analysis. To alleviate mortality variations in small populations and areas, the average reported mortality rate (ARMR) of breast cancer was calculated for each county/district as the ratio of total deaths to the corresponding population from 2011 to 2013. The ARMR of breast cancer was displayed on GIS-based maps at the county level to visualize the distribution patterns of breast cancer and the high-risk (hotspot) areas in Shandong Province. The county-level point layer, containing the latitudes and longitudes of the central point of each county, was created in GIS at a scale of 1:100,000 to draw the maps. We performed a purely spatial analysis using a discrete Poisson model to detect the spatial distribution of female breast cancer deaths in Shandong Province. Age and urban/rural status were adjusted to discern their effects on breast cancer clusters. Statistical significance of clustering was assessed by Monte Carlo hypothesis testing 26 , comparing the likelihood ratio test statistic from the observed data set with the test statistics from 999 random data sets generated under the null hypothesis of no clustering. The level of statistical significance was set at 0.05 in this study. The spatial scan statistic detects disease clusters by gradually scanning a window across space and comparing the numbers of observed and expected cases inside the window. In our study, we specified the maximum spatial cluster size as 50% of the population at risk and used a circular scanning window. The most likely cluster was defined as the one with the maximum log-likelihood ratio (LLR) with statistical significance; secondary clusters were also reported that rejected the null hypothesis but did not overlap with the most likely cluster. The calculation of the ARMR at the county level was conducted in Stata Version 12.0 (StataCorp, College Station, TX, USA). The county-level polygon maps at the 1:100,000 scale of the ARMR and disease clusters were drawn using ArcGIS 10.2 (ESRI Inc., Redlands, CA, USA) 27 . The spatial scan statistics analysis for the presence of female breast cancer clusters was performed using SaTScan v9.1.1 28 .
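As a small numerical sketch of the capture-mark-recapture adjustment described above, the snippet below uses the classic Chapman (nearly unbiased) form of the two-sample estimator. The paper does not print its formula, so this specific form and all counts are assumptions for illustration.

```python
def chapman_estimate(M: int, n: int, m: int) -> float:
    """Chapman's (nearly) unbiased two-sample capture-recapture estimator.
    M: cases captured by the first source (e.g., the death registry);
    n: cases captured by the second, independent source;
    m: cases captured by both. Assumed form -- the exact formula used by
    the underreporting survey is not printed in the paper."""
    return (M + 1) * (n + 1) / (m + 1) - 1

# Hypothetical counts: 900 registry deaths, 850 survey deaths, 800 in both.
N_hat = chapman_estimate(900, 850, 800)
print(f"estimated true deaths: {N_hat:.0f}, "
      f"registry underreporting: {1 - 900 / N_hat:.1%}")
```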
Results
Descriptive analysis. In all, 11,510 breast cancer deaths occurred in the Shandong female population during the years 2011 to 2013, giving an ARMR of 8.16 per 100,000 women. Of these, 11,505 women (99.96%) had complete information, including their place of residence. For urban and rural units, the ARMR was 8.85 and 7.80 per 100,000 women, respectively. The ARMR at the county level ranged from 0.57 to 17.95 per 100,000 women. Figure 2a shows the geographic distribution of female breast cancer mortality in Shandong Province, in which green represents lower mortality than expected and red represents higher mortality than expected. In general, female breast cancer mortality displayed a decreasing trend from the eastern region to the western region, but some counties in the western region were still at high risk of mortality. High mortality rates were mainly concentrated in the eastern region and some counties of the southwestern region of Shandong, including most of the counties in Yantai, Qingdao and Weihai cities; Dingtao county and Juye county in Heze city; Tengzhou county and Shizhong district in Zaozhuang city; and Cangshan county in Linyi city. Lower mortality rates were mainly located in the western and northern areas of Shandong. For urban units, the mortalities of the eastern and southwestern regions were relatively higher than those of the other areas. The highest mortality rates were mainly located in the southwestern region of Shandong, including Dingtao county, Cao county and Shan county in Heze city, and Tengzhou county and Shizhong district in Zaozhuang city (Fig. 2b). For rural units, the regions with the highest mortality presented a dispersed distribution and were mainly located in the peripheral areas of Shandong, such as Fushan district in Yantai city, Wudi county in Binzhou city, Decheng district in Dezhou city, Dingtao county and Juye county in Heze city, Tengzhou county in Zaozhuang city, and Cangshan county in Linyi city; nevertheless, the mortality of female breast cancer gradually decreased from the eastern region to the western region (Fig. 2c).
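The county-level ARMR values just reported follow directly from the Methods definition (total deaths over the corresponding population, per 100,000 women). A minimal sketch, with a hypothetical deaths/population table:

```python
import pandas as pd

# Hypothetical county-level counts: deaths over 2011-2013 and the
# corresponding summed female population (person-years over 3 years).
df = pd.DataFrame({
    "county":     ["A", "B", "C"],
    "deaths":     [120, 45, 260],
    "population": [1_500_000, 700_000, 2_400_000],
})

# ARMR per 100,000 women: ratio of total deaths to the population.
df["armr"] = df["deaths"] / df["population"] * 100_000
print(df)
```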
Spatial scan statistics analysis. The spatial scan statistics analysis detected seven significant spatial clustering areas of breast cancer mortality in the entire province (Table 1 and Fig. 3a), suggesting that breast cancer mortality was not randomly distributed. The most likely cluster was located in the eastern region of Shandong Province, namely the Jiaodong Peninsula, and included 25 counties with a relative risk (RR) of 1.46 compared to the rest of Shandong Province. The 1st and 3rd secondary clusters were located in the southern region and included 4 counties (Tengzhou county and Shizhong district in Zaozhuang city and Luozhuang district and Cangshan county in Linyi city), with RRs of 2.01 and 1.54, respectively, compared to the other areas. The 2nd and 5th secondary clusters were located in the southwestern region and included 2 counties in Heze city (Dingtao and Juye counties), with RRs of 2.24 and 1.56, respectively. The 4th secondary cluster was located in the central region of Shandong and included 9 counties/districts (Shizhong district, Lixia district, Licheng district, Tianqiao district and Zhangqiu county in Jinan city; Boshan district and Zhoucun district in Zibo city; Laicheng district in Laiwu city; and Zouping county in Binzhou city), with an RR of 1.31. The remaining secondary cluster was located in the northern region and included only one county (Wudi county in Binzhou city), with an RR of 1.59. After controlling for age, six clusters were detected, and their spatial distribution was consistent with that before controlling for age (Table 1 and Fig. 3b). The spatial scan analysis showed four detected clusters of breast cancer mortality in the urban areas of Shandong Province (Table 1 and Fig. 3c). The most likely cluster was located in the southern region and included 3 counties with an RR of 2.32 compared to the rest of Shandong. The other clusters were located in the southwestern, eastern and central regions of Shandong compared to the remaining areas. After controlling for age, only three clusters were detected, located in the southwestern and southern regions (Table 1 and Fig. 3d). The most likely cluster was located in the southwestern region, with an RR of 2.04, and the two secondary clusters were both located in the southern region, with RRs of 2.42 and 1.73, respectively. Four clusters were detected in the rural areas of Shandong using the purely spatial scan statistical analysis (Table 1 and Fig. 3e). The most likely cluster was located in the Shandong Peninsula and included 59 counties with an RR of 1.42. The other secondary clusters were located in the southern and southwestern regions, with RRs of 1.53, 2.05 and 1.63, respectively, compared to the other areas of Shandong. After controlling for age, the spatial distribution of the clusters was consistent with that before controlling for age (Table 1 and Fig. 3f).

Table 1. Results of the spatial scan analysis of female breast cancer mortality at the county level in Shandong for the years 2011 to 2013. Note: the "most likely cluster" was defined as the one with the maximum log-likelihood ratio (LLR) with statistical significance detected by Monte Carlo simulation in the spatial analysis; the other LLR values with statistical significance were identified as "secondary clusters". The relative risk (RR) is the ratio of the mortality inside a cluster area to the mortality outside the cluster area.
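The LLR and RR values reported above come from Kulldorff's Poisson scan statistic as implemented in SaTScan. The sketch below shows the standard high-rate Poisson LLR and a Monte Carlo p-value for a single, fixed candidate window; all counts are hypothetical, and note that SaTScan maximizes the LLR over all candidate windows before ranking, which this simplified sketch does not do.

```python
import math
import random

def poisson_llr(c: int, E: float, C: int) -> float:
    """Kulldorff's Poisson log-likelihood ratio for one candidate window
    (high-rate scan). c: observed cases inside; E: expected cases inside
    (the window's population share of total cases); C: total cases."""
    if c <= E or c == 0 or c == C:
        return 0.0
    return c * math.log(c / E) + (C - c) * math.log((C - c) / (C - E))

# Hypothetical window: 300 of 11,510 deaths observed where 205 were expected.
C, c, E = 11_510, 300, 205.0
obs_llr = poisson_llr(c, E, C)
rr = (c / E) / ((C - c) / (C - E))  # relative risk inside vs outside

# Monte Carlo test: conditional on C, the null window count is
# Binomial(C, E/C); rank the observed LLR among 999 replicates.
random.seed(0)
reps = [poisson_llr(sum(random.random() < E / C for _ in range(C)), E, C)
        for _ in range(999)]
rank = 1 + sum(r >= obs_llr for r in reps)
print(f"LLR = {obs_llr:.1f}, RR = {rr:.2f}, Monte Carlo p = {rank / 1000:.3f}")
```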
Discussion This study detected, for the first time, the spatial distribution and high-risk clusters of female breast cancer mortality at the county level in the entire population of Shandong Province, China. Consistent with previous studies, the current study found significantly higher mortality in the eastern region than in the western region; this difference remained significant even when the mortality was standardized by age (data not presented). The study by Jing Han and colleagues 29 showed significant regional differences in female breast cancer mortality at the county level in Shandong, with the highest county (9.99 per 100,000) having nearly four times as high a rate as the lowest county (2.63 per 100,000). Another study in Shandong 30 showed that female breast cancer mortality was higher in the eastern region than in the western region, and also found that mortality was higher in urban areas than in rural areas. The spatial scan analysis in this study detected seven significant clusters of breast cancer mortality, mainly located in the eastern, southern, southwestern, central and northern areas of Shandong. These clusters remained significant after controlling for age, which indicated that the influence of population ageing on the clusters of breast cancer mortality was not significant. An urban-rural difference in breast cancer has been demonstrated both in China 11,30 and in other countries [31][32][33] . In the current study, this difference was also examined: two major populations (urban and rural residents) were classified based on sub-county-level units (townships/subdistricts) to explore the spatial distributions and clusters of breast cancer mortality in urban and rural areas separately. The results indicated that the geographical distribution of breast cancer mortality in urban areas was not completely concordant with that in rural areas, even though a gradual increase in mortality from the western to the eastern regions was observed in both; it was therefore necessary to identify the clusters of breast cancer mortality in urban and rural areas separately. In urban areas, four significant spatial clusters were detected by the spatial scan analysis, located in the eastern, southern, southwestern and central regions. After controlling for age, the clusters in the southwestern and southern regions were still discerned, whereas the clusters located in the eastern and central regions disappeared and a new cluster including only one county appeared in the southern region. This indicated that the clusters in the eastern and central regions might result from regional population ageing. Three clusters were found in rural areas, located in the eastern, southern and southwestern regions. When controlling for age, the results remained the same, suggesting that population ageing did not affect the clusters in rural areas. Similar to previous studies, female breast cancer mortality in this study was higher in the eastern region than in the western region 13,30 . Some previous studies have reported a positive correlation between socioeconomic level and breast cancer mortality 34,35 . In Shandong, the economy of the eastern region is more developed than that of the central and western regions, so this regional difference in breast cancer mortality might be a result of the imbalance in regional economic development in Shandong Province 8,10,34 .
There was a difference in the spatial distribution and clusters of breast cancer between urban and rural areas in this study; this difference has not been reported in previous studies. Clusters were observed in the eastern rural areas regardless of adjustment for age, whereas no clusters were observed in the eastern urban areas once age was adjusted for. In the eastern urban areas, access to high-quality health services was easier for women diagnosed with breast cancer than for women in rural areas. In addition, urban women tended to receive physical examinations more frequently, which is helpful for detecting breast cancer at an early stage 36 . Thus, women in the eastern urban areas would likely have higher survival rates after a diagnosis of breast cancer than rural women. The results of our study suggested that there were also some small clusters in the underdeveloped western and southwestern regions of Shandong. Women living in these areas may have lower survival rates because of delayed detection and poor access to high-quality health services 37 . It is also possible that cancer-related determinants underlie a high incidence of breast cancer in these areas. Moreover, an inherent reluctance due to cultural barriers and cancer fatalism among Chinese women may also hamper screening and treatment efforts, particularly in older women and those from groups with low socioeconomic status [38][39][40] . This study has several limitations. First, although the death data were adjusted for underreporting rates, we could not exclude the possibility that underreporting rates vary across counties/districts and that some counties/districts or towns/subdistricts might have more missing cases for various reasons. In addition, given the delay in population statistical data collection, we could not obtain official data from the population statistical bureau at the beginning of our study, and thus the total population numbers from 2010 to 2012, instead of from 2011 to 2013, were used. Second, we only analysed the spatial distribution of breast cancer mortality in the entire province during a short period (the average mortality was calculated from 2011 to 2013) using a purely spatial analysis. Further studies are necessary to evaluate the spatial and temporal changes in the distribution of breast cancer mortality using longitudinal data from a longer surveillance period. Third, we did not assess potential influencing factors associated with the clustering. Although we had hoped to obtain socioeconomic and environmental information from the official surveillance data, we were unable to do so; future research is necessary to identify potentially associated factors in the clusters.

Conclusion
In conclusion, this study detected the spatial distribution and clusters of female breast cancer mortality in Shandong Province, China. The results demonstrated that mortality was higher in the eastern regions than in the western regions, and an urban-rural difference in the clusters was identified: the clusters identified in the eastern rural areas were not observed in the eastern urban areas. This study may provide public health officials with necessary information about statistically significant clusters of breast cancer mortality in urban and rural regions and thus enable them to implement more effective and targeted strategies to control breast cancer in different areas.
More detailed individual-level investigations are necessary in the identified clusters to evaluate potential determinants for female breast cancer mortality.
2018-03-07T14:25:57.493Z
2017-03-07T00:00:00.000
{ "year": 2017, "sha1": "c869ee5b9f5b6b15abaa685d8d5ba04f7962dba9", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-017-00179-8.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "c869ee5b9f5b6b15abaa685d8d5ba04f7962dba9", "s2fieldsofstudy": [ "Geography", "Medicine" ], "extfieldsofstudy": [ "Geography", "Medicine" ] }
234774042
pes2o/s2orc
v3-fos-license
RETRACTED ARTICLE: Analysis of the influence of the characteristics of mountain soil and the noise in the tunnel on people: active noise control system The production area of a mountain crop benefits from land and water resources and a favourable heat coefficient, and its advantages in light and temperature are very obvious. Crops grown there develop fully, pigment formation is very good, sugar content is high, and the degree of damage from pests and diseases is relatively low. According to experts at home and abroad, this is among the best areas in the world for planting and cultivation. In this paper, a large number of samples were collected for research and experiments, and the physical, chemical and biological properties of local soils in a certain mountainous area were analyzed. The growth and development of crop quality factors in the soil and the formation of products are systematically studied. The establishment of a complete soil quality evaluation system provides guidance and a realistic theoretical basis for local crop growers. Calibration is needed before each data collection, especially for all sensors inside and outside the main tunnel, to ensure the accuracy and feasibility of the tunnel test. Sensor calibration covers the in-service sensitivity as well as the sensitivity calibrated at the factory; as time goes by, temperature, dust, and humidity exert a certain influence, which causes the sensitivity to change. With inaccurate sensitivity, the acquired acoustic signals yield inaccurate results, and ultimately the noise distribution obtained in the tunnel is also inaccurate. This experiment uses the PAI sound and vibration test and analysis system listed in the table for data collection. The acquisition process obtains analog signals from the different sensors and information on the unit under test, and finally converts the digital information into representative physical quantities.

Introduction Through analysis and research on the soil indicators and different soil layers of a typical mountain crop garden across different planting years, the following was found. (1) The soil texture in the mountain area is relatively coarse, with a sand content of more than 60%; under the influence of the buried vines, the difference between the inner and outer layers is not particularly significant; the alluvial characteristics within 2 m are obvious, the degree of soil development is not high, and it is relatively poor within a certain range (Felicisimo et al. 2012). (2) The specific content of the soil is low; with increasing age of the planted garden, soil aggregates > 0.36 mm increase and the water-stability characteristics become more obvious (Friedman et al. 2000). (3) The bulk density of the soil in each vegetation garden is relatively large, exceeding the limit and inhibiting the growth and development of roots to a large extent (Golkarian et al. 2018). (4) In the surface and subsurface parts of the soil, aeration pores and capillary pores dominate, with the subsurface layer dominated by capillary pores and the bottom layers dominated by inactive pores; although the performance of the soil is acceptable, the leakage of water and fertilizer is serious (Hamdan and Khozyem 2018). (5) The water-holding capacity of the bottom of the soil in the field is relatively high.
Although its surface water content is the highest, the coefficient of variation in the field is large, while the coefficient of variation of the saturated water content is relatively small. Even though the soil water content changes rapidly, the water-holding capacity is poor (Hamdan and Khozyem 2018). (6) The 100 acres of the corridor in the mountainous area were divided in detail by sub-area. The study found that the sandy character of the corridor's mountainous area is relatively strong. Where the sandy soil is mainly gravel, it is not conducive to the development of crop root systems; the soil in production area a is too sticky and heavy; the soil in area b consists mainly of light soil and wind-blown sand; areas c and d are mainly sandy loam, which, although well aerated, has a structure that is not conducive to stability (Issawi and Anonymous 1978). The median algorithm can effectively separate and denoise signals contaminated by impulsive noise, but with a fixed step-size factor it cannot simultaneously achieve fast convergence and a small steady-state error (Khidr 1997). Therefore, the LMS algorithm proposed in this paper is a variable step size algorithm for active control of the noise inside the car. Since the active control of in-car noise is clearly effective, the algorithm proposed in this article overcomes these inherent shortcomings and is an effective method of in-car noise control (Kim et al. 2019).

Research design
Mountain soil sampling and experimental methods
Determination of soil physical properties. Soil bulk density: the ring-knife method is used. The ring knife is inserted vertically into the soil, the surrounding soil is excavated with a shovel, and the excess soil is trimmed away with a knife. The sample is dried in an oven at 115 °C, then removed with tweezers and weighed on a balance (Klitzsch et al. 1987). Soil moisture content: measured by the immersion method. Soil field water-holding capacity: measured by the ring-knife method (Lee 2005). Soil mechanical composition: determined preliminarily by the dry-sieve method, with the wet-sieve method used to assess aggregate stability. Soil pores: calculated and converted from the infiltration method (Lee et al. 2004). Soil aggregates: water-stable aggregates were established by wet sieving following preliminary dry sieving. The distribution of meteorological stations in the mountainous area is shown in Fig. 1.

Determination of soil chemical properties. pH: potentiometric method with a pH meter (water:soil = 5:1). Total salt: measured with a DDS-11 conductivity meter. Organic matter: potassium dichromate oxidation with ferrous sulfate titration. Total nitrogen: distillation method; total phosphorus: sodium hydroxide fusion followed by molybdenum-antimony anti-colorimetry. Alkali-hydrolysable nitrogen: alkaline hydrolysis and diffusion method. Available potassium: extraction with ammonium acetate followed by flame photometry. Available calcium and magnesium were measured by atomic absorption spectrophotometry at a wavelength of 527.3 nm; available iron, manganese, copper, and zinc were likewise measured by atomic absorption, the content of metal elements in solution being determined directly by atomic spectrometry.
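The ring-knife bulk-density measurement above reduces to simple arithmetic: oven-dry mass over ring volume. A minimal sketch follows; the ring dimensions, dry mass, and the 2.65 g/cm³ particle density are hypothetical or assumed values (the paper itself derives porosity from an infiltration method instead).

```python
import math

# Hypothetical ring-knife dimensions and oven-dry soil mass.
ring_diameter_cm = 5.0
ring_height_cm = 5.0
dry_mass_g = 132.0

ring_volume_cm3 = math.pi * (ring_diameter_cm / 2) ** 2 * ring_height_cm
bulk_density = dry_mass_g / ring_volume_cm3            # g/cm^3

# Total porosity from bulk density, assuming a typical mineral particle
# density of 2.65 g/cm^3 (an assumption, not a value from the paper).
particle_density = 2.65
porosity = 1 - bulk_density / particle_density
print(f"bulk density = {bulk_density:.2f} g/cm^3, porosity = {porosity:.1%}")
```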
The principle and collection method of noise in the tunnel Before each data collection, a sensor calibration test is required to ensure the accuracy of the test results in the tunnel experiment. Sensor calibration refers to the sensitivity calibration of all sensors in the tunnel together with the sensitivity calibration of each sensor at the factory (Liu et al. 2016). As time passes, the temperature, humidity, and dust level around a sensor change, which in turn affects its sensitivity; inaccurate sensitivity leads to inaccurate acquired acoustic signals and ultimately to inaccurate tunnel noise distribution results. In this experiment, the PAI sound and vibration test and analysis system listed in the table was used for data collection (Malekzadeh et al. 2019). The purpose of the data acquisition system is to obtain the analog outputs of the various sensors and other equipment together with information on the unit under test, to process the collected information so that the signals acquire physical meaning, and finally to analyze the results (Mirzaei et al. 2020). In every data acquisition and analysis case, the system consists of front-end hardware and back-end software for data analysis and processing. Front-end data collection starts from the measuring point, at the sensor, for the different physical quantities to be measured (Moawad 2013). The information received by each sensor is converted into an analog electrical signal output according to a fixed rule. The sensor and the analog-to-digital conversion device are connected by wires; the wires carry the signal to the analog-to-digital device, where the analog signal is converted into a digital signal that the computer can recognize. Each analog-to-digital conversion device is connected to the computer through a dedicated interface, and calculations are performed by the data acquisition software. During data collection, the data are read and stored on the hard disk or other computer-connected equipment, and every device needs a power supply (Moawad et al. 2016). Because of the constraints of the tunnel conditions, each laboratory needs to arrange its own power distribution, with generating equipment supplying power on site. The above is the whole process of field data collection and storage. From the collection process we can see that it mainly consists of the power supply, data transmission, sensor and module-conversion equipment, the computer, and the associated software (Moore and Grayson 1991). In general, a tunnel is relatively long, its length considerably exceeding its width and height, and its interior is relatively smooth, so the long tunnel space can be regarded as a uniform tube of finite length. The incident wave and the reflected wave take the forms of Eqs. (1) and (2):

$$p_i(x,t) = P_i e^{j(\omega t - kx)} \qquad (1)$$

$$p_r(x,t) = P_r e^{j(\omega t + kx)} \qquad (2)$$

The reflected wave is caused by the acoustic load; it differs from the incident wave and also carries a phase difference, which can generally be expressed as Eq. (3):

$$P_r = |P_r| e^{j\theta} \qquad (3)$$

Adding Eqs. (1) and (2) gives the total sound pressure, Eq. (4):

$$p(x,t) = P_i e^{j(\omega t - kx)} + |P_r| e^{j(\omega t + kx + \theta)} \qquad (4)$$

Among them, the wavenumber is given by Eq. (5):

$$k = \omega / c \qquad (5)$$

The three-dimensional wave equation in rectangular coordinates is Eq. (6):

$$\nabla^2 p = \frac{1}{c^2} \frac{\partial^2 p}{\partial t^2} \qquad (6)$$
According to the relationship between rectangular and cylindrical coordinates, the wave equation in cylindrical coordinates is obtained as Eq. (7):

$$\frac{1}{r}\frac{\partial}{\partial r}\!\left(r\frac{\partial p}{\partial r}\right) + \frac{1}{r^2}\frac{\partial^2 p}{\partial \varphi^2} + \frac{\partial^2 p}{\partial z^2} = \frac{1}{c^2}\frac{\partial^2 p}{\partial t^2} \qquad (7)$$

According to this equation, the sound pressure in the tube can be obtained as Eq. (8):

$$p(r,\varphi,z,t) = \sum_{m,n} A_{mn} J_m(k_{r,mn} r)\cos(m\varphi)\, e^{j(\omega t - k_{z,mn} z)} \qquad (8)$$

Design of active noise control system
Median LMS algorithm structure parameter setting. This paper uses the median LMS algorithm in the simulation of active noise control for rail vehicle interior noise. The algorithm takes the discrete error signal over a period of time and uses it to update the adaptive filter weights: the error signal enters the filter update directly, and the weight vector of the filter is given by Eqs. (9) and (10):

$$W(n+1) = W(n) + \mu \cdot \mathrm{med}\{\, e(n)X(n),\, e(n-1)X(n-1),\, \ldots,\, e(n-L+1)X(n-L+1) \,\} \qquad (9)$$

$$X(n) = [x(n),\, x(n-1),\, \ldots,\, x(n-M+1)]^{T} \qquad (10)$$

Variable step size LMS algorithm structure parameter setting. This paper uses a sine-function variable step size algorithm, in which the step size is a function of the error signal:

$$\mu(n) = \beta \sin\big(\alpha\, |e(n)|\big) \qquad (11)$$

The two parameters α and β in the formula are the structural parameters of the variable step size algorithm. Within a certain range of values, the step size function controlled by these parameters affects not only the convergence speed but also the shape of the step size curve, which is close to zero at steady state. When α is fixed, the functional relationship between the step size and the error signal is shown in Fig. 2; when β is fixed, the relationship is shown in Fig. 3 (Naghibi and Moradi Dashtpagerdi 2016). For the same initial error, the larger the parameter, the faster the convergence in the initial stage; the convergence speed of the algorithm increases and the corresponding μ(n) value may be larger. If a small steady-state error is required, a relatively small parameter value should be chosen, and some care is needed in the selection: if the parameter is chosen too small, the interval of variation during the convergence of the algorithm grows, the algorithm gradually degenerates toward the fixed-step algorithm, and the steady-state error becomes larger (Naghibi et al. 2020). Similarly, it can be seen from Fig. 3 that when β is fixed, a larger parameter value should be selected if a higher convergence speed is required, and a smaller value if a small steady-state error is required; the smaller the parameter value, the smaller the interval of variation during convergence. Figures 2 and 3 show that in the initial convergence stage, the larger the error |e(n)|, the larger the corresponding μ(n), and the faster the algorithm converges (Natarajan and Sudheer 2019). When the algorithm enters the converged state, |e(n)| reaches its minimum and the corresponding μ(n) is also minimal. Therefore, compared with the fixed-step LMS algorithm, the algorithm used in this article resolves its inherent defects and is an effective improvement. In terms of parameter selection, this article adopts two approaches, fixed values and optimized values (Nourani et al. 2014). The ranges of the values adopted by the algorithm and the optimization results are shown in Fig. 4.
As shown in Fig. 4, the optimal values of the two parameters are 1, respectively; at present, however, the selection of these optimal values lacks theoretical support and can only be determined through experiments. The article also gives two parameter adjustment principles: to obtain faster convergence and tracking speed, the parameter values should be increased; to obtain a smaller steady-state error, the parameter values should be decreased. However, as the two parameter values become smaller, the variable step size algorithm regresses to the fixed-step algorithm.

Measurement results of soil physical properties
It can be seen from Table 1 that the soil of the mountain crop planting area consists mainly of sand grains, followed by silt and clay particles. The study found that texture varies between layers: the deeper the soil layer, the lower the sand content and the higher the clay content, and the silt content of the bottom layer is obviously higher than that of the other two layers (Pakparvar et al. 2018). For the mountain crop soil, excluding the surface layer, the sand content ranges between 50 and 70%, the silt content between 15 and 30%, and the clay content between 20 and 40%. As soil depth increases, the sand content decreases while the clay content increases; the silt content at the bottom of the soil is significantly higher than in the other two parts, and the coefficients of variation also differ between layers (Paraskevas et al. 2015). The aggregate types in the soil vary greatly as well. It can be seen from Table 2 that the overall aggregate content of the plantation soil of the mountain crop is relatively low; the content of stable soil aggregates 0.35-0.6 mm in diameter is relatively large, and their dominance is smaller than in other soils. The content of water-stable soil aggregates within 50 cm is higher than in the bottom soil. In terms of the percentage of soil aggregates at each level, the proportions of aggregates > 6 mm are 43.54% and 62.71%, and the proportions of the 6-2 mm, 2-1 mm and other grades are 6.02%, 7.06%, and 4.28%, respectively. The aggregate content of the surface layer is relatively high, and the aggregate content of the upper soil layer is the highest, exceeding 70%. The soil particles basically exist in a dispersed state, and the stability and erosion resistance of the soil are relatively weak. Table 3 shows that the soil bulk density of the crop plantation in the mountainous area is relatively large, and its range of variation is limited; the bulk density of the surface soil does not exceed 2 g per cubic centimeter. Loose soil is suitable for cultivation in an environment favourable to the growth of crop roots. The deeper the layer, the greater the bulk density; when the soil reaches a certain depth, values in some parts are as high as 1.73 g per cubic centimeter. Excessive bulk density inhibits deep root penetration and growth. From the coefficients of variation it can be seen that the soil bulk density in the study area increases overall, the coefficient ranging from 8.3 to 14.6%. The minimum measured value is 2.06 g and the maximum 3.68 g, referring to the surface layer.
The difference between the two subsurface layers is not obvious, and both are lower than the bulk density of the bottom layer.

The terrain of the mountainous area is complex and diverse. According to the comparative standard of Chinese geomorphological mapping, the terrain is divided into five grades: flat, slightly undulating, small undulating, medium undulating, and large undulating. The mountainous terrain is dominated by small undulations (70–200 m), accounting for 62.87%; slightly undulating terrain (30–70 m) accounts for 17.59%; moderately undulating terrain (200–500 m) accounts for 11.51% and is mainly distributed along the western edge and in the northeastern and northwestern parts of Shaanxi; flat areas account for 8.56% and are concentrated in the eastern part of the mountainous area.

Surface roughness refers to the ratio of the true surface area of the ground to its projected area within a given region, an index that reflects the surface morphology. The roughness of the soil surface is an important component of soil hydrological characteristics. Roughness has a certain effect on soil infiltration, and it plays a dual role in the erosion process, both enhancing and reducing it; it can change the physical and chemical properties of the soil and the nature of runoff to a certain extent, for example by reducing the shear force of runoff. Rainfall characteristics such as duration indirectly affect the occurrence and rate of erosion, as shown in Fig. 6.

Results of determination of soil chemical properties

Soil organic matter is the most important source of nutrients in the soil. It improves the physical structure of the soil, promotes the growth and development of plant roots, and, by increasing microbial activity, improves soil fertility and buffering capacity to a certain extent and activates mineral elements. The content characteristics and frequency distribution of organic matter in the soil of the mountain crop cultivation area are shown in Fig. 7. The small range and mean indicate that the crop growers attach great importance to improving the organic level of the soil, since adding organic fertilizer raises the organic-matter level. Figure 7 clearly reflects the overall distribution of organic matter in the plantation: the distribution is skewed to the left, concentrated around 3.65 g/kg, indicating that the fertility level is relatively low and a large amount of organic matter needs to be added.

The nitrogen content in the soil is affected by various factors such as nitrogen-fertilizer input, soil fertility, and nitrogen uptake by crops. Table 4 shows that the total nitrogen content in the absorption-root zone of surface crops is the lowest, and contents differ considerably between layers. If the nitrogen content in the early stage is limited and the nitrogen-supply capacity relatively weak, nitrogen consumption in the later stage will be higher, and the annual artificial nitrogen input can be kept at the level consumed each year. The difference in the total nitrogen content of the vegetation between years is not significant.
For the vegetation as a whole, the total nitrogen content is not only low but also within the deficient grades; to ensure the healthy development of crops, nitrogen fertilizer must be supplemented in an orderly manner.

The phosphorus in the soil of this area derives mainly from the local phosphorus-rich limestone and is controlled by the parent material, so soil management after cultivation has a relatively small impact on total phosphorus. If plant roots absorb the available phosphorus fraction, part of it can be fixed; therefore, the longer the vegetation has been planted, the higher the total phosphorus content. Table 5 shows that the total phosphorus content of surface soil with a planting period of 9 years has increased significantly, and planting depth makes no significant difference to total phosphorus among the soils. With increasing years the phosphorus content has risen, though not markedly; the highest available-phosphorus content of vegetation gardens planted within 3 years is only 8.06 mg/kg, significantly lower than that of the 6- and 9-year gardens. Fertilization can supplement a large amount of phosphorus to a certain extent, while the roots also consume a large amount of phosphorus; for the 6-year planting period, input and consumption remained basically balanced.

The potassium content in the soil changes with the nutrient uptake of surface plants. As shown in Table 6, the available potassium content increases with planting time. The vegetation comprises potassium-loving plants, and the crops' demand for potassium increases after fruit expansion. With the continuous supplement of fertilizer, the potassium content in the soil keeps increasing, but the contents of the surface and subsurface layers differ, the surface layer being significantly higher. Soil available potassium does not differ significantly among years.

Analysis of noise signal in tunnel

When vehicles run on the track, noise is generated by the car body and reflected within the narrow tunnel, so the noise inside the car keeps increasing as sound outside the car is projected into it. The figure below shows the in-car measuring points under three working conditions of different vehicles on the track: acceleration, uniform speed, and deceleration. To understand intuitively the noise characteristics of the measuring points under the different tunnel working conditions, the corresponding curves are drawn. Fig. 8 shows that the weights of the measuring points differ among the operating conditions in each tunnel, and the sound pressure levels are all above 85 dB. The main reasons are that the joints of the cars, although sealed, have poor sound-insulation and sound-absorption performance, which affects the noise in the cabin; another reason is that the measuring points under different road conditions differ and their values are unequal. When a train runs at a constant speed, the vehicle speed reaches its maximum during operation.
At such times the acceleration and deceleration are also relatively large and reach their highest values, so a relatively large noise problem occurs. Frequency is one of the important parameters describing the characteristics of sound. This paper analyzes the octave-band spectra of the noise at eight measuring points in different positions under each working condition, as shown in Figs. 9, 10, 11, and 12.

Analysis of the results of active noise control in the car

In this paper, measuring point 1 was selected on the elevated bridge with the different vehicles traveling at a constant speed; the point is perpendicular to the floor at a height of 2.1 m. The noise sample at this point is the object of the main control algorithm, and each sample was recorded at a sampling frequency of 31,250 Hz. The 5 s intercepted in the figure serves as the main noise-cancellation object, and Figures 13 and 14 show the time-domain waveforms of the original noise. For active noise control with the LMS algorithm, the sample is from the same measuring point 1, with the track vehicle driving at a constant speed and the measurement height inside the carriage above the elevated bridge again 2.1 m; the sampling frequency was set to 21,105 Hz. With 5 s intercepted as the main noise-cancellation object, the LMS algorithm and the variable step-size algorithm proposed in this article were used for active control of the noise inside the rail vehicle. The filter order of the variable step-size algorithm reached M = 11, and the active noise-reduction results are shown in Fig. 14. The curves show that, although the LMS algorithm does reach convergence, it converges at the 65th sampling point at the earliest, and its curve is not smooth. The variable step-size algorithm proposed in this paper converges faster than the LMS algorithm: its curve enters the overall convergence state at the 27th sampling point and is very smooth. Figure 14 also shows that the steady-state error of the proposed variable step-size algorithm is obviously smaller than that of the LMS algorithm. In the LMS algorithm the error signal fluctuates greatly during convergence and the steady-state error at the sampling points gradually increases, while the error of the variable step-size algorithm remains between the two and grows only gradually over time.
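As a concrete illustration of the interception step described above, the following sketch slices a 5 s segment out of a recording sampled at 31,250 Hz; the array names and the synthetic recording are illustrative assumptions, not data from the experiment.

```python
import numpy as np

fs = 31_250                               # sampling frequency at measuring point 1 (Hz)
seconds = 5                               # length of the noise-cancellation object (s)

rng = np.random.default_rng(0)
recording = rng.standard_normal(60 * fs)  # stand-in for a 60 s noise recording

start = 10 * fs                           # e.g. begin the segment 10 s into the recording
segment = recording[start:start + seconds * fs]
assert segment.size == seconds * fs       # 156,250 samples for 5 s at 31,250 Hz

# `segment` could then be passed to median_lms() from the earlier sketch
# as the input/desired signal for active noise cancellation.
```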
Discussion on the physical properties of mountain soil

The soil in the crop production area of the mountainous region is mainly lime-calcium soil, including silt soil, aeolian sandy soil, and gravelly sandy soil. The aeolian-sand and lime-calcium soil content in the soil-layer area is relatively low, and the sandiness of the surface soil is rather serious: the sand content is higher than the other fractions, the clay and organic-matter contents of the soil are low, and the bulk density is relatively large. These conditions are not conducive to the development of vegetation roots, but they offer some important advantages for the production of raw materials. In the parts of the area where the clay content is relatively high, the texture is mostly medium loam and the bulk density relatively low; the water-holding capacity of the soil is relatively strong and conducive to the growth and development of the root system, although this alone does not provide the conditions for producing high-quality crops. Apart from the pockets of silt soil, the soil of the mountainous area has a high sand content, above 60%; the number of aggregates in this area is relatively small, and the proportion of water-stable aggregates in particular is small. Planting and tillage are conducive to the formation of aggregate structure in this area, and the sandy-soil content is larger than the others; the measured quality of each soil exceeds the average. In terms of fertilization, deep application of organic fertilizer reduces the bulk density, and aeration pores and capillary pores become dominant in the main cultivated layer. The number of active pores in the accumulation area is relatively high, and where the calcium accumulation layer has been removed the water-retention performance of the soil is relatively poor. The main profile type has clay in the upper and lower layers; the saturated water content of the main soil layer is relatively high, the field water capacity of the lower part relatively low, the main rooting depth of the crops about 40 cm, and the spatial coefficients of variation of the different soil physical properties comparatively high. The deep-furrow shallow-planting method solves the problems of straw and the surface layer in the soil, improves the physical properties of the soil, and lays an important foundation for an excellent planting garden.

Discussion on the chemical properties of mountain soil

The soil of this mountainous area is a calcareous soil. Its pH value exceeds 8.9, so it is alkaline; at its most alkaline it reaches 9.51, strongly alkaline, which can inhibit the normal growth and development of crop roots and the effective supply of some metal elements and nutrients such as calcium, magnesium, manganese, copper, and zinc. Restricted by the flooded landform and weak wind, the salt content in the soil has reached about 0.5 g/kg, all chlorides and sulfates; the main salts are not sufficient to damage crop growth. The organic-matter content of the soil is low and very scarce: the maximum content is below the lowest grade (grade six), far from sufficient for the highest-quality and stable production requirements. With increasing depth the organic matter decreases continuously, which seriously affects crop growth and the improvement of the soil, so the intensity of organic fertilization needs to be increased. The total nitrogen and alkali-hydrolyzable nitrogen in the soil should be supplemented with nitrogen fertilizer according to the yield requirements, but the available phosphorus content is relatively low overall; under alkaline conditions the fixation of phosphorus is also more serious and its effectiveness relatively low. With the increase of planting years it is necessary to supplement a certain amount of phosphorus and potassium in the soil, which is conducive to the development and metabolism of the root system.
The available potassium content of the mountainous soil is relatively low, and unbalanced application will make the nutrient ratio in the soil inconsistent. The effective calcium and magnesium contents in the mountain botanical garden are relatively rich, providing extremely high-quality raw material for crop planting. Although the total content of some trace elements in the soil is relatively high, their effectiveness is relatively poor; the contents of water-soluble elements are generally low, with zinc and iron the most deficient, which severely restricts the growth and development of crops. The construction of a personalized winery in this mountainous area, and the requirement for high-quality vegetation wines, mean that high yield is not the main goal. Therefore, straw is treated harmlessly through fermentation by aerobic and anaerobic bacteria, which promotes the formation and stability of soil structure, reduces the pH value of the soil, activates the nutrient elements in the soil, promotes the growth and development of vegetation roots, and reduces the application of chemical nitrogen and phosphorus fertilizers. This is the preferred measure for improving the soil chemical quality of this mountain production area.

Construction of comprehensive evaluation indexes for mountain soil quality

Provided the quality of the planted crop varieties is guaranteed, increasing the planting yield as much as possible is the key issue and main technical direction for the planting production area of this mountainous region. The yield and quality of the vegetation are related to the physical, chemical, and biological properties of the soil, so the evaluation considers not only growth and yield but also quality. Relevance analysis gives the order of the effects in the comprehensive evaluation of soil quality: soil sand content (0.952) > available potassium (…). From the above data, the two test methods are basically consistent, and the results of the comprehensive quality evaluation are also basically the same. Considering representativeness and economy from multiple aspects and integrating the various information, 24 indicators can be examined; the aim is to find the soil parameters carrying the most information while discarding those carrying the least. The physical indicators comprise texture, field water-holding capacity, and bulk density, and the chemical indicators comprise seven items, including available potassium, available phosphorus, organic matter, and catalase activity. This comprehensive evaluation index system can effectively reflect the soil quality of the mountain crop plantation and further reflect the close connection between soil quality and crop products. It can also help maintain and improve soil quality and support the management and sustainable utilization of local land resources, effectively promoting the advantages of the mountain crop planting area and the sustainable development of the industry in the production area.
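The relevance ordering above can be reproduced with a simple correlation ranking. The sketch below is an assumed illustration: the indicator names, the synthetic data, and the use of Pearson correlation as the relevance measure are our choices, since the paper does not specify its relevance method.

```python
import numpy as np
import pandas as pd

# Hypothetical indicator table: rows are sampling sites, columns are soil
# indicators plus an overall soil-quality index (all values synthetic).
rng = np.random.default_rng(1)
df = pd.DataFrame(
    rng.random((30, 5)),
    columns=["sand", "available_K", "available_P", "organic_matter", "quality"],
)

# Rank indicators by the absolute correlation of each with the quality index,
# analogous to the ordering reported (e.g. sand content first at 0.952).
relevance = df.drop(columns="quality").corrwith(df["quality"]).abs()
print(relevance.sort_values(ascending=False))
```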
Conclusion

The crop production area of this mountainous region is rich in land resources and has obvious advantages in water-heat coefficient and light and temperature. The crop gardens are relatively fully developed: pigment formation is good, the sugar content is relatively high, the pH is moderate, and pests and diseases are not serious. Experts have identified it as one of the best ecological cultivation areas in the world. In this paper, a large number of experiments were collected to analyze the physical, chemical, and biological properties of the soil in the mountain plantation area. The relationship between soil quality and the growth and development of crops was studied systematically, and the soil-quality factors bear a definite relationship to the formation of crop yield and quality. A system of soil evaluation indicators was established and improved, providing a realistic theoretical basis and practical guidance for the sustainable development of the planting industry of the mountain crop.

In the tunnel experiment, to ensure the accuracy of the test results, a sensor calibration experiment, that is, a sensitivity test, is required before data collection. The sensors inside and outside the tunnel must be calibrated even though they are calibrated before leaving the factory, because changes in temperature, humidity, and dust can change the sensitivity to a considerable extent; inaccurate sensitivity leads to inaccurate acoustic-signal results and ultimately to inaccurate tunnel-noise distribution results. This test used the PAl sound and vibration test and analysis system for data collection; the system digitizes and tests the signals from the various sensors and other equipment under test, and the collected information is processed to obtain physically meaningful signals.

In the analysis of the regional data of the mountain plantation area, the differences among the typical crop plantations, soil levels, and planting levels can be obtained. Analyzing the planting years against the corresponding indicators shows that the soil layer is relatively thick, the sand content is higher than 50% and is affected by the winter climate, the difference between the surface and subsurface layers is very significant, the parent-material characteristics at a depth of about 2 m are obvious, and the influence of crop planting and development is not obvious. In general, the aggregate content is relatively small; with the increase of planting time, the proportion of aggregates larger than 0.45 mm in the vegetation-garden soil increases markedly compared with before. The soil bulk density of the vegetation garden is relatively large, averaging 2.36 g/cm³ or more, but if the bulk density is too large, the growth and development of the root system will be inhibited and restricted.

Declarations

Conflict of interest The authors declare that they have no competing interests.
Ventilator Dependence Risk Score for the Prediction of Prolonged Mechanical Ventilation in Patients Who Survive Sepsis/Septic Shock with Respiratory Failure

We intended to develop a scoring system to predict mechanical-ventilator dependence in patients who survive sepsis/septic shock with respiratory failure. This study evaluated 251 adult patients in medical intensive care units (ICUs) between August 2013 and October 2015 who had survived for over 21 days and received aggressive treatment. The risk factors for ventilator dependence were determined, and we then constructed a ventilator dependence (VD) risk score using the identified risk factors. The score was calculated as the sum of four variables whose points were adjusted in proportion to their beta coefficients: a history of previous stroke, one point; a platelet count of less than 150,000/μL, one point; a pH value of less than 7.35, two points; and a fraction of inspired oxygen on admission day 7 over 39%, two points. The area under the curve in the derivation group was 0.725 (p < 0.001). We then applied the VD risk score to 175 patients for validation; the area under the curve in the validation group was 0.658 (p = 0.001). The VD risk score can thus be applied to predict prolonged mechanical ventilation in patients who survive sepsis/septic shock. Prolonged mechanical ventilation imposes a financial burden on patients, families, and the healthcare facility, and it decreases the patient's quality of life. We therefore sought to determine the risk factors for ventilator dependence in patients who survive sepsis and septic shock.

Results

Patient characteristics and findings. A total of 379 patients with sepsis or septic shock and acute respiratory failure requiring mechanical ventilation were admitted to the medical intensive care unit of Kaohsiung Chang Gung Memorial Hospital from August 2013 to October 2015. Of the 379 patients, 251 were enrolled in the study (Fig. 1). For validation, we collected data on sepsis/septic shock patients on mechanical ventilation admitted to the medical ICU from November 2015 to November 2016, of whom 175 were included (Fig. 2). Among the 251 patients studied, 69.3% were admitted from the emergency room, and the site of suspected infection was of pulmonary origin in 60.6% (Table 1). Seventy-two patients (29%) were ventilator-dependent (on the ventilator for at least 21 days) and 179 (71%) were ventilator-independent (needing ventilator support for less than 21 days). Ventilator-dependent patients were older (mean (standard deviation, SD) age 69.40 (15.65) vs. 64.59 (15.67) years, p < 0.05) (Table 2). No statistical differences were observed in the APACHE II score (p = 0.664) or the initial SOFA score (p = 0.184) between the ventilator-dependent and ventilator-independent groups. However, there was a significant difference between the two groups regarding previous stroke (p = 0.027 in the univariate analysis) (Table 2). Hematology, biochemistry, FiO2, and PaO2/FiO2 values were collected for further analysis (Table 2). The platelet count on admission day 7 was lower in the ventilator-dependent group than in the ventilator-independent group.

Ventilator dependence risk score. We constructed a ventilator dependence risk score using individual risk factors, which were first identified from the univariate analysis.
Risk factors on admission days 1, 3, and 7 in the univariate analysis included white blood cell count, hemoglobin, platelet count, prothrombin time, INR, AST, ALT, BUN, creatinine, Na, K, C-reactive protein, albumin, lactate, procalcitonin, pH, PaCO2, bicarbonate, FiO2, and PaO2/FiO2. Variables on admission days 1, 3, and 7 that were possibly associated with ventilator dependence in the univariate analysis (p < 0.1) were included in a multivariate analysis model. A total of 11 variables entered the model: age > 69 years, FiO2 > 39%, history of stroke, pH < 7.35, platelet count < 150,000/μL, history of coronary artery disease, hemoglobin, BUN, C-reactive protein, PaO2/FiO2, and PaCO2 on admission day 7. After stepwise selection, the independent factors associated with ventilator dependence were identified to build a score. A clinical score (VD risk score) was calculated from the four variables independently associated with ventilator dependence in the multivariate analysis: previous stroke, thrombocytopenia, acidosis, and higher FiO2 (Table 3). The VD risk score was calculated as the sum of these four variables after adjustment in proportion to the beta coefficients: we assigned a history of stroke one point, a platelet count on admission day 7 of less than 150,000/μL one point, a pH value on admission day 7 of less than 7.35 two points, and a fraction of inspired oxygen on admission day 7 over 39% two points (Table 4). Receiver operating characteristic curves are plotted in Fig. 3. The area under the curve (AUC) of the VD risk score was 0.725 (p < 0.001); a score of one point or more yielded 80.5% sensitivity and 50.2% specificity.

SOFA score for ventilator dependence prediction. We tested the Sequential Organ Failure Assessment (SOFA) score 19 on admission days 1 and 7 for predicting ventilator dependence in sepsis and septic shock patients with respiratory failure. The AUC of the SOFA score was 0.441 on admission day 1 and 0.662 on admission day 7. We also found that the SOFA PaO2/FiO2 subscore and GCS subscore on admission day 7 could help predict ventilator dependence, with significant differences in the univariate analysis (Table 5). The AUC of the PaO2/FiO2 subscore on admission day 7 was 0.668 and that of the GCS subscore was 0.673 (Fig. 3).

Validation of the ventilator dependence risk score. For validation, we collected data on sepsis/septic shock patients on mechanical ventilation admitted to the medical ICU from November 2015 to November 2016 and used the VD risk score to predict ventilator dependence in these 175 patients. Patient characteristics and underlying diseases are shown in Table 6. The AUC of the VD risk score was 0.658 (p = 0.001) (Fig. 4); a score of one point or more yielded 69.0% sensitivity and 53.0% specificity. The AUC of the VD risk score was 0.745 in the cancer subgroup (p = 0.009) and 0.723 in the chronic kidney disease subgroup (p = 0.009) (Fig. 5).
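The scoring rule above is simple enough to express directly in code. The following Python sketch mirrors the published point assignments; the function name and argument conventions (FiO2 given as a fraction) are our own choices.

```python
def vd_risk_score(prior_stroke: bool, platelets_d7: float,
                  ph_d7: float, fio2_d7: float) -> int:
    """Ventilator dependence (VD) risk score from the four predictors:
    stroke history = 1 point, day-7 platelets < 150,000/uL = 1 point,
    day-7 pH < 7.35 = 2 points, day-7 FiO2 > 39% = 2 points (range 0-6)."""
    score = 0
    score += 1 if prior_stroke else 0
    score += 1 if platelets_d7 < 150_000 else 0   # platelets per uL
    score += 2 if ph_d7 < 7.35 else 0             # arterial pH on day 7
    score += 2 if fio2_d7 > 0.39 else 0           # FiO2 as a fraction
    return score

# Example: stroke history, platelets 120,000/uL, pH 7.30, FiO2 45%
# gives 1 + 1 + 2 + 2 = 6 points; in the derivation cohort a score >= 1
# flagged risk with 80.5% sensitivity and 50.2% specificity.
print(vd_risk_score(True, 120_000, 7.30, 0.45))   # -> 6
```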
Discussion

In this study, we identified risk factors for prolonged mechanical ventilation in patients who survived sepsis and septic shock: a history of stroke, together with data collected on day 7 (thrombocytopenia, acidosis, and a higher fraction of inspired oxygen). The ventilator dependence risk score can thus help predict prolonged mechanical ventilation easily. We chose biochemical and physiological variables from day 7 to incorporate into our score, as opposed to day 1 or day 21, each of which would have advantages and disadvantages. For example, it is too late to predict ventilator dependency using day 21 data; on the other hand, with multiple factors and differing treatment responses, it is difficult to predict ventilator dependency from day 1 data. With aggressive treatment in the first week, day 7 data can help identify which patients face a substantial risk of becoming long-term ventilator-dependent.

Patients who have suffered a stroke in the past often have respiratory dysfunction due to respiratory drive impairment. According to a previous study, respiratory function depends on numerous neurologic structures extending from the cerebral cortex to the medulla, and complications after an injury to the respiratory center can lead to prolonged mechanical ventilation 20,21. Therefore, a previous stroke is an independent risk factor for predicting prolonged ventilator use.

Sepsis is a life-threatening organ dysfunction caused by a disproportionate host response to infection and involves complex mechanisms 22. During sepsis, platelet numbers decrease due to increased platelet destruction. Sepsis may result in hypercoagulation due to fibrin deposition and platelet activation, leading to the formation of micro-thrombi as a host defense mechanism against pathogens, in which platelets play a crucial role. In extreme situations, this may progress to disseminated intravascular coagulation (DIC), with severe thrombocytopenia and coagulation system impairment [23][24][25]. Platelet dysfunction during sepsis correlates with a poorer prognosis; thus, the morphology, number, and function of platelets may be used as biomarkers for the risk stratification of patients with sepsis 25. Although we excluded very ill patients with decreased platelet counts who expired within 21 days (in our series, average 152×10³/μL), the platelet count on day 7 could differentiate the ventilator-dependent and -independent groups at day 21. A relatively low platelet count on admission day 7 suggests that a septic patient has not completely recovered and may carry a greater risk of ventilator dependence. Although the hemoglobin level was significantly lower in the ventilator-dependent group, it is hard to argue that bleeding caused by thrombocytopenia caused the weaning failure, as the hemoglobin level in both groups was greater than 10 g/dl.

Acidosis is increased acidity (hydrogen ion concentration) in the blood and other body tissues; it occurs when the arterial pH falls below 7.35. Sepsis can cause tissue hypoperfusion and the accumulation of lactate, which causes metabolic acidosis 26. Acidosis resolution in survivors was attributable to a decrease in strong ion gap and lactate levels 26. Additionally, respiratory acidosis can result from the accumulation of carbon dioxide in the lungs, which indicates poor lung function 27. Our data revealed that arterial blood gas acidosis on day 7 was one of the independent risk factors predicting ventilator dependence. We did not find statistically significant between-group differences in lactate levels or trends in vasopressor use in ventilator-dependent patients.
Acidosis could also be non-anion-gap metabolic acidosis from hyperchloremia and fluid overload. In addition, either sepsis progression or poor lung function may have caused the resulting acidosis.

Fraction of inspired oxygen (FiO2) is the fraction or percentage of oxygen in the volume being measured and represents the percentage of oxygen participating in gas exchange. According to a study by Diniz et al., FiO2 levels sufficient to ensure an SpO2 ≥ 92% do not alter breathing patterns or trigger clinical changes in weaning patients 28; the FiO2 level was therefore sufficient to represent the oxygen status of the ventilated patient. Our data revealed that a higher fraction of inspired oxygen demand was associated with a greater risk of ventilator dependence in patients with sepsis or septic shock.

Applying the ventilator dependence risk score to predict prolonged ventilator dependence can help us communicate with the family, enable quick adjustment of the treatment strategy, and ensure more efficient allocation of medical resources. In addition, it is clinically applicable. The score includes two components: one is uncorrectable, such as previous stroke history; the other is correctable if treatment is successful, such as thrombocytopenia, acidosis, and fraction of inspired oxygen. We do not suggest correcting thrombocytopenia and acidosis by blood transfusion and bicarbonate use, as platelet transfusion and bicarbonate infusion carry inherent risks. However, the clinical physician should make every effort to correct underlying progressive sepsis to avoid prolonged ventilator use. We do not routinely use subcutaneous heparin for prophylaxis of deep vein thrombosis or pulmonary embolism in Taiwan; therefore, we seldom see heparin-induced thrombocytopenia. In our study group, no patient had sepsis and pulmonary embolism concurrently, although the possibility should be kept in mind.

As some components of our ventilator dependency risk score are similar to SOFA values, we tested the SOFA score for ventilator dependence prediction. The area under the curve (AUC) of the ventilator dependence risk score (0.725) was better than that of the SOFA score on admission days 1 and 7. Two components of the SOFA score (the pulmonary subscore, PaO2/FiO2, and the GCS subscore) on admission day 7 were significant for predicting ventilator dependence in the univariate analysis (p < 0.001); despite these findings, their AUCs were not better than that of the ventilator dependence risk score (Fig. 3). In fact, we have previously described an immune dysfunction scoring system for predicting 28-day mortality in septic patients, with better discrimination than the SOFA score; this system was valid and reproducible. Those cases came from part of the current sepsis cohort, who agreed to immune function assessment 29. In the present study, however, we focused on ventilator dependency among patients who survive sepsis for more than 21 days. Combining these two tools, we can predict both long-term ventilator dependence and survival.

The AUC of the ventilator dependence risk score was 0.725 in our study group and 0.658 in the validation group. After further analysis of the validation group, we found that the AUC was 0.745 for the sepsis-with-cancer group and 0.723 for the sepsis-with-chronic-kidney-disease group.
We are actively studying the effect of comorbidity on the outcomes of patients with sepsis, although it is beyond the scope of this study. Our previous study revealed that, among patients admitted to the ICU with sepsis, those with underlying active cancer had higher baseline levels of plasma IL-10, a higher trend of G-CSF, and a higher mortality rate than those without active cancer 30. Our ventilator dependence risk score could help predict who will need prolonged mechanical ventilation. We did not exclude patients with tuberculosis or severe immunosuppression (human immunodeficiency virus (HIV) infection, oncologic disease, or solid-organ or bone marrow transplantation), so our score can also be used for these patients.

Septic patients admitted to the hospital or the intensive care unit are usually screened for colonization with multi-resistant bacteria and subjected to collection of blood cultures and respiratory secretions. As in our previous study 31, multi-resistant bacteria and specific pathogens influence survival in patients with ventilator-associated pneumonia; the phenomenon was not shown for ventilator dependency 14. Most of our patients came from the ER (69.3%), and most of our blood cultures showed no growth. We suggest that multi-resistant bacteria may not influence the prediction of prolonged mechanical ventilation, although further study may be needed to determine the effect. Renal replacement therapy could be a risk factor; however, it showed no statistical significance in the univariate analysis, and the SOFA renal subscores did not differ between ventilator-dependent and -independent patients. Therefore, renal replacement therapy was not used in the scoring system.

In 2011, Sellares et al. 32 described that COPD, increased heart rate, and PaCO2 during the spontaneous breathing trial independently predicted prolonged weaning. However, our study group had a small proportion of COPD patients (9.7% in the ventilator-dependent group and 12.8% in the ventilator-independent group) (Table 2). In addition, we did not routinely record heart rate and PaCO2 during the spontaneous breathing trial; therefore, these variables were not included in our scoring model. Extubation failure before day 7 might be an additional prognostic parameter for ventilator dependence, but no extubation failure before day 7 was noted in our study population.

The limitations of the study include the retrospective design and possible selection bias. However, first, we used prospectively collected data and screened consecutive patients. Second, we excluded patients who died within 21 days, which may have masked some predictors associated with both mortality and ventilator dependence; mortality prediction, however, was beyond the scope of this study. The application of the score focuses on patients who survived sepsis/septic shock with acute respiratory failure to admission day 7. This patient group has not completely recovered and needs further treatment and strategic decision-making. From our results, the data available on day 7 suffice to calculate the score, which makes it feasible for predicting ventilator dependence. Patients require mechanical ventilation due to either pulmonary or neurological problems; in patients with sepsis, both components may coexist, and it is difficult to delineate what proportion of prolonged mechanical ventilation is attributable to each.
We did not incorporate any data on the patients' pulmonary mechanics or respiratory muscle strength (respiratory system compliance or resistance, maximal exhaled tidal volume, negative inspiratory force, rapid shallow breathing index) that are typically studied during weaning from mechanical ventilation 33. This is partly because of missing data owing to the retrospective character of the study, which makes such variables difficult to analyze. Most importantly, obtaining parameters such as static compliance requires an additional procedure, such as paralysis with a muscle relaxant, which may add risk for patients with unstable severe sepsis. For easy application to patients with sepsis and septic shock, we chose to incorporate data easily obtained in clinical practice.

It is now well known that sepsis and multi-organ failure can cause neurological dysfunction by way of critical illness neuropathy and myopathy (i.e., ICU-acquired weakness), which can hinder weaning from mechanical ventilation through diaphragmatic weakness. Sepsis and multiple organ dysfunction are the most common and well-accepted risk factors for ICU-acquired weakness. Other risk factors, such as ARDS, neuromuscular blockade, glucose control, and steroid use, are missing from the analysis because of the retrospective design; these entities deserve attention. The diagnosis of ICU-acquired weakness is often clinical with EMG support, which is not routinely conducted in clinical practice. With respect to neurological function, we noted a significant between-group difference in the history of prior stroke. We did not have complete data differentiating hemorrhagic from ischemic strokes, and functional status and delirium data were also lacking. We attempted to use the GCS (the required data are already present within the APACHE and SOFA scores), but the results showed poor discrimination. These issues need to be explored further in the future.

A valuable tool to predict which septic patients will need prolonged mechanical ventilation may have not only therapeutic ramifications but also significant financial and social implications. As shown in Table 1, patients requiring long-term mechanical ventilation have significantly longer ICU stays and higher in-hospital mortality. This is primarily due to medical acuity; in part, however, it is also due to a paucity of ventilator weaning facilities. Patients who require long-term mechanical ventilation are often difficult to place, leading to longer hospital stays than expected for their given illness.

We did not address the diagnosis of ARDS in this study; the PaO2/FiO2 ratios were comparable between the two groups. In the same period, our colleagues participated in a multi-center study showing the effects of ARDS and fluid balance on outcomes. Over-resuscitation leads to fluid overload, pulmonary edema, and hypoxia, which may influence ventilator dependence. We found that a negative day 1-4 cumulative fluid balance was associated with a lower mortality rate in critically ill patients with influenza 34. We are now exploring whether cumulative fluid balance predicts ventilator dependency, and an association between over-resuscitation and ventilator dependence needs further evaluation in the future.

The ventilator dependence risk score, comprising a history of stroke and data from day 7 (thrombocytopenia, acidosis, and a higher fraction of inspired oxygen), can be applied to predict prolonged mechanical ventilation in patients who survive sepsis and septic shock.
Materials and Methods

Setting and study design. This retrospective study was conducted in three medical ICUs comprising 34 beds at Kaohsiung Chang Gung Memorial Hospital, a 2,700-bed tertiary teaching hospital in southern Taiwan. Consecutive adult patients (aged ≥ 18 years) with acute respiratory failure on admission to the medical ICU with sepsis/septic shock were surveyed from August 2013 to October 2015 through chart review. We excluded patients who died within 21 days, those whose families requested palliative treatment before day 21, and those who were already long-term ventilator-dependent. The enrolled patients were divided into two groups, ventilator-independent and ventilator-dependent, according to their ventilator status on day 21 as determined from the chart review (Fig. 1). We also collected data on sepsis/septic shock patients with respiratory failure admitted to the medical ICU from November 2015 to November 2016 as the validation group (Fig. 2). The study was approved by the Institutional Review Board (IRB) of Chang Gung Memorial Hospital, and the requirement for informed consent was waived by the IRB (105-6824C). We confirm that all methods were performed in accordance with the relevant guidelines and regulations.

Definitions. Long-term ventilator dependence was defined as the need for mechanical ventilation for more than six hours per day for more than 21 days 16. Sepsis was defined as a life-threatening organ dysfunction due to a disproportionate host response to infection 35. Patients with septic shock were identified by a vasopressor requirement to maintain a mean arterial pressure of > 65 mmHg in the clinical setting 36,37. All enrolled patients met the new criteria for sepsis, required mechanical ventilation at the time of admission to the ICU, and survived for at least 21 days after ICU admission. Intravenous titratable sedation was applied when the patient's condition required it, following a titration protocol standardized by the medical intensive care unit. Standard clinical practices for weaning from mechanical ventilation (i.e., pressure support and spontaneous breathing trials) were performed throughout the study period.

Data collection. Clinical data were retrieved from medical records and included age, gender, Sequential Organ Failure Assessment (SOFA) score 19, Acute Physiological Assessment and Chronic Health Evaluation II (APACHE II) score 38, Charlson index and underlying comorbidities 39,40, and other clinical factors possibly related to prolonged ventilator use. We also collected hematology, biochemistry, fraction of inspired oxygen (FiO2), and PaO2/FiO2 on admission day 7 to follow up on the patient's condition. All variables were evaluated as possible risk factors for prolonged ventilator use.
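To make the dependence definition above operational, the following sketch classifies a patient from a daily record of ventilator hours; the record format (a mapping from ICU day to hours ventilated) is a hypothetical convention of ours, not something specified in the paper.

```python
from typing import Mapping

def is_ventilator_dependent(daily_vent_hours: Mapping[int, float]) -> bool:
    """Study definition of long-term ventilator dependence: mechanical
    ventilation for more than 6 hours per day on more than 21 days."""
    days_over_6h = sum(1 for hours in daily_vent_hours.values() if hours > 6)
    return days_over_6h > 21

# Example: a patient ventilated 24 h/day for 25 consecutive ICU days
# is classified as ventilator-dependent.
print(is_ventilator_dependent({day: 24.0 for day in range(1, 26)}))  # True
```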
Score construction and calculation. Categorical variables were analyzed using the chi-squared test, and continuous variables were compared using Student's t-test; a two-tailed p value of < 0.05 was considered significant. Univariate analysis was used to identify significant risk factors associated with ventilator dependence, and variables associated with ventilator dependence in the univariate analysis (p < 0.1) were included in a multivariate analysis model. Using the stepwise method, the independent factors associated with ventilator dependence were identified to build a score, with model fit assessed by the Hosmer-Lemeshow goodness-of-fit test. A clinical score (VD risk score) was calculated based on the four variables independently associated with ventilator dependence in the multivariate analysis. The number of points assigned to each variable in the VD score was adjusted in proportion to its beta coefficient in the regression model, and the VD risk score is the sum of the points for these four variables. The receiver operating characteristic (ROC) curve was used to evaluate the performance of the VD risk score. All statistical analysis was performed using the SPSS 22.0 software package (SPSS Inc., Chicago, IL, USA).
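Although the paper reports ROC results from SPSS, the same evaluation can be sketched in Python; the patient labels and scores below are synthetic stand-ins, and scikit-learn is our substitute tooling rather than the software actually used in the study.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic example: y = 1 for ventilator-dependent patients,
# scores = each patient's VD risk score (0-6).
y = np.array([1, 0, 0, 1, 0, 1, 0, 0])
scores = np.array([4, 1, 0, 3, 2, 6, 0, 1])

auc = roc_auc_score(y, scores)               # area under the ROC curve
fpr, tpr, thresholds = roc_curve(y, scores)  # one operating point per cutoff

# Sensitivity and specificity at the score >= 1 cutoff used in the paper
flagged = scores >= 1
sensitivity = (flagged & (y == 1)).sum() / (y == 1).sum()
specificity = (~flagged & (y == 0)).sum() / (y == 0).sum()
print(f"AUC={auc:.3f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```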