[Record 259362990 | source: pes2o/s2orc | version: v3-fos-license]
Secondary User Access Control (SUAC) via Quadratic Programming in Massive MIMO Cognitive Radio Networks

In cognitive radio (CR) networks, a secondary user access control (SUAC) technique has been designed to enhance spectrum efficiency, in which a jamming signal is deliberately injected to maintain reliable sensing performance for authorized secondary users (A-SUs) while degrading the spectrum sensing results of unauthorized secondary users (UA-SUs). We consider the problem of jamming signal design in massive multiple-input multiple-output (MIMO) CR networks in which each primary user, equipped with a large number of antennas, coexists with multiple secondary users. In this paper, we propose a jamming signal design framework that maximizes the jammer's influence on UA-SUs while minimizing its influence on A-SUs. The resulting problem is a non-convex quadratically constrained quadratic programming (QCQP) problem; a semidefinite relaxation (SDR) method provides one approximate solution, but it cannot meet our stringent constraints and lacks jamming efficiency. We propose a novel optimization algorithm based on the $K$-best methodology to design the jamming signal. Simulation results show the effectiveness of the proposed $K$-best based SUAC method in improving the spectrum sensing performance of A-SUs.

I. INTRODUCTION In traditional wireless communications systems, there exist significant spectrum under-utilization problems, where some spectrum bands are heavily occupied while others are kept vacant for a long time. Cognitive radio (CR) technology has been introduced to address this issue [1]. Specifically, there are two types of users in a CR network: primary users (PUs) and secondary users (SUs). Secondary users are allowed to access the spectrum only when the PUs are not utilizing it. Therefore, one of the crucial tasks in CR is spectrum sensing, which enhances spectrum efficiency [2], [3]. The cognitive radio network is very vulnerable to malicious user attacks due to its open and dynamic nature [4], [5], [6]. For example, [4] proposes a primary user emulation attack model, where attackers imitate the PU's behavior and mislead other SUs' spectrum access decisions. In [6], a most active band (MAB) attack model is presented, where an attacker selects and performs a denial-of-service (DoS) attack on the spectrum band with the most activity. Thus, it is important to design a robust CR system that maintains PU/SU communication quality under severe malicious user attacks. In [7], a secondary user access control (SUAC) framework has been introduced. Two types of secondary users are investigated in SUAC: authorized secondary users (A-SUs) and unauthorized secondary users (UA-SUs). When the PU is inactive and the spectrum is vacant, only A-SUs are authorized to utilize the spectrum. To ensure reliable spectrum sensing performance for A-SUs and poor sensing results for UA-SUs, a carefully designed jamming signal is injected at the transmission end; the jamming pattern can only be obtained by A-SUs, who use it to eliminate the jamming signal's influence when performing the spectrum sensing task. To handle A-SUs with different spectrum access priority levels, [8] proposes a prioritized SUAC (P-SUAC) mechanism.
Reference [9] presents the secondary user access control technique in a massive multiple-input multiple-output (MIMO) communications environment, where a jamming signal is generated from a PU transmitter based on the estimated channel state information (CSI) which has a small or negligible influence on A-SUs. The proposed jamming signal design optimization technique is within the quadratically constrained quadratic programs (QCQP) domain with a semi-definite relaxation (SDR) solution provided in [10], [11]. Sparse representation provides a powerful emerging model for signals, where a data source is approximated with a linear combination of a few atoms from an over-complete dictionary [12]. The theory of compressive sensing (CS) has shown that a sparse signal can be reconstructed exactly from many few measurements [13]. Since then, intensive research on sparse signal and data processing have been investigated in the area of wireless communications [14], such as compressive channel estimation in massive MIMO [15], compressive spectrum sensing in CR networks [16], and compressive sensing enabled large-scale wireless sensor networks (WSNs) [17]. Massive data and connectivity are expected to be supported in the next-generation wireless communications systems, such a huge amount of data transmission and storage brings tremendous challenges in signal processing in massive MIMO communications systems. Motivated by the proposed research in SUAC and the sparse representation research for wireless communications, the objective of this paper is to explore sparse jamming signal design for SUAC in cognitive radio networks by exploiting quadratic programming. The proposed jamming signal design is a sparse representation technique, where the majority of the jamming signal weights are zero. This implies that the proposed sparse jamming signal design approach is greener. In this paper, we consider the secondary user access control model in a massive MIMO communications environment and propose a jamming signal design approach that utilizes uplink CSI between the PU and SUs. To design the jamming signal, an optimization problem is considered and we show that such a non-convex optimization problem can be optimally solved in some special cases. In addition to that, we show that an alternative solution to solve our nonconvex problem can be determined through a K-best method based technique [18]. Specifically, K is a tunable parameter in the K-best method that controls performance and complexity tradeoffs. Consider this optimization problem, the original problem is decomposed into optimally solvable subproblems. We then formulate this jamming signal design to a non-convex p -norm optimization problem with 0 ≤ p ≤ 1. It has been shown that such p -minimization problem can be effectively solved via an iteratively reweighted algorithm [19], [20], [21], which consists of solving a sequence of weighted 2 -norm minimization problems where the weights used for the next iteration are computed based on the values of the current solution. Finally, we evaluate the proposed technique under practical scenarios with software simulation, in terms of the SU's spectrum sensing results. In summary, there are the following contributions presented in this paper. • We propose an effective sub-optimal approach based on the K-best method to address the jamming signal design of our non-convex QCQP problem at each iteration. The original problem has been decomposed into small sub-problems which can be optimally solved. 
• We propose a jamming signal design approach in CR networks with sparse weight representation, which leads to a greener system design. The sparsity is obtained by solving the proposed non-convex p -minimization (0 ≤ p ≤ 1) problem with an iterative reweighted approach. • We then evaluate our approach with practical scenarios simulation. We show that a sparse jamming signal weight can be effectively generated and the A-SUs spectrum sensing performance is well-maintained while UA-SUs' sensing performance is significantly impacted. This paper is organized as follows. Section II presents the system model and problem formulation. Section III shows the jamming signal design and detailed K-best method. Section IV describes the jamming signal with the general model considered. Section V presents an alternate optimization approach based on the SDR technique. Section VI describes the iteratively reweighted algorithms for sparse jamming signal design. Section VII presents simulation results and related discussions. Finally, conclusions are given in Section VIII. II. SYSTEM MODEL AND PROBLEM FORMULATION The proposed SUAC system model in a massive MIMO cognitive radio network is presented in Fig. 1. Each PU is composed of M antennas to serve multiple single-antenna SUs. Assume there are total L SUs with K a A-SUs and K ua UA-SUs (K a + K ua = L). h ∈ C M×1 represents the channel link between the PU and SU, s ∈ C M×1 is the transmitted PU message signal vector, g ∈ C M×1 denotes the jamming signal sent by the PU, and n denotes the white Gaussian noise with 0 mean and σ 2 variance. H 0 /H 1 denotes PU absent/present status. The jamming signal is only generated when the PU is absent, and the received signal r at the SU side can be expressed as To further simplify our explanations, the CSI between PU and A-SUs is represented as H a [h a,1 , h a,2 , . . . , h a,K a ] ∈ C M×K a , where h a,i ∈ C M×1 represents the channel link between the PU and i th A-SU. The channel link between the primary and UA-SUs is H ua [h ua,1 , h ua,2 , . . . , h ua,K ua ] ∈ C M×K ua , and h ua,i ∈ C M×1 represents the channel link between the PU and i th UA-SU. Given H a and H ua , the PU plans to generate jamming signal g to minimize g H h a,i and simultaneously maximize g H h ua,i . In the following sections, we present K-best method based algorithms to create the jammer. For convenience, we present some of the mathematical symbols used in this paper in Table 1. III. APPROACH I: JAMMING SIGNAL DESIGN We formulate designing the jamming signal problem as where R a,i = h a,i h H a,i ∈ C M×M (i = 1, . . . , K a ) denotes the covariance of i th A-SU. R ua,j = h ua,j h H ua,j ∈ C M×M (j = 1, . . . , K ua ) denotes the covariance matrix of j th UA-SU. τ is a large-magnitude scalar. We start by noting that the first constraint can be eliminated if the jamming vector g is in the null space of A-SU's corresponding channel covariance matrix R a,i . Let us define g = Bε, where B = ∩ K a i=1 null(R a,i ) is the intersection of vector space null(R a,i ) with dimension B ∈ C M×S (0 < S ≤ M) and satisfies B H B = I. The objective of (2) can be expressed as and the first constraint in (2) is eliminated since B H R a,i = 0, and we always have The second constraint of (2) can be further written as We can now equivalently express problem (2) as with dimension S × K ua , we provide the following proposition. Proposition 1: Any optimal solution of problem (5) must lie in Range(C ). 
Proof: We first provide the following QR de-compositions where Q is a semi-unitary matrix whenever it is non-zero, and R = [r i,j ] is an upper triangular matrix. The objective in problem (5) can be further written as From (8) we can infer that none of the constraints depend onβ, it is optimal to setβ equals to zero. Define as a K ua ×K ua permutation matrix and C = C . Specify a QR de-composition C = QR, where R is a K ua ×K ua upper triangular matrix with non-negative diagonal elements. Q is an S×K ua matrix that contains a zero-column corresponding to each diagonal element of R that is zero. With Proposition 1 in hand, we can expand the jamming signal vector we seek as ε = Qβ. From (8) and (9), problem (5) can be rewritten as A. K -BEST ALGORITHM In this section, we present the detailed steps of the proposed We present the steps of the K-best algorithm in the following paragraphs. • Step 1: There are K ua selection choices and each choice corresponded descendant is one column from matrix C . Through two-step sub-problems solving which is described in the following sections, we compute each descendant's metrics and sort the metric's value in ascending order. By selecting the K smallest values, we keep their descendants in the first step. • Step j(2 ≤ j ≤ K ua − 1): We expand each K survivor from the j − 1 step by considering all possibilities that are not been selected in previous steps. Thus, each survivor will be expanded with a new column from matrix C that corresponds to the newly selected descendant. In step j, there are K survivors and each survivor with K ua − j + 1 descendants. There are (K ua − j + 1)K updated optimization sub-problems that need to be solved in this step. Given the calculated value, we sort the metrics in descending order and choose those K lowest values corresponding to descendants in this step. • Step K ua (last step): There is only one remaining descendant for each one of the K remaining nodes from the previous K ua − 1 steps. There are K expanded optimization subproblems in the last step. By solving those sub-problems and sorting the value from low to high, the final survivor will be selected as the descendant with the smallest value. Fig. 2 shows a K-best algorithm diagram where there are 5 steps in this example, and we keep K = 2 descendants in each of the first K ua − 1 steps. For instance, after calculating all 5 sub-problems in step 1, we sort the calculated metrics from low to high and keep the user 2 and 4 as the survivors. In the last step, the descendant with the minimum metrics will be selected and the selected path is 4 → 3 → 2 → 1 → 5. B. SUCCESSIVE STEP In j th step, the associated matrix C has j − 1 columns from C and we have C = QR. The computation conducted from this survivor determined vector β = [β 1 , . . . , β j−1 ] T such thatε = Qβ is a sub-optimal solution to a relaxation of problem (10) which only has constraints corresponding to j − 1 columns of C. Let c denotes any column from the remaining K ua − j + 1 un-selected columns from C . denotes the updated and expanded β. Notice that γ and β are two complex variables that are determined by solving the following sub-problem. Notice that only the last constraint depends on the phases of β and γ , the other constraint and the objective function are invariant to the phases. Without loss of optimality, we can suppose that the phase of β equals to the phase of r j,j , and the phase of γ equals the phase of where we have a convex quadratic objective and convex constraint conditions. 
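For orientation, the K-best column-selection search of Section III-A can be summarized as a short routine. The sketch below is an illustration only, not the paper's implementation: the per-candidate metric is abstracted as a caller-supplied `subproblem_metric`, a hypothetical stand-in for the value of the small convex sub-problem just formulated, whose closed-form KKT solution the text derives next.

```python
import numpy as np

def k_best_search(C, K, subproblem_metric):
    """Generic K-best column-ordering search (sketch, Section III-A style).

    C                 : (S x K_ua) complex matrix whose columns are selected one per step.
    K                 : number of survivors kept after each of the first K_ua - 1 steps.
    subproblem_metric : callable(selected_indices, candidate_index) -> float; a hypothetical
                        stand-in for the sub-problem value used to rank each expansion.
    Returns the best full ordering of column indices and its final metric.
    """
    n_cols = C.shape[1]
    survivors = [([], 0.0)]                       # each survivor: (ordered index list, metric)
    for step in range(n_cols):
        expansions = []
        for path, _ in survivors:
            for j in (j for j in range(n_cols) if j not in path):
                val = subproblem_metric(path, j)  # solve the small sub-problem for this expansion
                expansions.append((path + [j], val))
        expansions.sort(key=lambda e: e[1])       # ascending metric, keep the smallest values
        survivors = expansions[: (K if step < n_cols - 1 else 1)]
    return survivors[0]
```

Setting K = 1 reduces the search to a greedy ordering of the UA-SU columns; larger K widens the search at the cost of roughly (K_ua - j + 1)K sub-problem evaluations at step j, as noted in the step description above.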
We can explicitly solve this optimization problem through Karush-Kuhn-Tucker (KKT) conditions. We start by noting that either Detailed proof can be found in Appendix B. (12) is given as follows: IV. APPROACH II: GENERAL MODEL In previous Approach I analysis, jamming vector g is calculated by maximizing impact on UA-SUs (g H R ua g ≥ τ ) and minimizing the influence on A-SUs (g H R a g = 0). However, under the condition that A-SUs channel vector h a and UA-SUs channel vector h ua are highly correlated, it will be challenging to obtain g satisfies the constraints shown in Approach I at the same time. To address this issue, consider the following problem, where w i ∈ R + (i = 0, 1, . . . , K a ) are given weights. Specifically, if we want to find a jamming signal g has a negligible effect on A-SUs, we could change w i (i = 1, 2, . . . , K a ) to be a large value, in order to force the jamming signal within the desired subspace. Define a matrix and a matrix H a = [h a,1 , . . . , h a,K a ] ∈ C M×K a . Consider following QR de-composition where Q is a semi-unitary matrix whenever it is non-zero, and R = [r i,j ] is an upper triangular. Suppose any candidate solutions for problem (13) can be represented as g = Qβ + Qβ, where Range(Q) ∪ Range(Q) spans the whole space, Q H Q = I &Q H Q = 0. We extend Proposition 1 to obtain the following Proposition 3. The detailed proof can be found in Appendix C. Proposition 3: Any optimal solution of problem (13) must lie in Range([C , H a ]). From (60) we have h H a,iQβ = 0. We then rewrite the objective function in problem (13) as follows From Proposition 3, we can further simplify the objective by settingβ to zero. Problem (13) can be expressed as We then present a solution to solve the problem (16) via a sub-problem approach based on the K-best method. A. SUCCESSIVE STEP In j th step, assume the associated matrix C has j − 1 columns from C . The computation conducted from this survivor determined vector β = [β 1 , . . . , β j−1 ,β 1 , . . . ,β K a ] T . Let c denotes any column from the remaining K ua − j + 1 un-selected columns from C . The updated de- denotes the updated and expanded β . Here γ, β,β 1 , . . . ,β K a are K a + 2 variables that are determined by solving following optimization problem min γ,β,β 1 ,...,β Ka ∈C which can be expressed as the following compact form r k,j+i β * k , r j,j+i , r j+1,j+i , . . . , r j+i,j+i , 0, . . . , 0 Denote y D 1 2 x with dimension y ∈ C (K a +2)×1 , and we then have x = D − 1 2 y. Optimization problem (18) can be rewritten as Define a matrixC = [D − 1 2 v, D − 1 2 u] ∈ C (K a +2)×2 , and the corresponded QR de-composition isC = UL, where U ∈ C (K a +2)×2 is a semi-unitary matrix satisfies U H U = I and L ∈ C 2×2 is an upper triangular matrix which can be represented as Without loss of optimality, denote y = U[ν, ζ ] T , the new formulation can be expressed as Notice that only the last constraint depends on the phase of ν whereas the other constraint and the objective function are invariant to them. Thus, we can suppose that the phase of ν equals to the phase of l 1,2 . Suppose |ν| = (ã +b) with a = 1 l 1,1 , and |ζ | =c, problem (23) is equivalent to miñ b,c∈R + ã +b 2 +c 2 s.t.ã = 1 l 1,1 , l 1,2 ã +b +cl 2,2 ≥ 1 (24) such formulation is a convex optimization problem and can be solved in closed form, which is provided in the following Proposition 4. 
Proposition 4: Any optimal solution of problem (23) is given as follows: • ν opt = 1 When > 0, the optimal solution is obtained by investigating relationships between the circle, (b + 1 l 1,1 ) 2 +c 2 = C, and the line, |l 1,2 |b + l 2,2c = . Specifically, the line is tangential to the circle at the point denoted as S. In Fig. 3, the optimal solution is marked by a red dot. As shown in Fig. 3 (left), the tangent point S is the optimal solution if S is located in the first quadrant. Otherwise, the optimal solution is obtained at the intersection point (shown in Fig. 3 (right)) of the straight line and the vertical axis (c-axis). Denote the coordinate of point S as (Sb, Sc), and after solving the following two equations, V. ALTERNATE METHOD In this section, we briefly show an alternative jamming signal design method based on the semidefinite relaxation (SDR) [10] technique, which has been discussed in [9]. Consider the constraint problem shown in Eq. (2), which we have rewritten as follows Such QCQP problem can be solved via the semidefinite relaxation approach. Denote G = gg H , and G is a positive symmetric semidefinite matrix. Without considering the rank of matrix G equals to one, the problem of (27) can be represented as where H M×1 denotes the set of M × M complex Hermitian matrices. ψ t is a vector randomly distributed with ψ t ∼ CN (0, G * ), t = 1, . . . , T, where T denotes the randomization trials number. Consider each trail (say t th trail), we obtain the feasible solution The best solution after T trails can be determined by solving the problem g * = arg min g t (ψ) g t (ψ) H g t (ψ). The worstcase computation complexity of the SDR problem (28) is O(max{L, M} 4 M 1/2 log(1/δ)) [10], where δ > 0 is a given solution accuracy. VI. ITERATIVELY REWEIGHTED ALGORITHMS FOR SPARSE JAMMING SIGNAL DESIGN From the perspective of a jammer in the cognitive radio network, the jammer intends to generate a jamming signal with a sparse configuration in order to mitigate the influence added to the CR network. To design a sparse jamming signal, instead of formulating a 0 optimization problem (NP-hard), we consider 1 optimization as an approximation solution. In addition, it was shown empirically that using p with p < 1 can obtain more sparse results than with p = 1 [19]. In this research, we plan to consider p -norm optimization problem with p ∈ [0, 1] in the jamming signal g design. In particular, similar to the problem shown in (13), we re-design our problem in the following format: where the objective function is a p -norm of the jamming signal g, with 0 ≤ p ≤ 1. Early papers considered iteratively reweighted algorithms for solving this p (0 ≤ p ≤ 1) norm optimization problem [20], specifically, we replace the p objective function in (29) by a weighted 2 -norm g p = M k=1 α k g * k g k , where g k represents the k th component in vector g, and weights α k are computed from the previous iterate g (t−1) . After considering the damping approach, the weight α k is given as α k = (g * k g k + φ) p 2 −1 . The parameter φ > 0 is introduced in order to provide stability [21]. Appropriate determination of the value of φ is very crucial in order to perform a fast, accurate, and stable algorithm process. Given ϒ = diag{α 1 , . . . , α M }, we have g p = ϒ 1 2 g 2 . Problem (29) can now equivalently be expressed as The above non-convex optimization problem can be solved via an iterative algorithm that alternates between estimating g and updating the weight α k . 
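As a compact illustration of this alternation, the sketch below implements only the reweighting logic described in the text; `solve_weighted_qcqp` is a hypothetical placeholder for the weighted non-convex QCQP step, which the paper solves with the K-best approach.

```python
import numpy as np

def reweighted_lp_jammer(solve_weighted_qcqp, M, p=0.5, phi=0.5, max_iter=20, tol=1e-4):
    """Iteratively reweighted l_p (0 <= p <= 1) jamming-signal design (sketch).

    solve_weighted_qcqp : callable(alpha) -> g; hypothetical solver returning the jamming
                          vector that minimizes || diag(alpha)^(1/2) g ||_2 subject to the
                          problem's constraints (the K-best step in the paper).
    M                   : number of PU antennas (length of g).
    phi                 : damping parameter keeping the weights finite (phi > 0).
    """
    alpha = np.ones(M)                    # initial weights (assumed uniform)
    g = np.zeros(M, dtype=complex)
    for _ in range(max_iter):
        g_new = solve_weighted_qcqp(alpha)                      # estimate g with current weights
        alpha = (np.abs(g_new) ** 2 + phi) ** (p / 2.0 - 1.0)   # alpha_k = (|g_k|^2 + phi)^(p/2 - 1)
        if np.linalg.norm(g_new - g) < tol * max(np.linalg.norm(g), 1.0):
            g = g_new
            break
        g = g_new
    return g
```

Smaller p drives more entries of g toward zero, while the damping parameter φ keeps the weights α_k finite when g_k ≈ 0, consistent with the φ-regularization discussion later in the paper.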
During each iteration, g will be calculated using the proposed K-best algorithm. The steps of the algorithm are as follows: (1). Initialize the iteration number t to zero, determine the parameter φ's initial value, and set α (2). Solve the following non-convex QCQP minimization problem via the K-best approach: Detailed procedures to solve this problem can be seen in Appendix D. Assume there are total T r iterations, the worst-case (i.e., the convergence condition during the iteration never satisfied) computation complexity of the proposed iteratively reweighted algorithm is O(T r K ua ML 2 ). VII. SIMULATION RESULTS AND RELATED DISCUSSIONS In this section, we present the simulation results of the proposed method. We consider a single-cell scenario in which the PU equipped with M = 64 transmission antennas, communicates with L single antenna-equipped SUs. We adopt the popular one-ring channel model in our simulation, which has been extensively studied to characterize massive MIMO channels [22], [23], [24]. Denote the total number of the propagation paths as P, the channel vector h can be expressed as h = 1 ] T is the steering vector, χ represents the signal wavelength, d denotes the distance between neighboring antenna elements, and θ p ∈ [0, π] represents the p th path azimuth angle of arrival (AoA). In our simulation, we have P = 100, d = 1 2 χ , and = 1. The mean angle of i th SU,θ i , is assumed to be uniformly distributed over an interval [ π 4 − π 6 , π 4 + π 6 ]. Denote the angle of i th SU's p th path as θ i,p , which is assumed uniformly distributed over an interval [θ i − π 10 ,θ i + π 10 ]. In the following figures, the key simulation results are presented as follows. There are two important measures in cognitive radio: PU detection probability (P d ) and false alarm probability (P f ). Notice that, P f should be a very small value in order to let SUs find the empty spectrum efficiently, and P d is a large value (i.e., P d ≥ 90%) in system design in order to avoid significant interference to the PUs. A. PERFORMANCE OF VARIOUS K In this section, we present the spectrum sensing performance of SUs with various numbers of K considering the K-best optimization algorithm. Number of SUs is L = 30, among which there are K a = 15 A-SUs and K ua = 15 UA-SUs. Fig. 4 shows A-SU and UA-SU's false alarm probability P f versus signal-to-noise ratio (SNR) under the condition that the detection probability is P d = 99% consider Approach I. Perfect CSI scenario is considered and K = 1, 5, 15. From the simulation results, we have the following observations. 1) For A-SUs with a different number of K, P f performance improves with the increasing SNR values. 2) The spectrum detection performance of UA-SU is significantly degraded by the proposed method (Approach I). 3) For A-SUs, the detection performance with various K is very close. 4) For UA-SUs, the detection performance becomes worse given larger K. Fig. 5 presents A-SU and UA-SU's P f versus SNR under the condition that P d = 99% consider Approach II. K = 1, 15, and perfect CSI is considered. We set w i = 100(i = 1, . . . , K a ) and w 0 = 1 in Eq. (13). From the simulation results we can see that: 1) A-SUs spectrum sensing performance improves with the increasing SNR value and is irrelevant to the K value. 2) UA-SUs spectrum sensing performance was significantly degraded by the proposed Approach II algorithm. A larger K value corresponds to better jamming performance. B. 
PERFORMANCE OF VARIOUS W I IN APPROACH II In this section, we show the SUs spectrum sensing performance in Approach II with a various number of weights w i (w i = 10, 1000) in Eq. (13). We consider K a = 15 A-SUs and K ua = 15 UA-SUs in our simulation. Fig. 6 shows A-SU and UA-SU's P f versus SNR given P d = 99%, K = 5, and perfect CSI. We have the following observations from the simulation results. 1) For A-SUs, larger weight w i value (w i = 1000) shows better spectrum sensing performance. 2) For UA-SUs, smaller weight w i value (w i = 10) have a better jamming performance. C. DEPENDENT CHANNEL VECTOR BETWEEN A-SUs AND UA-SUs We consider the spectrum sensing performance when the channel vector of A-SUs (h a ) and UA-SUs (h ua ) are highly dependent for Approach I and II. Specifically, consider there are 25 A-SUs (K a = 25) and 5 UA-SUs (K ua = 5). We first randomly generate the A-SUs channel vector based on the one-ring model and then generate the UA-SUs channel vector by taking a linear combination of the generated A-SUs channel. Fig. 7 presents the A-SU/UA-SU spectrum sensing performance versus SNR given P d = 99%, K = 5, w i = 100, and perfect CSI. From the simulation results, we have the following observations. 1) For Approach I, both A-SUs and UA-SUs spectrum sensing performance improves with increasing SNR value. There will be no secondary user access control given the larger SNR value. Performance is significantly affected due to the channel dependence between A-SU and UA-SU. 2) For Approach II, with increasing SNR value, only A-SUs spectrum sensing performance improved. UA-SU sensing performance is significantly degraded by the proposed algorithm. The channel dependence between A-SUs and UA-SUs will not affect Approach II algorithm effectiveness. The simulation results verified our design purpose for Approach II. Specifically, if the A-SUs channel vector h a and UA-SUs channel vector h ua are dependent, it will be challenging to generate the jamming signal satisfies the constraints shown in Approach I at the same time. Approach II is designed to address this issue, where the objective function is to minimize the weighted sum of the jamming signal vector squares and the jamming signal influence on K a A-SUs. Thus, if the channel vectors of A-SUs and UA-SUs are highly correlated or dependent, the spectrum sensing result of Approach II has a much better performance compared to that of Approach I, as shown in Fig. 7. D. PERFORMANCE OF ITERATIVELY REWEIGHTED ALGORITHMS FOR JAMMING SIGNAL DESIGN In this section, we show the spectrum sensing performance considering the iterative reweighted algorithms. We consider L = 30 SUs (15 A-SUs and 15 UA-SUs), K = 15, and P d = 99% in our simulation. and M = 64. The damping parameter φ has been investigated with various values, say φ = 0.2 and φ = 0.8. As we discussed in Section VI, the damping parameter φ is designed to regularize the optimization process. Without the damping parameter, the weight α k is undefined whenever g k = 0. Our simulation result shows that a larger value of φ (φ = 0.8) corresponds to better jamming performance: A-SU has better spectrum sensing performance while UA-SU has worse sensing performance. Decreasing φ too soon results in poorer performance. In our future research, we plan to perform experiments with the φ-regularization strategy [19] of using a relatively large φ and then repeating the process by decreasing the damping parameter values after convergence to recover the sparse jamming signal. Fig. 9 and Fig. 
10 present the false alarm probability P f of A-SU/UA-SU versus SNR given φ = 0.5 with various values of p. The BS antenna number is M = 64 for Fig. 9 and M = 128 for Fig. 10. From the simulation results, we can see that there is no significant performance difference among different p values (p = 0.2, 0.5, and 0.8). Sparse jamming signal representation (lower p values) will maintain good spectrum sensing performance for A-SUs while introducing a significant impact on UA-SUs with a much lower power cost. Fig. 11 and Fig. 12 present the non-zero element ratio of the jamming signal versus p value given φ = 0.5. The BS antenna number is M = 64 for Fig. 11 and M = 128 for Fig. 12. In our simulation, we set the jamming signal element equal to zero whenever the element magnitude is smaller than 10 −4 , and count the non-zero element number, divide the antenna number M in order to calculate the nonzero element ratio. From Fig. 11 and Fig. 12, we can see that the non-zero element ratio rises with p value increasing for both M = 64 and M = 128. In Fig. 13, we present the false alarm probability of A-SU/UA-SU versus SNR given p = 0.5 and φ = 0.5 with 2) For UA-SUs, the spectrum sensing performance is significantly degraded. Larger BS antenna number M (i.e., M = 256) shows better jamming performance. From the above observations, we can see that there is a trade-off between improving spectrum sensing performance on A-SUs and having a powerful jammer on UA-SUs when selecting large or small M values. A larger value of M corresponds to a better jammer while the spectrum sensing performance of A-SUs degrades. ratio decreases (the jamming signal becomes sparser) with the M value increasing. Notice that our proposed iteratively reweighted jamming signal design not only shows equally good or better performance compared with the traditional SUAC method, but we also present a sparse jamming signal weight which leads to a greener system design. The visualization of the jamming signal weights generated from our proposed iteratively reweighted algorithm and traditional SUAC is illustrated in Fig. 15, where we plot the weight magnitude of a 16 × 100 matrix with M = 16 antennas and 100 trails. The figure colors correspond to the data values of the weight matrix and the colorbar on the right indicates the mapping of data values into the colors. From the figure, we can see that our iteratively reweighted approach shows a very sparse jamming signal weight solution. VIII. CONCLUSION We proposed a new formulation for jamming signal design that uses the channel state information between SU and PU in a massive MIMO communication architecture. We consider designing the jamming signal process as a non-convex quadratic programming optimization problem and solve this problem via the K-best method. We have shown that A-SUs are able to achieve reliable spectrum sensing performance while UA-SUs suffer significant performance degradation given the carefully designed jamming signal.
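For readers reproducing Section VII, the following hedged numpy sketch draws one-ring-type channel vectors and computes the non-zero-element ratio reported in Figs. 11 and 12. Only the antenna count, path count, angle intervals, and the 10^-4 threshold come from the paper; the uniform-linear-array steering-vector convention and unit path gains are assumptions filling in expressions lost in extraction.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_ring_channel(M=64, P=100, d_over_lambda=0.5):
    """Draw one channel vector h (length M) from a one-ring-type model (sketch).

    Assumed convention: ULA steering vector with unit path gains; angle statistics follow
    Section VII (mean AoA uniform in [pi/4 - pi/6, pi/4 + pi/6], per-path AoA within
    +/- pi/10 of the mean).
    """
    theta_bar = rng.uniform(np.pi / 4 - np.pi / 6, np.pi / 4 + np.pi / 6)
    thetas = rng.uniform(theta_bar - np.pi / 10, theta_bar + np.pi / 10, size=P)
    m = np.arange(M)[:, None]                                     # antenna index
    steering = np.exp(1j * 2 * np.pi * d_over_lambda * m * np.cos(thetas)[None, :])
    return steering.sum(axis=1) / np.sqrt(P)

def nonzero_ratio(g, threshold=1e-4):
    """Fraction of entries above the magnitude threshold, as used for Figs. 11-12."""
    return np.count_nonzero(np.abs(g) > threshold) / g.size

h = one_ring_channel()                    # one channel draw
g_demo = np.zeros(64, dtype=complex)      # toy sparse "jamming" vector for illustration only
g_demo[:8] = h[:8]
print(nonzero_ratio(g_demo))              # -> 0.125
```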
[added: 2023-07-08T13:52:19.922Z | created: 2023-01-01T00:00:00.000 | metadata: { "year": 2023, "sha1": "441591c5516bd16e96e8ea9e56fb27d1dbdf00b5", "oa_license": "CCBY", "oa_url": "https://ieeexplore.ieee.org/ielx7/8782661/8901158/10168958.pdf", "oa_status": "GOLD", "pdf_src": "IEEE", "pdf_hash": "441591c5516bd16e96e8ea9e56fb27d1dbdf00b5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [] }]

[Record 218718848 | source: pes2o/s2orc | version: v3-fos-license]
Quark-Novae in the outskirts of galaxies: An explanation of the Fast Radio Burst phenomenon

We show that old isolated neutron stars in groups and clusters of galaxies experiencing a Quark-Nova phase (QN: an explosive transition to a quark star) may be the sources of FRBs. Each fragment ("chunk") of the ultra-relativistic QN ejecta provides a collisionless plasma for which the ambient medium (galactic/halo, the intra-group/intra-cluster medium) acts as a relativistic plasma beam. Plasma instabilities (the Buneman and the Buneman-induced thermal Weibel instabilities, successively) are induced by the beam in the chunk. These generate particle bunching and observed coherent emission at GHz frequencies with a corresponding fluence in the Jy ms range. The duration (from micro-seconds to hundreds of milli-seconds), repeats (on timescales of minutes to months), frequency drift and the high occurrence rate of FRBs (a few per thousand years per galaxy) in our model are in good agreement with observed properties of FRBs. All FRBs intrinsically repeat in our model, and non-repetition (i.e. the non-detection of the fainter QN chunks) is detector-dependent and an artifact of the bandwidth and of the fluence sensitivity threshold. Key properties of FRB 121102 (its years of activity) and of FRB 180916.J0158+65 (its 16 day period) are recovered in our model. We give specific predictions, notably: (i) because of the viewing angle (Doppler) effect, sub-GHz detectors (CHIME) will be associated with dimmer and longer duration FRBs than GHz detectors (e.g. Parkes and ASKAP); (ii) CHIME should detect on average 5 times more FRBs from a given QN than ASKAP and Parkes; (iii) super FRBs (i.e. tens of thousands of Jy ms fluence) should be associated with intra-cluster medium QNe; (iv) monster FRBs (i.e. millions of Jy ms fluence) associated with inter-galactic medium QNe might plausibly occur with frequencies at the lower limit of LOFAR's low-band antenna.

INTRODUCTION The era of FRB science began with the Lorimer burst (Lorimer et al. 2007) and was followed by a decade of discovery of dozens of intense, millisecond, highly dispersed radio bursts in the GHz range (see http://frbcat.org/; Petroff et al. 2016). An FRB may consist of a single pulse or of multiple pulses of millisecond duration. While most of these FRBs were one-off events, a few were repeaters (Spitler et al. 2016; Scholz et al. 2016; CHIME/FRB Collaboration 2019a,b). FRB dispersion measures (DM) of hundreds of pc cm^-3 put them at extra-Galactic to cosmological distances, which makes them very bright (> 10^41 erg s^-1), with the corresponding high brightness temperatures requiring a coherent emission mechanism (Kellermann & Pauliny-Toth 1969; see also Katz 2014; Popov et al. 2018). Observations and derived properties of these FRBs can be found in the literature (Thornton et al. 2013; Spitler et al. 2014; Kulkarni et al. 2014; Petroff et al. 2016; Ravi et al. 2016; Gajjar et al. 2018; Michilli et al. 2018; Cordes & Chatterjee 2019, with a recent analysis given in Lorimer 2018). The large beam width of current radio telescopes makes it difficult to pin-point the host galaxies of most FRBs, let alone their association with known astrophysical objects. This makes it hard to constrain models despite the numerous ideas suggested in the literature (see Platts et al. (2018) for an account). The repeating nature of FRBs has been used to argue against catastrophic scenarios, but as we show here, it is not necessarily the case.
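To give a sense of the scale of the dispersion quoted above, the short sketch below evaluates the standard cold-plasma dispersion delay for a DM of a few hundred pc cm^-3. The dispersion constant is the usual textbook value and the 1.2-1.5 GHz band is only a representative L-band choice; neither number is taken from this paper.

```python
K_DM_MS = 4.1488  # ms GHz^2 cm^3 pc^-1, standard cold-plasma dispersion constant (not from this paper)

def dispersion_delay_ms(dm_pc_cm3, freq_ghz):
    """Arrival-time delay (ms), relative to infinite frequency, for a given dispersion measure."""
    return K_DM_MS * dm_pc_cm3 * freq_ghz ** -2

# Dispersion sweep across a representative 1.2-1.5 GHz band for DM = 500 pc cm^-3
dm = 500.0
sweep = dispersion_delay_ms(dm, 1.2) - dispersion_delay_ms(dm, 1.5)
print(f"dispersion sweep across 1.2-1.5 GHz: {sweep:.0f} ms")   # roughly half a second
```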
In , we proposed a connection between Quark-Novae (QNe) and long-duration Gammaray bursts (LGRBs). In this model, following a corecollapse Supernova (SN) explosion of a massive star, a rapidly rotating neutron star (NS) with a period of a few milliseconds experiences quark deconfinement in its core due to spin-down. Then an explosive combustion of neutrons to quarks yields a QN which leaves behind a quark star, while ejecting the outermost layers of the NS at ultra-relativistic speeds. The QN ejecta fragments into millions of dense chunks (see Ouyed & Leahy 2009) made of NS crust material, which is favorable to r-process nucleo-synthesis (Jaikumar et al. 2007;Kostka et al. 2014). As shown in , the QN can occur years to decade following the SN proper. In the blow-out regime (i.e. E SpD > E SN , when the spin-down energy exceeds the SN kinetic energy) the magnetized turbulent SN ejecta consists of numerous filaments. The interaction of the QN fragments with these filaments gives the intermittency seen in LGRB prompt emission while providing us with simultaneous fits to the lightcurve and spectra of LGRBs. Other key properties of LGRBs are accounted for in our model (see §4 and Figure 6 in Ouyed al. 2019 for details). In this paper we focus on isolated QNe; i.e. on old NSs experiencing the QN phase, outside their birth galaxies. In particular slowly rotating, massive NSs (those born from stellar progenitors in the 20-40M mass range) rely on quark nucleation in their core to trigger an explosive hadronic-to-quark-matter transition. For nucleation timescales of the order of ≥ 10 8 years (e.g. Bombaci et al. 2004;Harko et al. 2004;Marquez & Menezes 2017 and references therein), a candidate NS with a typical kick velocity of ∼ 300 km s −1 would travel a distance of >∼ 30 kpc from its birth place. I.e. the NSs would explode as QNe in the intra-group or intra-cluster medium noting that Galaxies in groups and in clusters lose a good portion of their interstellar gas, as well as of their coronal (or halo) gas, to ram pressure stripping by the intra-group or intra-cluster hot diffuse gas (e.g. Gunn & Gott (1972); Quilis et al. (2000); Larson et al. (1980); McCarthy et al. (2008)). The millions of chunks of the ejected ultra-relativistic QN ejecta (the NS's outermost layers) travel through the ambient medium/plasma and expand until they become collisionless. Then they will experience two collisionless instabilities: first the Buneman instability followed by the Buneman-induced thermal Weibel instability. This triggers particles bunching, coherent synchrotron emission and FRBs as shown in this paper. Here we consider three media in a plasma state: (i) the intra-group medium (IGpM) representing the plasma in groups of galaxies, with number density n ns amb. 10 −4 -10 −2 cm −3 (e.g. Fabian 1994 and references therein); (iii) the intergalactic medium (IGM) with n ns amb. 10 −7 cm −3 (e.g. McQuinn 2016 and references therein). To avoid confusion with the intergalactic medium, hereafter ICM refers jointly to the hot diffuse gas observed in groups and clusters of galaxies. Because the majority of galaxies are in groups (e.g. Tully 1987) we take conditions in the IgCM with typical ambient density of n ns amb. = 10 −3 cm −3 as our fiducial value. The paper focuses on the interaction of the QN chunks with such an ambient medium/plasma and is structured as follows: In §2 we give a brief overview of the general properties of the QN ejecta, and how it becomes collisionless as it travels in the ambient medium. 
We also describe the characteristics of the relevant plasma instabilities. The resulting coherent synchrotron emission is analyzed in §3 with the application to FRBs given in §4. Two special cases (FRB 121102 and FRB 1809.J0158+65) are studied in §5. A discussion follows in §6, where we also list our model's predictions and limitations, before we conclude in §7. THE QN AND ITS EJECTA The energy release during the conversion of a NS to a QS is of the order of ∼ 3.8 × 10 53 erg × (M NS /2M ) × (∆E dec. /100 MeV) for a NS mass of M NS = 2M and a conversion energy release of about ∆E dec. = 100 MeV per neutron converted (e.g. Weber (2005)). More precisely, the energy release is a fraction of the combined conversion (of neutrons to free quarks) energy and gravitational binding energy (e.g. Keränen et al. 2005;Niebergal 2011;Ouyed, A 2018; see also §2.1 in and references therein for details). A percentage of this energy is in the form of the kinetic energy of the QN ejecta which amounts to E QN ∼ 10 52 -10 53 erg when the converting NS is hot (as in the case of a QN in the wake of core-collapse SN (ccSN; see ). For the case of an old isolated cold NS (as is the case in our model for FRBs here), the very slow nucleation timescales means that the NS loses most of the conversion energy to neutrinos imparting an even smaller percentage of the conversion energy to the kinetic energy of the QN ejecta (i.e. E QN ∼ 10 51 -10 52 erg). The QN ejecta consists of the outermost crust layers of the NS with a mass M QN ∼ 10 −5 M and an associated Lorentz factor Γ QN = E QN /(M QN c 2 ) 10 2 -10 3 . Ejecta properties and statistics As described in Ouyed & Leahy (2009), the QN ejecta breaks up into millions of chunks. Here we adopt N c = 10 6 as our fiducial value for the number of fragments which yields a typical chunk mass 1 of m c = M QN /N c 10 22.3 gm × M QN,28.3 /N c,6 . The chunk's Lorentz factor is taken to be constant with Γ c = Γ QN = 10 2.5 corresponding to a fiducial QN ejecta's kinetic energy E QN = Γ c M QN c 2 5.7 × 10 51 erg; i.e. roughly 1% of the conversion energy is converted to the kinetic energy of the QN ejecta; these fiducial values are listed in Table 1. Below we summarize some general properties of the QN ejecta (see details in §2.1 and Appendix B.1 in . Hereafter, unprimed quantities are in the chunk's reference frame while the superscripts "ns" and "obs." refer to quantities in the NS frame (i.e. the ambient medium) and the observer's frame, respectively. The chunk's Lorentz factor Γ c does not vary in time, during the FRB phase, in our model. The transformation from the local NS frame to the chunk's frame is given by dt ns = Γ c dt while the transformations from the chunk's frame to the observer's frame (where the emitted light is being observed) are dt obs. = (1 + z)dt/D(Γ c , θ c ), ν obs. = D(Γ c , θ c )ν/(1 + z) with z the source's redshift and θ c the viewing angle (the angle between the observer and chunk's velocity vectors). We note the following: • The chunks are equally spaced in solid angle Ω around the explosion site. Defining N θ as the number of chunks per angle θ, we write dN θ /dΩ = const. = N c /4π with dΩ = 2π sin θdθ so that dN θ /dθ = (N c /2) sin θ (N c /2)θ with θ c << 1. Because N c π∆θ 2 c = 4π when the chunks first form, the average angular separation between them is yielding ∆θ s 1/Γ c for our fiducial value of N c = 10 6 and Γ c = 10 2.5 . The average change in the radial angle from one random chunk to another (i.e. 
the actual separation projected onto the radial direction; see where the last expression applies for higher i >∼ 2. Because the chunks are not precisely equally spaced, the variation in f (θ c ) between chunks is somewhat variable. The collisionless QN chunks The early dynamical and thermal evolution of the QN chunks is given in Ouyed & Leahy (2009); see also §2.3 in Ouyed al. (2019) for a recent review. We give the later evolution here: (i) The QN chunk becomes optically thin to photons when it expands to a radius R c,opt. 2.2 × 10 10 cm × m 1/2 c,22.3 κ 1/2 c,−1 , obtained by setting R c = 1/κ c ρ c . Here κ c = 0.1 cm 2 gm −1 is the chunk's opacity, ρ c = n c m H = (3m c /4πR 3 c ) × m H is its density and m H the hydrogen mass. The corresponding chunk's baryon number density is n c, The chunk is optically thin to hadronic collisions when it expands to a radius R c,HH 1.5 × 10 9 cm × m 1/2 c,22.3 σ 1/2 HH,−27 , derived from R c = 1/n c σ HH ; σ HH,−27 is the hadron-hadron cross-section expressed in units of milli-barns (Letaw et al. 1983;Tanabashi et al. 2018); (iii) A QN chunk is subject to electron Coulomb collisions which allow it to thermalize and expand beyond R c,opt. from internal (electron Coulomb collisional) pressure. The electron Coulomb collision length for electron number density n c,e and temperature T c,e is (Richardson 2019) λ c,C 1.1 × 10 4 cm × T 2 c,e /n c,e with the Coulomb parameter ln Λ = 20 (Lang 1999). During the early evolution of the chunk we have λ c,C << R c . After the chunk enters the optically thin state, hadronic collisions continue to heat it up. As shown in Appendix A, a combination of hadronic collisions with the ambient medium and thermalization due to Coulomb collisions sees the chunk's radius expand until it becomes collisionless when R c = λ c,C . At this stage, its interaction with the ambient plasma (ICM) triggers inter-penetrating beam-plasma instabilities thus yielding particle bunching and coherent synchrotron emission with observed properties (e.g. frequency, duration and fluence) very similar to those of FRBs. Interaction with the ambient plasma: the relevant instabilities We use results from Particle-In-Cell(PIC) and laboratory studies of instabilities in inter-penetrating plasmas, to identify the relevant plasmas: • The background (e − , p + ) plasma: is the collisionless ionized chunk material dissociated into hadronic constituents during its early evolution when interacting with the ambient medium in the close vicinity of the QN site. When the chunk becomes collisionless, its radius, baryon number density and temperature are (see Eqs. (A8), (A9) and ((A10) in Appendix A): R cc ∼ 10 15 cm, n cc ∼ 10 cm −3 and T cc ∼ 0.1 keV, respectively. Here, the subscript "cc" stands for "collisionless chunk" defining the start of the collisionless phase. This occurs at time t obs. cc 2.6 days × (1 + z)f (θ c ) × (m c,22.3 κ c,−1 ) 1/5 /(σ 3 HH,−27 Γ 11 c,2.5 n 3 amb.,−3 ) 1/5 after the QN (Eq.(A12)); • The (e − , p + ) plasma beam: is the ionized ambient medium (e.g. ICM) incident on the QN chunks as they travel. Its baryon number density is n ns amb. in the NS frame. The parameters that define the regimes of collisionless instabilities are: • Ultra-relativistic motion: Γ c >> 1; • Density ratio: The beam (ambient medium) to background plasma (chunk) baryon number density ratio in the chunk frame is α cc = Γ c n ns amb. With n cc ∼ 10 cm −3 (see Eq. 
(A9)), one has B 2 cc /8π << n cc m H c 2 when the chunk becomes collisionless, effectively becoming a non-magnetized plasma when experiencing the inter-penetrating instabilities discussed below. The above parameter ranges imply that at the onset of the collisionless stage, the Buneman instability (BI) dominates the dynamics (e.g. Table 1 and Figure 5 in Bret 2009). The BI induces an anisotropy in the chunk's electron temperature distribution, triggering the thermal Weibel instability (WI; Weibel 1959), which has the effect of isotropizing the temperature. The thermal WI requires only a temperature anisotropy to exist and is beam-independent. The Weibel filamentation instability (FI) on the other hand requires a beam to exist (Fried 1959). However, the FI dominates only when α cc ∼ 1 (see Figure 5 in Bret 2009), which is not the case here because α cc << 1 as expressed in Eq. (8). The beam (i.e. the ICM plasma) triggers the longitudinal BI (with wave vector aligned with the beam). This creates the needed anisotropy since the BI yields efficient heating of electrons in the longitudinal direction (parallel to the beam). The scenario is a parallel plasma temperature which exceeds the perpendicular plasma temperature, allowing the thermal WI to act (even in the weak anisotropy case). During the development of the WI, the beam continues to feed the BI by continuous excitation of electrostatic waves. These two instabilities are discussed in more detail below. We define β = v /c and β ⊥ = v ⊥ /c, where c is the light speed, as the chunk electron's speed in the direction parallel and perpendicular to the beam, respectively. When a QN chunk becomes collisionless, it has β = β ⊥ = β cc ∼ 10 −2 (Eq. (A11)): • The Buneman Instability (BI) is an electrostatic instability (i.e. excitations of electrostatic waves). It is an electron-ion two-stream instability caused by the resonance between the electron plasma oscillation of the chunk electrons and proton plasma oscillation of the ambient plasma (Buneman 1958(Buneman , 1959. In our case, it arises when the relative drift velocity between the beam (i.e. ICM) electrons and the plasma (i.e. chunk) ions exceeds the chunk's electron thermal velocity. Its wave vector is parallel to the beam propagation direction and generates stripe-like patterns (density stripes perpendicular to the beam; e.g. Bret et al. 2010). The BI gives rise to rapid electron heating (e.g. Davidson 1970Davidson , 1974Hirose 1978) by transferring a percentage of the beam's kinetic energy into thermal (electron) energy of the background plasma (here the QN chunk) by turbulent (electric field) heating. The end result is an increase in β with β ⊥ unchanged. The wavelength of the dominant mode is where ν p,e = (4πe 2 n cc,e /m e ) 1/2 (9 kHz) × n 1/2 cc is the non-relativistic electron plasma frequency of the chunk and n cc,e = n cc the chunk's electron density; m e and e are the electron's mass and charge, respectively. The e-folding growth timescale is where m p is the proton mass. The above is much shorter than the chunk's crossing time R cc /c ∼ 3.3 × 10 4 s × R cc,15 allowing plenty of time for the BI to grow and saturate locally throughout the collisionless chunk. BI heating occurs by transferring beam electron energy to heating chunk's electrons. BI saturation occurs much before the beam kinetic energy is depleted because of trapping of electrons by turbulence; i.e. BI saturates at a particular electric field (e.g. Hirose 1978). 
The heat gain by the chunk's electrons is Q BI = (ζ BI Γ c m e c 2 ) × (A cc Γ c n ns amb. c), where A cc = πR 2 cc , or Q BI 7.6×10 35 erg s −1 ×ζ BI,−1 Γ 2 c,2.5 R 2 cc,15 n ns amb.,−3 , (12) expressed in terms of the BI saturation, parameter ζ BI (here free). It is the fraction of the electrons kinetic energy (in the beam) converted to an electrostatic field and subsequently to heating the chunk's electrons (to increasing β ). At saturation, the electron energy gain is ∼ 10% of the beam electron kinetic energy; the protons energy gain is much less than that of chunk electrons (e.g. Dieckmann et al. (2012); Moreno et al. (2018)). • The thermal Weibel Instability (WI) is an electro-magnetic instability which occurs in plasmas with an anisotropic electron temperature distribution (Weibel 1959;see Fried 1959 for Weibel FI). Its wave vector is perpendicular to the high temperature axis which corresponds to the beam propagation direction induced by the BI heating. The WI can efficiently generate magnetic fields. The corresponding currents are in the direction parallel to the beam with the resulting magnetic field perpendicular to it. As the BI accelerates electrons (increasing β ), the WI heats up the chunk's electrons via particle scattering by the generated magnetic field, accelerating them in the transverse direction (increasing β ⊥ ), as to reduce the BI-induced thermal anisotropy. The WI was studied theoretically in the nonrelativistic and relativistic regimes (Weibel 1959;Fried 1959;Yoon & Davidson 1987; see also Medvedev & Loeb 1999;Gruzinov 2001) and numerically using PIC simulations (e.g. Kato 2007;Spitkovsky 2008;Nishikawa et al. 2009). Its key phases which are of relevance to our model for FRBs are: (i) Electron-WI (e-WI): Because of the small inertia, chunk electrons dominate the dynamics setting the characteristic correlation length of magnetic field and its growth rate. The dominant wavelength in the non-relativistic regime (with γ = 1) is given in Appendix B and is: where we defined β WI = β ⊥ /β < 1. From Eq. (B18), we must have β > 2 √ 2β ⊥ for the WI to be triggered. Hereafter we set β = 10β ⊥ (i.e. β WI = β ⊥ /β = 0.1 as the fiducial value) with β ⊥ = β cc ∼ 10 −2 given by the initial conditions when the chunk first becomes collisional (i.e. Eq.(A11)). The WI current filament structures have a transverse width of the order of λ e−WI and are elongated in the beam's direction (see Figure 2). This dominant mode grows on an e-folding timescale of In the linear regime we estimate the saturation time of the e-WI, t e−WI,s , by setting B e−WI,s = B cc e (te−WI,s/te−WI) with B 2 e−WI,s /8π ∼ n cc m e c 2 the magnetic field strength at saturation. This is equivalent to writing ν B ∼ ν p,e at e-WI saturation with here ν B = eB e−WI,s /m e c, the electron cyclotron frequency at e-WI saturation. We get t e−WI,s 21.2 + ln n 1/2 cc,1 with B cc given in Eq. (9); (ii) Proton-WI (p-WI): After the e-WI stage, and still in the linear regime, follows the p-WI stage which grows more slowly than the e-WI, on timescales t p−WI = m p /m e t e−WI . The magnetic field is further amplified to a saturation value B p−WI,s = m p /m e B e−WI,s (Bret et al. 2016 and references therein). The saturation time t p−WI,s of the p-WI phase is found by setting B p−WI,s = B e−WI,s × e (tp−WI,s/tp−WI,) . 
This gives t p−WI,s = (m p /m e ) 1/2 ln ( m p /m e ) × t e−WI , or, (iii) Filament merging (m-WI): In the nonlinear regime, following the saturation of the p-WI stage, the filaments start merging and grow in size increasing λ e−WI . The merging is a result of the attractive force between parallel currents (Lee & Lampe 1973;Frederiksen et al. 2004;Kato 2005;Medvedev et al. 2005;Milosavljevic & Nakar 2006 where ν p,p = m p /m e ν p,e is the proton plasma frequency and ζ m−WI = 10 2 a parameter which allows us to adjust the merging timescale. The time evolution of the filament size we consider to be a power law with λ F (0) = λ e−WI the filament's transverse size during the linear regime and δ F > 0 (simulations suggest δ F ∼ 0.76; e.g. Takamoto et al. (2019)). Hereafter we adopt δ F = 1 as our fiducial value; (iv) Proton trapping and shock formation: The Weibel shock occurs when the protons are trapped by the growing filaments; i.e. when the filament size becomes of the order of the beam's proton's Larmor radius. The shock quickly converts the chunk's kinetic energy to internal energy by sweeping ambient protons leading to full chunk slowdown and shutting off the BI-WI process. We close this section by discussing a few points: • Table 1 lists the parameters related to the BI and WI instabilities and the fiducial values we adopted in this work. For the BI we have ζ BI which is the percentage of the beam's electron energy (in the chunk fame) converted by the BI to heating the chunk electrons. The WI-related parameters are: (i) β WI = β ⊥ /β , the ratio of transverse to longitudinal thermal speed of electron chunks (Eq. (13)) at the onset of the WI; (ii) ζ m−WI the filament merging characteristic timescale (Eq. (17)) and; (iii) δ F the power index of the filament merging rate as given in Eq. (18); • We adopt β = 10β ⊥ (i.e. β WI = β ⊥ /β = 0.1) during the linear stages of the WI instability which keeps λ e−WI constant. While β grows due to the BI, the WI increases β ⊥ accordingly, as to the keep β WI constant. However, because λ BI /λ e−WI = α cc /β 1/2 WI << 1, the BI deposits energy (i.e. heats up and accelerates electrons) in layers that are much narrower than those of the WI; • With t BI ∼ 17.8/ν p,e and t e−WI = (β WI /β ⊥ )/ν p,e ∼ 10/ν p,e being of the same order, the BI heat deposited within λ BI is quickly mixed into much larger scales given by λ e−WI ; • The Oblique mode instability (when both longitudinal and transversal waves components are present at the same time) dominates when α cc > (m e /m p )Γ c (e.g. Bret 2009). In our case this translates to n ns amb. > (m e /m p )n cc 0.5 cm −3 × n cc,1 . Since the ICM's density is n ns amb. << 1, the BI will always dominate; • The BI heat is partly converted to amplifying the magnetic field (i.e. to magnetic energy density B 2 p−WI,s ), partly to turbulence with energy density δB 2 p−WI,s and, to currents. During filament merging, electrons are accelerated by dissipation of turbulent energy and currents while the WI saturated magnetic field is preserved (Takamoto et al. 2018). The BI energy harnessed during the linear regime is E BI ∼ Q BI t p−WI,s . With Q BI given by Eq. (12) and t p−WI,s given by Eqs. (16) and (14), respectively, we get E BI 4.4 × 10 34 ergs × ζ BI,−1 × β WI,−1 β cc,−2 × × Γ 2 c,2.5 R 2 cc,15 n amb.,−3 n 1/2 cc,1 ; • The top panel in Figure 3 is a schematic representation of the evolution of β during the linear and non-linear WI stages (β ⊥ = 0.1β follows the evolution of β ). 
The increase in β is due to the BI and proceeds until the end of the p-WI stage, when the magnetic field saturates. At this point the BI excitations are converted entirely to heating electrons with the consequence that β increases rapidly following p-WI saturation. The BI shuts off when γ ∼ 2 because it acts only when the relative drift between the beam electrons and the chunk protons (here c) exceeds the chunk's electrons thermal speed. Despite the BI shutting-off, the electrons continue to be accelerated by magnetic turbulence and by current dissipation during filament merging yielding γ ∼ γ ⊥ ∼ 10 (Takamoto et al. 2018). As discussed below, the increase in electron Lorentz factor during the merging phase, provides conditions favorable for coherent synchrotron emission (CSE) to occur in the WI-amplified magnetic field layers of the chunk. COHERENT SYNCHROTRON EMISSION (CSE) 3.1. Bunching As described in Appendix C, electrons can emit coherently if the characteristic wavelength of the incoherent synchrotron emission (ISE) exceeds the length of the bunch, λ b . Specifically, intense CSE occurs when the corresponding frequency ν CSE = c/λ b is much less than the characteristic incoherent synchrotron frequency Here ν B is the cyclotron frequency and γ CSE > 2 the Lorentz factor of the relativistic electrons in the bunch at CSE trigger. During the linear phase of the WI (up to p-WI saturation), CSE is unlikely to occur because the BI heating cannot yield relativistic electrons (see top panel in Figure 3). Furthermore, bunching cannot be induced by the BI during filament merging because the instability does not grow if the background (i.e. chunk) electrons are so hot (γ CSE > 2) that their thermal velocity spread exceeds the drift velocity relative to the beam (i.e. streaming ambient) ions. Instead, bunching is related to (i.e. entangled with) the WI filaments and CSE is likely to be triggered during filament merging when electrons are accelerated by magnetic turbulence and current dissipation to γ CSE >> 1 as outlined in the previous section. The likely bunch geometry is described in Appendix C.1, which shows that a bunch resembles a cylindrical shell around the Weibel filament extending across the length of the chunk (see Figure 2). Specifically, we expect bunching (and subsequently the CSE) to occur in the periphery around filaments and not inside filaments where the currents reside and the magnetic field is weaker. The details of the bunch geometry is however not crucial in our model. What is important, is that all of the BI heating is released as CSE by the bunches regardless of their size and geometry during filament merging phase (see §3.3). Frequency and duration With ν B ∼ m p /m e ν p,e after p-WI saturation and during the filament merging phase, we calculate the chunk's magnetic field strengthen to be and the characteristic ISE frequency to be ν ISE = 3/2 × γ 2 CSE m p /m e ν p,e . The CSE frequency, ν CSE (t) = c/λ b (t), evolves in time due to the scaling of the bunch size λ b (t) with that of the WI filament λ F (t) which is expressed in Eq. (18). We find the CSE frequency to decrease in time during the filament merging phase at a rate given by with δ F > 0 and ν CSE (0) = c/λ e−WI ; λ e−WI given by Eq. (13) is the filament's transverse size during the linear phase. Because ν CSE << ν ISE we set the initial (also the maximum) CSE frequency as with δ CSE << 1. The CSE frequency decreases in time until it reaches the chunk's plasma frequency ν p,e shutting-off emission. 
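A small numerical illustration of this drift is given below, assuming the power-law form ν_CSE(t) = ν_CSE(0)(1 + t/t_m-WI)^(-δ_F) quoted in the next passage with the fiducial δ_F = 1, and the chunk plasma frequency ν_p,e ≈ 9 kHz × n_cc^(1/2) from Section 2.3. The initial GHz-range frequency and the merging timescale t_m-WI are treated here as free inputs rather than derived from the paper's expressions.

```python
import numpy as np

def nu_cse(t, nu0, t_merge, delta_f=1.0):
    """CSE peak frequency during filament merging: nu0 * (1 + t/t_merge)**(-delta_f)."""
    return nu0 * (1.0 + t / t_merge) ** (-delta_f)

def band_crossing_time(nu0, band_lo, band_hi, t_merge, delta_f=1.0):
    """Time the drifting CSE peak spends between band_hi and band_lo (band_lo < band_hi < nu0)."""
    t_enter = t_merge * ((nu0 / band_hi) ** (1.0 / delta_f) - 1.0)
    t_exit = t_merge * ((nu0 / band_lo) ** (1.0 / delta_f) - 1.0)
    return t_exit - t_enter

n_cc = 10.0                      # chunk electron number density, cm^-3 (fiducial)
nu_pe = 9.0e3 * np.sqrt(n_cc)    # chunk plasma frequency, Hz (~28 kHz): hard low-frequency cut-off
nu0 = 2.0e9                      # assumed initial CSE frequency (GHz range, per the abstract)
t_merge = 1.0e-3                 # assumed merging timescale t_m-WI, s (free input in this sketch)

print(nu_cse(np.array([0.0, t_merge, 10 * t_merge]), nu0, t_merge) / 1e9)  # peak in GHz at t = 0, t_merge, 10*t_merge
# Drift time through a 1.2-1.6 GHz (Parkes/ASKAP-like) band vs. a 400-800 MHz (CHIME-like) band;
# observer-frame durations carry an extra (1+z)/D(Gamma_c, theta_c) viewing-angle factor (see the text).
print(band_crossing_time(nu0, 1.2e9, 1.6e9, t_merge))   # ~0.4 ms
print(band_crossing_time(nu0, 0.4e9, 0.8e9, t_merge))   # ~2.5 ms
print(nu_pe)                                             # emission shuts off once the peak reaches this
```

In this toy setting the drifting peak spends more time in the sub-GHz band than in the GHz band, one ingredient (together with the viewing-angle Doppler factor) of the model's prediction that CHIME-band detections last longer than Parkes or ASKAP ones.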
The range in CSE frequency from a QN chunk is thus ν_p,e < ν_CSE(t) ≤ ν_CSE(0) (Eq. (23)). The duration of CSE is found from ν_p,e = ν_CSE(0) × (1 + Δt_CSE/t_m−WI)^−δF, giving us

Δt_CSE = t_m−WI × [ (ν_CSE(0)/ν_p,e)^(1/δF) − 1 ] ,    (24)

with δ_CSE = 0.1 and γ_CSE = 10 the fiducial values listed in Table 1.

3.3. Luminosity

Most of the BI-induced heat is harnessed during the linear regime and up until the start of filament merging. Once the electrons' thermal energy becomes relativistic (with γ_CSE > 2), the BI shuts off. Effectively, the electrostatic energy deposited by the BI inside the chunk during the linear regime is E_BI ≈ Q_BI t_p−WI,s (see Eq. (19)), where t_p−WI,s is the p-WI saturation timescale. As mentioned in §2.3, this energy is converted by the WI to: (i) magnetic field amplification, with B_p−WI,s ∼ (m_p/m_e)^1/2 B_e−WI,s at saturation; (ii) magnetic turbulence; (iii) currents. Filament merging converts about 2/3 of the BI energy (by turbulence acceleration and current dissipation) to accelerating electrons (e.g. Takamoto et al. 2018). The energy gained by the chunk electrons during filament merging is re-emitted as CSE luminosity, expressed as L_CSE ∼ (2/3) E_BI/t_m−WI (Eq. (25)).

3.4. Summary

Illustrated in the lower panel of Figure 3 are the key phases of the BI-WI episode. The depicted key frequencies are: (i) The electron plasma frequency (ν_p,e = (4π n_cc e²/m_e)^1/2), which remains constant during the entire BI-WI process. This also sets the minimum observed CSE frequency as ν^obs._p,e = D(Γ_c, θ_c) ν_p,e/(1 + z); (ii) The electron cyclotron frequency (ν_B = eB_c/m_e c, with B_c = B_cc at the start of the BI-WI process). It increases in time as B_c increases, first during the e-WI phase, reaching saturation at B_c = B_e−WI,s when the cyclotron frequency is ν_B ∼ ν_p,e. During the p-WI phase, the magnetic field grows further to a saturation value of B_p−WI,s = (m_p/m_e)^1/2 B_e−WI,s, when ν_B ∼ (m_p/m_e)^1/2 ν_p,e at time t_p−WI,s; (iii) The BI shuts off in the early stages of the filament merging phase once the chunk's electrons are so hot that their thermal velocity spread exceeds their drift velocity relative to the beam's ions (when γ_CSE > 2); during filament merging, electron acceleration is due to dissipation of magnetic turbulence and currents (see §2.3); (iv) Once CSE is triggered, electrons in bunches cool rapidly, with the cooling timescale of a bunch t_b(t) << Δt_CSE (see Appendix C.1). Each bunch emits once during filament merging, with bunches emitting uniformly spaced in time during this phase; (v) Beyond the CSE phase, the filaments continue to grow in size until they are of the order of the beam protons' Larmor radius. Once the protons are trapped, the Weibel shock develops, slowing down the chunk drastically (in a matter of seconds in the observer's frame; see Eq. (49)) and putting an end to the BI-WI process.

The chunk's number density, n_cc, and radius, R_cc, when it becomes collisionless are given in Appendix A and summarized in Table 2. In the observer's frame, the CSE frequency decreases in time, due to filament (and thus bunch) merging, as ν^obs._CSE(θ_c, t^obs.) = ν^obs._CSE,max.(θ_c) × (1 + t^obs./t^obs._m−WI(θ_c))^−δF, where ν^obs._CSE,max.(θ_c) is given by Eq. (27) and t^obs._m−WI(θ_c) = (1 + z) t_m−WI/D(Γ_c, θ_c) is the characteristic filament merging timescale in the observer's frame (Eq. (29)). This translates to a range in observed CSE frequency of ν^obs._p,e(θ_c) < ν^obs._CSE(t) ≤ ν^obs._CSE,max.(θ_c) (Eq. (30)).

Figures 4 and 5 are schematic representations of frequency drifting in time through the detector's band in our model. Illustrated are the cases of a flat spectrum (Figure 4) and of a power-law spectrum (Figure 5). The vertical bands portray the fact that at any given time during filament merging, CSE emerges from the chunk at frequencies ν_p,e < ν_CSE(t) ≤ ν_CSE(0).
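To make the drift law concrete, the following minimal Python sketch (our illustration, not part of the original analysis) evaluates Eq. (21) and the shut-off time obtained by inverting ν_p,e = ν_CSE(0) × (1 + Δt_CSE/t_m−WI)^−δF. The frequencies used are placeholders chosen only to show the δ_F dependence, with the fiducial δ_F = 1 and the PIC-suggested δ_F ≈ 0.76.

```python
import numpy as np

def nu_cse(t, nu0, t_merge, delta_F=1.0):
    """CSE frequency during filament merging, Eq. (21)-style power-law decay."""
    return nu0 * (1.0 + t / t_merge) ** (-delta_F)

def dt_cse(nu0, nu_pe, t_merge, delta_F=1.0):
    """Shut-off time: invert nu_pe = nu0 * (1 + dt/t_merge)**(-delta_F), cf. Eq. (24)."""
    return t_merge * ((nu0 / nu_pe) ** (1.0 / delta_F) - 1.0)

# Illustrative (placeholder) numbers: initial CSE frequency of 1 GHz and a
# chunk plasma frequency ~ 9 kHz * sqrt(n_cc) with n_cc = 10 cm^-3.
nu0 = 1.0e9                       # Hz, nu_CSE(0) = delta_CSE * nu_ISE
nu_pe = 9.0e3 * np.sqrt(10.0)     # Hz
t_merge = 1.0                     # merging timescale, arbitrary units

for dF in (0.76, 1.0):            # delta_F ~ 0.76 from PIC; 1.0 fiducial
    print(dF, dt_cse(nu0, nu_pe, t_merge, dF) / t_merge)
```

A smaller δ_F lengthens the drift dramatically, which is why δ_F enters the fitted burst durations below.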
Eventually, ν_CSE(t) drops to the chunk's plasma frequency, at which point CSE cannot escape the chunk. For the steep power-law spectrum case (Figure 5), CSE is detectable mostly around the peak frequency ν_CSE(t) instead of extending all the way down to the plasma frequency, making the frequency bands at a given time narrower than in the flat spectrum case.

Table 2 lists the equations giving us the properties (number density, radius and thermal speed) of a typical chunk from ICM-QNe (QNe occurring in the ICM) when it becomes collisionless. This occurs at time t^obs._cc since the QN (see Eq. (A13)). Also listed are the properties (frequency, duration and fluence) of the resulting FRBs.

For a detector with bandwidth Δν^det. = ν^det._max. − ν^det._min. (ν^det._max. > ν^det._min.; see Table 3): if ν^obs._CSE,max.(θ_c) > ν^det._max. and ν^obs._CSE,min.(θ_c) < ν^det._min., the duration of the CSE signal is set by the time it takes the emission to drift through the detector's band; it is derived in Eq. (D10). The duration of the FRB is the minimum between the CSE duration proper and the drifting time through the detector's band; for example, if ν^obs._CSE,max.(θ_c) falls inside the detector's band, the duration is set by the CSE duration proper. The fluence involves the factor G(θ_c, δ_F, 0) (given by Eq. (D22)), which varies from a value of a few to a few thousand depending on the detector's band (see Table 4).

Also listed in Table 2 is the timescale between repeats, Δt^obs._repeat (emission from two separate chunks), found by setting f(θ_c) = Δf^chunks_S−P ≈ (1.6/π) × Γ_c,2.5²/N_c,6 (Eq. (7)) into t^obs._cc (Eq. (A13)), giving Eq. (35). The time delay between two successive CSE bursts (i.e. two emitting QN chunks) for a given ICM-QN depends mainly on the total number of chunks per QN, N_c. For a given QN, and for fiducial parameter values, the typical time between repeats is constant and is of the order of days in the observer's frame.

Table 5 shows examples of FRBs from ICM-QNe obtained using the equations in Table 2 for sources at z = 0.2 (corresponding to a luminosity distance d_L ≈ 1 Gpc). Because f(θ_c) = 1 + (Γ_c θ_c)² (Eq. (3)) is controlled by N_c (θ_c ∝ 1/N_c^1/2) and Γ_c, we chose to vary these two parameters and show the implication of a range in viewing angles on FRB detections in our model. Chunks with ν^obs._CSE,max.(θ_c) > ν^det._max. will eventually be detected when the frequency drifts into (i.e. enters) the detector's band. The drift ends when the CSE frequency reaches max(ν^det._min., ν^obs._CSE,min.(θ_c)), with ν^obs._CSE,min.(θ_c) = ν^obs._p,e(θ_c). For fiducial parameter values, the plasma frequency ν^obs._p,e(θ_c) is below the minimum frequency of our listed detectors, which implies that CSE drifts through the band. The fluences per detector are given in Table 5, with the shaded cells showing the values within the detector's sensitivity (listed in Table 3).

4.1. Non-repeating vs repeating FRBs

In our model, FRBs are intrinsically all repeaters because each chunk gives an FRB beamed in a specific direction. Observed single (i.e. non-repeating) FRBs are an artifact of the detector's bandwidth and sensitivity. Consider a detector with maximum and minimum frequencies ν^det._max. and ν^det._min., respectively, and a fluence sensitivity threshold F^det._min.. The two conditions which must be simultaneously satisfied for repeats to occur are ν^obs._CSE,max.(θ̄_S) > ν^det._min. and a fluence above F^det._min. (Eq. (36)), where θ̄_S is the average viewing angle for secondary chunks (see Eq. (2)). Box "A" in Table 5 shows an example of FRBs where only a few detectors can see the primary chunk (the shaded cells). In the Box "A" example, while the condition ν^obs._CSE,max.(θ̄_S) > ν^det._min.
is satisfied, the fluence is below threshold for most detectors. Box "B" shows the case where only CHIME sees repeats, since the condition ν^obs._CSE,max.(θ̄_S) > ν^det._min. in Eq. (36) is violated by the secondary chunks for most detectors (the "N/A" cells). This is also the reason why G(θ_c, δ_F, 0) = 0 in Table 4 for N_c = 10^5 and Γ_c = 10^2.5.

In general, "non-repeats" occur for f(θ_c) >> 1, which is the case for high Γ_c (≥ 10^2.5) and/or low N_c (< 10^5.5), as in Boxes "A" and "B". In this regime, the maximum CSE frequency is independent of the Lorentz factor and depends on the viewing angle as θ_c^−2, while the fluence depends strongly on the viewing angle, as θ_c^−8. The average viewing angles of the secondary and tertiary chunks, as derived in Eq. (2), can be expressed in terms of the primary chunk as θ̄_S ≈ (7/3)θ̄_P and θ̄_T ≈ 6θ̄_P, with the consequence that ν^obs._CSE,max.(θ̄_S) = (3/7)² ν^obs._CSE,max.(θ̄_P) and ν^obs._CSE,max.(θ̄_T) = (1/6)² ν^obs._CSE,max.(θ̄_P), while the tertiary fluence is suppressed as (1/6)^8 F(θ̄_P, δ_F, 0); this demonstrates that only the primary chunk would fall within most FRB detector bands and above the sensitivity threshold. Boxes "A" and "B" in Table 5 show that the frequency and the fluence of the secondary and tertiary chunks in the non-repeating FRBs do follow the θ_c^−2 and θ_c^−8 dependencies, respectively. In general, the scaling follows the more general form of the dependencies, given as f(θ_c)^−1 and f(θ_c)^−4, respectively.

Repeating FRBs are obtained for relatively lower values of f(θ_c) for the secondary and tertiary chunks, which is the case for higher N_c values. Boxes "D" and "E" in Table 5 show that most detectors would see the secondary chunks, with a few detectors capable of detecting also the tertiary chunks (shaded cells). Boxes "C" and "F" correspond to the low-Γ_c scenario (in this case 10²), with the maximum CSE frequency (ν^obs._CSE,max. ∝ Γ_c^11/5 for f(θ_c) ∼ 1) being in the sub-GHz regime, thus eliminating ASKAP, Parkes and Arecibo detections. In this regime, CHIME can detect many repeats for a range in N_c.

4.2. Simulations

Beyond the fiducial parameter settings discussed above, a parameter survey was performed by simulating the evolution of the QN chunks starting from the moment when they become collisionless within the ICM. Here, and in all subsequent runs in this paper: (i) We assume that at the time of the QN explosion, all chunks are equally distributed on the surface of the NS. We distribute the chunks on the surface of a unit sphere using the "Regular Placement" algorithm described by Deserno (2004): the chunks are placed evenly along rings of constant latitude, and the rings are evenly spaced over the surface of the sphere. The simulation then chooses a random direction vector from which to view the sphere and calculates the θ_c angle of each chunk based on this vector (see the sketch below); (ii) The zero time of arrival, t^obs._OA = 0, is set by the chunk which has the minimum value of t^obs._cc. Effectively, the times of arrival of subsequent chunks, t^obs._OA(θ_c), are recorded with respect to the signal from the first detected chunk. The time delay between successive chunks we define as Δt^obs._OA, while we define Δθ_C as the difference between the current chunk's θ_c and that of the previous one that arrived; (iii) We fix the QN ejecta's kinetic energy to E_QN = 10^51 erg, which fixes the chunk's mass for a given N_c and Γ_c: m_c = E_QN/(N_c Γ_c c²); (iv) For the non-constant chunk mass simulations, we sample the chunk mass from a Gaussian distribution with mean mass m̄_c = E_QN/(N_c Γ_c c²) and standard deviation σ_m = 1.0.
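The placement step in item (i) can be sketched as follows. This is our reading of the "Regular Placement" construction of Deserno (2004) (equal-area rings of constant latitude) followed by a random viewing direction; the function names and the scaled-down chunk count are our own choices.

```python
import numpy as np

def regular_placement(n):
    """Approximately n points spread evenly on the unit sphere,
    placed along rings of constant latitude (Deserno 2004)."""
    pts = []
    a = 4.0 * np.pi / n            # target area per point
    d = np.sqrt(a)
    m_theta = int(round(np.pi / d))
    d_theta = np.pi / m_theta
    d_phi = a / d_theta
    for m in range(m_theta):
        theta = np.pi * (m + 0.5) / m_theta
        m_phi = max(1, int(round(2.0 * np.pi * np.sin(theta) / d_phi)))
        for k in range(m_phi):
            phi = 2.0 * np.pi * k / m_phi
            pts.append([np.sin(theta) * np.cos(phi),
                        np.sin(theta) * np.sin(phi),
                        np.cos(theta)])
    return np.array(pts)

rng = np.random.default_rng(42)
chunks = regular_placement(10**4)        # scaled-down N_c for illustration
los = rng.normal(size=3)
los /= np.linalg.norm(los)               # random line-of-sight direction
theta_c = np.arccos(np.clip(chunks @ los, -1.0, 1.0))  # viewing angle per chunk
print(np.sort(theta_c)[:7])              # primary chunk and nearest secondaries
```

The smallest θ_c plays the role of the primary chunk; the next rings of angles correspond to the secondary and tertiary chunks discussed in §4.1.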
Before focusing on repeating FRBs, we show a typical "non-repeating" FRB in Table 7. As explained in §4.1, single FRBs are detector-dependent and in general occur when one of the conditions in Eq. (36) is violated, which happens mostly when f(θ_T) >> f(θ_S) >> f(θ_P) >> 1. This is the case when considering fewer QN chunks (typically N_c = 10^5) and higher Lorentz factors (typically Γ_c = 10^3), while other parameters are kept to the fiducial values listed in Table 1. On the other hand, repeating FRBs occur for lower values of f(θ_c) for the secondary and tertiary chunks. A first example is shown in Table 8, with a repeat time of days. Table 9 shows an example with time delays between bursts ranging from minutes to a few hours and to days, which requires a wide distribution of the chunk mass m_c.

Our simulations show that on average CHIME detects 5 times more FRBs than ASKAP and Parkes. This is due to the fact that the CSE frequency in our model decreases with an increase in f(θ_c) (i.e. with higher viewing angle θ_c), making CHIME more sensitive to secondary chunks (i.e. it sees a bigger solid angle) for a given QN. The number of chunks N^obs._ν (i.e. FRBs per QN) detectable at any frequency is given in Appendix D.1 and expressed in Eq. (D9). Applying it to the CHIME and ASKAP detectors, for example, we get a ratio of about 5 (Eq. (39)), independently of Γ_c (i.e. for a given QN), in agreement with the simulation results; the subscript "p" refers to the band's peak frequency (see Table 3).

Past CHIME's band, the FRBs will drift into LOFAR's band. In addition, emission from chunks at high viewing angles will be visible to LOFAR. Using Eq. (38) to compare LOFAR (high-band antenna) to CHIME, we find (Eq. (40)) that LOFAR should detect on average 5 times more bursts than CHIME from a given QN. Our simulations do not yield LOFAR detections often, except in a few cases when the chunk is massive and very close to the observer's line-of-sight, such as in Tables 10-13, with LOFAR's fluence very close to the threshold of 10^3 Jy ms (see also cases in Table 5). This is understandable because, for a given QN, an f(θ_c) ∼ 100 is necessary for the CSE frequency to fall within LOFAR's band. However, these high f(θ_c) values yield a fluence (∝ f(θ_c)^−4) below LOFAR's sensitivity limit. The ratio given in Eq. (40) is likely to be reduced by: (i) dispersion effects (which are more pronounced at MHz frequencies); (ii) the Earth's ionosphere, which affects signals in the tens of MHz range.

4.3. Frequency drifting

Frequency drifting is a consequence of the decrease of the CSE frequency in time during filament merging, ν^obs._CSE(θ_c, t^obs.) = ν^obs._CSE,max.(θ_c) × (1 + t^obs./t^obs._m−WI(θ_c))^−δF, and lasts for Δt^obs._CSE (Eq. (24)). Our fits (see Table 6) to drifting in these FRBs yield viewing angles suggestive of secondary and tertiary chunks (θ̄_S ≈ 0.008/N_c,5^1/2 and θ̄_T ≈ 0.02/N_c,5^1/2; see Eq. (2)), except for the FRB 121102/GB-BL burst, which points at a primary chunk (θ̄_P ≈ 0.004/N_c,5^1/2). We also require ζ_m−WI to be of the order of a few thousand, which suggests slower filament merging timescales. These two effects combined give longer FRB durations, making these FRBs easier to resolve in time.

4.4. Waterfall plots

The analytical and normalized band-integrated flux density is given by Eq. (D14). Figure 7 shows examples of the band-integrated flux in our model for the CHIME detector when ν^obs._CSE,max.(0) = 2ν^det._max. and ν^obs._p,e(0) = ν^det._min./2. The three different curves show different filament merging rates, defined by the parameter δ_F (see Eq. (18)).
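A schematic waterfall can be generated directly from the drift law. The sketch below (ours) assumes the flat-spectrum case α_CSE = 0 with arbitrary flux normalization, and CHIME-like band edges standing in for Table 3: each time step emits between ν_p,e and ν_CSE(t), and the band-summed flux follows by summing pixels over frequency, mimicking the construction of Figures 7 and 8.

```python
import numpy as np

nu_det = np.linspace(400e6, 800e6, 64)     # detector channels (Hz), CHIME-like
t = np.linspace(0.0, 5.0, 200)             # time in units of t_m-WI
nu0, nu_pe, delta_F = 1.6e9, 2.0e8, 1.0    # nu_CSE(0)=2*nu_max, nu_pe=nu_min/2

nu_cse_t = nu0 * (1.0 + t) ** (-delta_F)   # drifting peak frequency

# Flat spectrum: unit flux in every channel with nu_pe < nu <= nu_CSE(t).
waterfall = ((nu_det[:, None] <= nu_cse_t[None, :]) &
             (nu_det[:, None] > nu_pe)).astype(float)

band_flux = waterfall.sum(axis=0)          # band-summed flux density vs time
print(band_flux[:10])
```

For a steep power-law spectrum one would instead weight each pixel toward ν_CSE(t), narrowing the occupied band at each time as in Figure 5.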
Figure 8 is the frequency-time ("waterfall") plot for the simulation shown in Table 8. Each pixel in the waterfall plot is the flux density, i.e. f_ν^obs.(θ_c, t^obs.) given in Eq. (D12), with L_CSE = (2/3) × E_BI/t_m−WI given by Eq. (25). To obtain the integrated flux density plot, we add up the flux in each pixel (i.e. over the detector's frequency band) along the vertical axis for each time, with f_ν^obs.(θ_c, t^obs.) = 0 when ν^obs._CSE(t) < ν^det._pixel. The resulting band(frequency)-summed flux density is shown in the upper sub-panels and matches the analytically derived one (see Appendix D.3 and Figure 7). For completeness, Figures 9 and 10 show waterfall plots for the repeating FRBs listed in Tables 10 and 11. Figure 11 shows an example where for all chunks the maximum CSE frequency falls within the detector's band (here CHIME); see Table 12 for the corresponding simulations. Our model can reproduce the cases portrayed in the upper panels of Figures 4 and 5.

Patchiness (i.e. gaps) in the frequency-time diagram during drifting (on millisecond timescales) has been observed. It may be a consequence of scintillation effects induced by the ambient medium, as suggested in the literature (Macquart et al. 2018) from the comparison of the bright nearby ASKAP FRBs to the dimmer, farther away Parkes FRBs (i.e. based on the DM-brightness relation; Shannon et al. 2018). However, there remains the possibility that the patchiness may be intrinsic to the chunk and may be a result of different parts of the chunk acting at different times. This is beyond the scope of this paper and will be explored elsewhere.

5. CASE STUDY

Overall, our model can reproduce the general properties of observed non-repeating and repeating FRBs. In this section, we focus particularly on FRB 180916.J0158+65 and FRB 121102.

5.1. FRB 180916.J0158+65

A year-long observation of FRB 180916.J0158+65 led to the detection of tens of bursts with a regular ∼ 16-day cycle, with bursts arriving within a 4-day phase window (CHIME/FRB Collaboration 2020). In our model, repetition is set by the angular separation between emitting chunks, which yields a roughly constant time delay between bursts (see discussion around Eq. (7)). Boxes A, B and C in Table 5 (i.e. for N_c = 10^5 and 10² ≤ Γ_c ≤ 10³) show that typical time delays between bursts within a repeating FRB are 12 days < Δt^obs._repeat < 20 days. The simulations use randomly spaced chunks rather than the simple honeycomb geometry. It is possible to view the QN such that we get FRBs from chunks arriving roughly periodically. An example is given in Table 13, with a ∼16-day period repeating FRB. A 4-day window (a "smearing" effect) can also be obtained by varying the chunk parameters, such as the mass and the Lorentz factor, and/or the ambient number density n^ns_amb. for a given QN.

5.2. FRB 121102

FRB 121102 was discovered with Arecibo (Spitler et al. 2014) and localized to a host at redshift z ∼ 0.1972. Its main properties include quiescent and active periods on month-long scales (Michilli et al. 2018), with hundreds of bursts so far detected (e.g. Gajjar et al. 2018; Hessels et al. 2019). It has been associated with a star-forming region in an irregular, low-metallicity dwarf galaxy (Bassa et al. 2017). The high RM measured in FRB 121102 (RM ∼ 10^5 rad m^−2; Michilli et al. 2018) sets it apart from other FRBs. Table 14 shows an example of an FRB from an ICM-QN in our model lasting for ∼ 20 years, reminiscent of FRB 121102.
This is obtained by setting a higher γ_CSE (here 40) and a low Γ_c (here 40) compared to the fiducial values listed in Table 1. A variation in chunk mass is necessary to obtain the variability in width and fluence seen in FRB 121102. We find that the unique properties of FRB 121102 mentioned above may be best explained in our model if we assume that the QN responsible for it occurred inside a galaxy. This would be the case for NSs with small kick velocities. For example, for a velocity of ∼ 10 km s^−1, the NS would have travelled only about a kilo-parsec in ∼ 10^8 years by the time it experiences a QN transition. Table 15 shows an example of a galactic FRB, lasting for ∼ 3 years, obtained by considering an ambient density of n^ns_amb. = 10^−2 cm^−3, representative of a galactic/halo environment. If the QN occurs in the vicinity of a star-forming region in the galaxy (i.e. probably rich in HII regions), as seems to be the case for FRB 121102, the CSE from the QN chunks would be susceptible to lensing, thus enhancing the number of bursts (Cordes & Chatterjee 2019). Lensing would "scramble" any regular cycle (i.e. the Δt^obs._repeat period) expected due to the spatial distribution of the QN chunks. An FRB from a galactic QN at low redshift would mean a sensitivity to more chunks at higher θ_c; i.e. a bigger solid angle is accessible to detectors. Finally, it may be possible that the high RM associated with FRB 121102 is intrinsic to the QN chunks. The rotation measure, RM = 0.81 ∫ n_e B_∥ dl rad m^−2 (Eq. (41)), becomes large for n_amb. = 10^−1 cm^−3, representative of the hot ISM component within galaxies (Cox 2005).

Rate

Assuming that the progenitors of ICM-QNe are old massive NSs, we can estimate the ICM-QN occurrence rate based on a simplified population model. We use a lognormal initial magnetic field distribution (with a mean of 12.5 and a standard deviation of 0.5 in log10 of the field in Gauss) and a normal distribution for the initial period (with a mean of 300 ms and a standard deviation of 150 ms); both of these distributions were taken from Faucher-Giguère & Kaspi (2006). We assume that ICM-QNe occur after a common nucleation timescale (in the core of the parent NSs) of ∼ 10^8 years. Slowly rotating, massive NSs are the most likely to experience quark deconfinement via nucleation and undergo a QN phase (i.e. becoming FRB candidates). We count only NSs which have birth periods greater than ∼ 100 ms and whose stellar progenitors had masses between 20-40 M_⊙. Integration of the initial mass function (IMF; Salpeter 1955) and the initial period distribution (assuming the period and magnetic field are independent) gives approximately 10% of all neutron stars as QN candidates in the ICM (i.e. as FRB progenitors). For a galactic core-collapse SN rate of ∼ 1/50 years and over 10^10 years, about ∼ 2×10^8 NSs would have formed. This would yield an approximate ICM-QN (i.e. FRB) rate of roughly 10% of the core-collapse SN rate.

FRBs from IGM-QNe

The maximum CSE frequency from QNe occurring in the IGM would be in the tens of MHz, which falls below most radio detectors/receivers except perhaps LOFAR's low-band antenna, for which ν^det._min. = 30 MHz (van Haarlem et al. 2013). Because f(θ_c) >> 1 for non-repeating FRBs (see §4.1), the maximum CSE frequency will fall below LOFAR's minimum frequency. Also, repeating FRBs (i.e. with low Γ_c) from IGM-QNe at high redshift would yield frequencies below LOFAR's band. Thus FRBs from IGM-QNe may not be detectable with current detectors. Besides the CSE frequency, which would likely fall below the LOFAR band, we also argue that IGM-QNe may not occur in nature.
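The argument developed next rests on simple kick-velocity travel distances; a minimal numeric check (ours), using the velocities and nucleation timescales quoted in the text:

```python
KM, YR, KPC = 1e5, 3.156e7, 3.086e21     # cgs conversion factors

def travel_kpc(v_kick_km_s, t_nuc_yr):
    """Distance covered by a NS before its QN, d = v_kick * t_nucleation."""
    return v_kick_km_s * KM * t_nuc_yr * YR / KPC

print(travel_kpc(300.0, 1e8))    # ~ 30 kpc: stays within an extended halo
print(travel_kpc(1000.0, 1e8))   # ~ 100 kpc: at most the edge of the galaxy
print(travel_kpc(300.0, 1e9))    # ~ 300 kpc: nucleation time needed to reach the IGM
```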
Isolated massive NSs in field galaxies (with halos extending up to ∼ 100 kpc or more) would need to travel long distances before they enter the IGM. For a NS with a typical kick velocity of 300 km s^−1, nucleation timescales of at least ∼ 10^9 years would be required for the NS to enter the IGM prior to the QN event. For typical quark nucleation timescales of ∼ 10^8 years (and a narrow nucleation timescale distribution), even NSs with a kick velocity of ∼ 10^3 km s^−1 would travel only about 100 kpc, reaching at most the edge of their galaxies. While we cannot with full certainty rule out FRBs from IGM-QNe, they seem unlikely. Instead, in field galaxies it is likely that FRBs would be associated with halo-QNe (see §5.2), meaning that in field galaxies old NSs would experience the QN phase (yielding FRBs) while still embedded in the halo.

Implications and Predictions

In no particular order, some predictions of our model include:

• Repeats vs "non-repeats": All FRBs are repeaters according to our model because every chunk emits an FRB beamed in a specific direction. Single (i.e. non-repeating) FRBs occur when only emission from the primary chunk is detected. Thus non-repeaters are a consequence of observing limitations, when emission from the secondary and tertiary chunks is either too faint (with a fluence below the sensitivity threshold) and/or when the corresponding frequency is below the detector's band (see Eq. (36));

• The halo/ICM low dispersion measure (DM): Recent studies (Caleb et al. 2019; Ravi 2019) concluded that FRB sources must repeat during their lifetime in order to account for the high FRB volumetric rate. This constraint is relaxed in our model given the low DM, and thus larger accessible volume, of the ambient medium (galactic halo, ICM, IGM) and our estimated rate of FRBs (∼ 10% of the core-collapse SN rate). Within uncertainties on the FRB rate, our model is in the allowed region (see Figure 2 in Caleb et al. 2019), with no need for FRB sources to repeat over their lifetime;

• "Periodicity": All FRBs, if viewed at the right angle, will appear periodic in time, with the period (see Eq. (35)) set by the roughly constant spatial separation between chunks (Eq. (7)). This "periodicity" may be washed out by a variation in chunk parameters (e.g. mass and Lorentz factor) and/or a variation in the ambient density n^ns_amb.;

• FRBs from galactic/halo-QNe: These FRBs could be associated with field galaxies as well as galaxy clusters. While in galaxy clusters they would be induced by QNe from NSs with a low kick velocity, in field galaxies with extended haloes isolated old NSs would likely experience the QN event before reaching the IGM (see §6.2). A possible differentiator between FRBs from ICM-QNe and those from galactic/halo-QNe may be the high RM in the latter (Eq. (41));

• The solid angle effect: CHIME (sub-GHz), which is more sensitive to higher-angle chunks, should detect more FRBs per QN than, for example, the ASKAP and Parkes (GHz) detectors (see Eq. (39)). It also implies that CHIME FRBs should be dimmer and on average longer in duration (burst width), with variations, than ASKAP and Parkes FRBs;

• Super FRBs from halo- and ICM-QNe: FRBs from the primary chunk would be extremely bright, with a fluence in the tens of thousands of Jy ms for CHIME's band and hundreds of Jy ms for LOFAR's high-band antenna (see examples in boxes "D" and "E" in Table 5).
However, these events may be rare if a typical ICM-QN yields N_c < 10^5.5, based on our model's fits to FRB data;

• Monster FRBs from IGM-QNe: FRBs from chunks seen very close to the line-of-sight (i.e. f(θ_c) ∼ 1) could reach a fluence in the millions of Jy ms (see Table 16). Several effects conspire to make FRBs from IGM-QNe much brighter than those from galactic- and ICM-QNe. The low IGM density means the chunks must travel large distances, thus reaching larger radii and becoming colder (i.e. associated with lower β_cc values) when they become collisionless (see Table 16). There is also the band effect, with the lower-frequency bands contributing higher values of G(θ_c, δ_F, 0) (see Table 4) to the total fluence, F(θ_c, δ_F, 0) = F(θ_c, 0) × G(θ_c, δ_F, 0). However, FRBs from IGM-QNe, if they occur, would be rare events, and even so their frequencies may fall outside LOFAR's band (i.e. ν^obs._CSE,max. < 30 MHz); see discussion in §6.2;

• The pre-CSE phases: There are two other plausible emission mechanisms prior to the CSE phase: (i) Thermal Bremsstrahlung (TB) emission from the chunks before they enter the collisionless phase (see Appendix A.1). The corresponding spectrum is flat and has a maximum frequency ν^obs._TB = D(Γ_c, θ_c) T_c,ic/(1 + z), with T_ic ≈ 13.6 eV the chunk's temperature when it becomes ionized by hadronic collisions with the ambient medium. This puts ν^obs._TB in the keV range. The corresponding maximum X-ray luminosity is given by Eq. (A17). The TB phase would persist for Δt^obs._TB ∼ t^obs._cc, which is of the order of days (see Eq. (A13)). (ii) Incoherent synchrotron emission (ISE) in the very early stages of the filament merging phase, preceding the CSE phase. The corresponding ISE frequency in the observer's frame, D(Γ_c, θ_c)ν_ISE/(1 + z), would be in the GHz range. The maximum luminosity (which assumes contributions from all of the chunk's electrons) is L_ISE,max. = (m_c/m_H) × P_e, with the ISE power per electron P_e = 1.6 × 10^−15 γ_CSE² B_p−WI,s² (e.g. Lang 1999), which is much dimmer than the subsequent CSE phase. The ISE phase is short-lived (<< t_m−WI) compared to the CSE phase and may be hard to detect;

• QN compact remnant in X-rays: The QS is born with a surface magnetic field of the order of ∼ 10^14 G, owing to strong fields generated during the hadronic-to-quark-matter phase transition (Iwazaki 2005; Dvornikov 2016a,b). Despite such a high magnetic field, QSs according to the QN model do not pulse in radio since they are born as aligned rotators (Ouyed et al. 2006). Instead, during the quark star spin-down, vortices (and the magnetic field they confine) are expelled (e.g. Niebergal et al. 2010b). The subsequent magnetic field reconnection leads to the production of X-rays at a rate of L_X ∼ 2 × 10^34 erg s^−1 × η_X,−1 Ṗ_−11², where η_X is an efficiency parameter related to the rate of conversion of magnetic energy to radiation and Ṗ the period derivative (see §5 in Ouyed et al. 2007);

• FRBs and UHECRs: The Weibel shock (which ends the BI-WI process) may be conducive to Fermi acceleration (Fermi 1949). The particles in the ambient medium and/or in the chunk can be boosted by ∼ 2Γ_c² (e.g. Gallant & Achterberg 1999), reaching energies of the order of E_UHECR, where A is the atomic weight of the accelerated particles (i.e. the chemical imprint of both the ambient medium and of the chunk material). A distribution in Γ_c (with 10^1.5 < Γ_c < 10^3.5, as suggested by our fits to FRB data) would allow a range in UHECR energies of 2 × 10^13 eV < E_UHECR/A < 2 × 10^17 eV.
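Since the intervening expression for E_UHECR did not survive extraction, the sketch below only encodes the quoted 2Γ_c² scaling, with the normalization anchored (by us) to the quoted lower endpoint:

```python
def e_uhecr_per_nucleon(gamma_c):
    """E_UHECR/A from the ~2*Gamma_c^2 Fermi boost, normalized so that
    Gamma_c = 10**1.5 reproduces the quoted 2e13 eV endpoint."""
    return 2e13 * (gamma_c / 10**1.5) ** 2   # eV per nucleon

print(e_uhecr_per_nucleon(10**1.5), e_uhecr_per_nucleon(10**3.5))
# -> 2e13 eV and 2e17 eV, reproducing the quoted range
```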
• Polarization: The ordered component of the WI-amplified magnetic field (see Eq. (9)) may induce polarization at some level. At the beginning of filament merging, the many independent (i.e. non-communicating) bunches should yield a relatively less polarized CSE despite the high B_p−WI,s. CSE may show more polarization towards the end of filament merging, when emission from the reduced number of (and thus larger-size) bunches is expected to be more synchronized. Alternatively, if one bunch triggers another, they may emit in the same polarization. This will be explored elsewhere;

• FRB 121102 high RM: FRB 121102's high rotation measure of RM ∼ 10^5 rad m^−2 (Michilli et al. 2018) sets it apart from other FRBs. The RM induced by the chunk on the CSE is given by Eq. (41), which shows that in our model high RM values can be obtained for FRBs from galactic-QNe with a high ambient medium density, n^ns_amb. > 10^−3 cm^−3. However, in the high ambient medium density case, and for fiducial parameter values, our simulations yield repeating FRBs lasting at most only a few years (Tables 14 and 15). A parameter survey is needed, which may yield longer timescales. It may also be the case that the high RM associated with FRB 121102 is due to plasma within the associated galaxy. This issue will be investigated elsewhere;

• FRB 121102 persistent radio source: FRB 121102 has also been associated with a persistent radio source with luminosity L ∼ 10^39 erg s^−1 (Tendulkar et al. 2017; Bassa et al. 2017; Chatterjee et al. 2017; Marcote et al. 2017), hinting at a pulsar. This would seem to support our suggestion that this FRB may be from a galactic-QN in a star-forming dwarf galaxy (see §5.2). In this case, we would argue that the radio source (perhaps a pulsar) is independent of the FRB proper;

• The minimum CSE frequency: It is set by the chunk's plasma frequency, ν^obs._CSE,min.(θ_c) = ν^obs._p,e(θ_c), in our model (see Eq. (30)) and is below the minimum frequency of most FRB detectors (see Table 3). A parameter survey will be performed in the future to determine which parameters can yield scenarios with ν^obs._CSE,min.(θ_c) > ν^det._min.. There is the possibility that the CSE may be suppressed before the CSE frequency drops below the plasma frequency, e.g. if Weibel filaments do not grow beyond a size of ∼ c/ν_p,e during the merging process;

• Chunk's composition: The extremely neutron-rich, relativistically expanding QN ejecta is converted to unstable r-process material in a fraction of a second following the explosion (Jaikumar et al. 2007; Kostka et al. 2014; for details, see Appendix B.2 in Ouyed et al. 2019). Here, we assumed that the chunk is dissociated into its hadronic constituents, yielding the background (e−, p+) plasma. A future avenue would consist of taking into account the ionic composition of the chunk.

FRBs as probes of collisionless plasma instabilities

FRBs can become a laboratory for studying collisionless plasma instabilities if indeed, as suggested by our model, the Buneman and the thermal Weibel instabilities are at the heart of this phenomenon. FRBs from QNe may provide some guidance to models and PIC simulations of inter-penetrating plasma instabilities. In particular:

• Buneman saturation: Our fits to FRB data suggest a BI saturation parameter ζ_BI ∼ 10^−1, which translates to about 10% of the beam electron kinetic energy (in the chunk's frame) being converted to heating the chunk's electrons. These numbers are comparable to those derived from PIC simulations (e.g. Dieckmann et al. 2012; Moreno et al.
2018);

• Filament merging: FRBs in our model can shed light on the filament merging process. For example, our simulations of FRB data suggest δ_F ≥ 1.0 and γ_CSE ≥ 10, in line with recent PIC simulations (e.g. Takamoto et al. 2019), and may further be used to inform future models and PIC simulations of the filament merging process;

• The Weibel shock: Association of FRBs with UHECRs, as suggested above, would confirm that the Weibel shock took place. Comparing the energy in UHECRs to the kinetic energy of a typical QN ejecta, ∼ 10^51-10^52 erg, could in principle provide an estimate of the efficiency of particle acceleration in Weibel shocks;

• Micro-bunching instability: Perturbations to the bunch density can be amplified by the interaction with the CSE proper, which may result in a "sawtooth" instability (Heifets & Stupakov 2002; Venturini & Warnock 2002). One possible manifestation of the instability is spikiness in FRB lightcurves, which, if confirmed by observations, would support our model and would offer a unique insight into the micro-bunching mechanism in inter-penetrating plasmas.

FRBs as probes of the QCD phase diagram

Of relevance to Quantum-Chromo-Dynamics (QCD) and its phase diagram, in particular to the still poorly known phases of quark matter (e.g. Rajagopal 1999 and references therein), we note:

• Quark nucleation timescales: Our model's fits to FRB data hint at a quark nucleation timescale of ∼ 10^8 years. This may constrain models of nucleation in dense matter and in neutron stars (e.g. Bombaci et al. 2004; Harko et al. 2004) and may be used to constrain the quark deconfinement density;

• Quark nucleation in cold and hot NSs: The energy release during the conversion of a NS to a QS is of the order of ∼ 3.8 × 10^53 erg × (M_NS/2M_⊙) × ΔE_con.,−4 for a 2M_⊙ NS and a conversion energy release, ΔE_con., of about 100 MeV (∼ 10^−4 erg) per neutron converted (e.g. Weber 2005). Our model for FRBs (involving slowly rotating, old and cold NSs) and for GRBs (involving rapidly rotating, young and hot NSs; see Ouyed et al. 2019) suggests two nucleation regimes. The hot NS case (with trapped neutrinos) releases an important fraction (up to ∼ 30%) of the conversion energy as kinetic energy of the QN ejecta (on average E_QN ∼ 5×10^52 erg), while for the cold NS case (with free-streaming neutrinos) a substantial fraction of the conversion energy is lost to neutrinos before the QN event; the kinetic energy of the QN ejecta in this case is about a percent of the conversion energy, with E_QN ∼ 5 × 10^51 erg;

• Color super-conductivity: A future detection of the radio-quiet ICM-QN compact remnant via its X-ray emission (see §6.3) would mean that the QS is likely born in a superconducting state (i.e. the Color-Flavor-Locked phase; Alford et al. 1999).

Implications to astrophysics

Implications of the QN to astrophysics have been reviewed in Ouyed et al. (2018a,b). If the model is a correct representation of FRBs, then it would particularly strengthen the idea that:

• Quark stars exist in nature and form mainly from old NSs exploding as QNe, at a rate of about 10% of the core-collapse SN rate;

• Missing pulsars: The formed quark star is radio-quiet owing to the quark-matter Meissner effect, which forces the magnetic dipole field to be aligned with the spin axis (Ouyed et al. 2006; Niebergal et al. 2010b). Because an important fraction of these old NSs are potential galactic/halo-QN and ICM-QN candidates (i.e.
becoming radio-quiet after the FRB phase), it would thus appear as if these went missing from the outskirts of galaxies;

• QNe in the wake of the core-collapse SN of massive stars may be at the origin of LGRBs, as demonstrated in Ouyed et al. (2019); see §7.4 in that paper for short-duration GRBs;

• QNe in binaries may be of relevance to cosmology. When the companion of the exploding NS is a CO white dwarf, a Type-Ia QN results. A QN-Ia is effectively a Type-Ia SN triggered by the QN ejecta impacting the WD. The QN is triggered by accretion onto the NS from the companion, which drives the NS core density above the deconfinement value. The properties of Type-Ia QNe, and the lightcurve, are redshift-dependent (see Figure 3 in Ouyed et al. 2014)^5. If Type-Ia QNe contaminate Type-Ia SNe samples, the latter may not be standardizable. Kang et al. (2020) provide a recent analysis of the impact of the luminosity evolution on the light-curve fitters used by the SNe Ia community.

CONCLUSION

We presented a novel model for FRBs involving old, slowly rotating and isolated NSs converting explosively to QSs (i.e. experiencing a QN event) in the ICM of galaxy groups and clusters. For quark nucleation timescales of ∼ 10^8 years, the NSs find themselves embedded in the ICM when the QN occurs. The millions of QN chunks (the fragmented, relativistically ejected outermost layers of the exploding NS) expand, due to heating induced by hadronic collisions with ambient protons, and become collisionless as they propagate in the ICM away from the QN site. The interaction of the collisionless chunks (acting as the background plasma) with the ambient medium (acting as the plasma beam) successively triggers the Buneman and the thermal Weibel instabilities, yielding electron bunching and coherent synchrotron emission with properties reminiscent of repeating and non-repeating FRBs, such as the GHz frequency, the millisecond duration and a fluence in the Jy ms range.

There are three classes of FRBs in our model: those from ICM-QNe (i.e. galaxy group and cluster FRBs; §4), those from galactic/halo-QNe (§5.2), and a third class, the least likely one to occur, corresponding to FRBs from IGM-QNe (§6.2), with frequencies at the lower limit of LOFAR's low-band antenna. Ultimately, the distribution of NS natal kick velocities would control the ratio of galactic versus extra-galactic QNe (and their corresponding FRBs) in our model. We estimate an FRB rate in our model of about 10% of that of core-collapse SNe. Because of the low DM of the ambient medium where they occur, their volumetric rate can be explained without the need for the FRB sources to repeat over their lifetimes (§6.3).

Our model is successful at reproducing general properties of non-repeating and repeating FRBs, including the years-long activity of FRB 121102 and the 16-day cycle of FRB 180916.J0158+65. Among the key predictions of our model: because of the viewing angle (i.e. Doppler) effect, sub-GHz detectors (e.g. CHIME) will be associated with dimmer and longer-duration FRBs than GHz detectors (e.g. Parkes and ASKAP). We expect the future detection of super FRBs (with fluence in the thousands to tens of thousands of Jy ms) from ICM-QNe due to chunks close to the observer's line-of-sight. Monster FRBs from IGM-QNe, with a fluence in the millions of Jy ms, may be detected by LOFAR's low-band antenna. These however are extremely rare and may not occur in nature. Here, we demonstrated that FRBs can be caused by a cataclysmic event, namely the QN.
Our model relies on the feasibility of our working hypothesis, namely an explosive transition of a NS to a QS following quark deconfinement in the NS core. While such a transition is already hinted at by analytical studies (e.g. Keränen et al. 2005), ...

^5 The Phillips relationship (Phillips 1993) is a natural outcome of Type-Ia QNe: in addition to the energy from the 56Ni decay powering the QN-exploded CO white dwarf, a QN-Ia is powered by spin-down from the quark star (the QN compact remnant, which ends up buried within the expanding CO ejecta). This results in QN-Ia obeying a Phillips-like relation where the variation in luminosity is due to the QS spin-down power; see in particular §4.1 and Figure 1 in that paper, where it is shown that the correlation between peak absolute magnitude and light curve shape is redshift-dependent.

APPENDIX A

The chunk becomes optically thin when its radius exceeds R_c,opt. ∝ m_c,22.3^1/2 κ_c,−1^1/2, with m_c,22.3 its mass (in units of 10^22.3 g) and κ_c,−1 its opacity (in units of 0.1 cm² g^−1). For R_c > R_c,opt., the thermal and dynamical evolution of the chunk is governed by heating (Q_HH) from hadronic collisions with the ambient medium and thermalization due to electron Coulomb collisions, followed by adiabatic cooling (PdV expansion). The heat transfer equations describe the time evolution of the chunk's radius R_c and its temperature T_c, where c_c,s = (γ_ad. k_B T_c/μ_e m_H)^1/2 is the chunk's sound speed, p_c = n_c k_B T_c its pressure, V_c = (4π/3)R_c³ its volume and C_V = (m_c/m_H) × (3k_B/2) its heat capacity. The adiabatic index we take to be γ_ad. = 5/3, with a mass per electron μ_e = 2; k_B is the Boltzmann constant. The equations above can be combined into a single evolution equation (Eq. (A2)).

The optical depth to hadronic collisions is τ_HH = n_c σ_HH R_c = 3m_c σ_HH/(4 m_H A_c) << 1, where A_c = πR_c² is the chunk's area and n_c = 3m_c/(4πR_c³ m_H) its baryon number density. The heating rate due to hadronic collisions, Q_HH, involves σ_HH,−27, the proton hadronic collision cross-section in units of milli-barns (e.g. Letaw et al. 1983; Tanabashi et al. 2018). Eq. (A2) then involves q_HH = 2γ_ad. Q_HH/(3μ_e m_c) ≈ 1.5 × 10^6 × σ_HH,−27 Γ_c,2.5² n^ns_amb.,−3, the specific heating term due to hadronic collisions. The solution of the system above is R_c(t) = R_c,0 (t/t_0)^3/2 and c_c,s(t) = c_c,s,0 (t/t_0)^1/2, with c_c,s,0 = 3R_c,0/2t_0 and t_0 = (27R_c,0²/2q_HH)^1/3; we set R_c,0 = R_c,opt..

The chunk becomes collisionless when the electron Coulomb collision length inside the chunk, λ_Coul.,e ≈ 1.1×10^4 cm × T_c,e²/n_c,e (Richardson 2019, with a Coulomb parameter ln Λ = 20; e.g. Lang 1999), is of the order of the chunk's radius R_c. Setting λ_Coul.,e(t_cc) = R_c(t_cc), with n_c(t) = 3m_c/(4πR_c(t)³ m_H), R_c(t) = R_c,0 (t/t_0)^3/2 and T(t) = T_0 (t/t_0), yields t_cc, where the chunk's initial temperature (when it becomes optically thin), T_0,3, is in units of 10^3 K. The subscript "cc" stands for collisionless chunk. The chunk's temperature when it enters the collisionless regime is T_cc ≈ 8.8 × 10^5 K (in terms of T_0,3); the corresponding radius and number density follow from the scalings above.

The chunk becomes ionized at time t = t_ic, when hadronic collisions heat it up to T_ic = 13.6 eV, prior to becoming collisionless; "ic" stands for ionized chunk. For t_ic ≤ t < t_cc, we can associate a thermal Bremsstrahlung (TB) luminosity to the chunk, L_TB(t) = 1.43 × 10^−27 n_c,e(t) n_c,i(t) T_c(t)^1/2 V_c(t) Z² ḡ (e.g. Lang 1999). In our case we have Z = 1, n_c,e = n_c,i = n_c,0 (t/t_0)^−9/2 (with n_c,0 = 3m_c/(4πR_c,opt.³
m_H)), T_c(t) = T_0 (t/t_0), and V_c = (4π/3)R_c(t)³ the chunk's volume; ḡ ≈ 1.2 is the frequency-averaged Gaunt factor. We get L_TB(t) ≈ 1.7 × 10^35 erg s^−1 for fiducial chunk parameters. Setting T_ic = T_0 (t_ic/t_0) gives us t_ic and a maximum (i.e. initial) thermal Bremsstrahlung luminosity at t = t_ic. The above is negligible compared to heating from hadronic collisions (Q_HH; see Eq. (A3)). When the chunk enters the collisionless phase at t_cc, with t_cc/t_ic ≈ (878.8/157.8) × T_0,3^3/5, the thermal Bremsstrahlung is even smaller, with L_TB(t_cc) ≈ 10^−3 × L_TB(t_ic). Although negligible compared to hadronic heating, thermal Bremsstrahlung (when t_ic ≤ t ≤ t_cc) is boosted to a maximum observed luminosity given by Eq. (A17). The TB phase lasts for t^obs._cc, which is of the order of days for fiducial parameter values.

C. COHERENT SYNCHROTRON EMISSION (CSE)

A relativistic electron beam moving in a circular orbit can radiate coherently if the characteristic wavelength of the incoherent synchrotron emission (ISE), λ_ISE, exceeds the length of the electron bunch, λ_b. The near field of the radiation from each electron overlaps the entire bunch structure, resulting in a coherent interaction yielding a CSE frequency ν_CSE = c/λ_b. With N_e,b the number of electrons in a bunch, the intensity of CSE scales as N_e,b² instead of N_e,b as in the incoherent case (Schiff 1946; Schwinger 1949; Motz 1951; Nodvick & Saxon 1954; Ginzburg & Syrovatskii 1965). The total power per bunch is estimated as Eq. (C1), where F(ν/ν_ISE) is the incoherent synchrotron frequency distribution (in erg s^−1 Hz^−1) at the characteristic frequency ν_ISE = (3/2)γ_e² ν_B, with ν_B = eB/m_e c the cyclotron frequency and γ_e the electrons' Lorentz factor. At ν_CSE ∼ c/λ_b << ν_ISE, we have F(ν/ν_ISE) ∼ 2.15 (ν/ν_ISE)^1/3, which gives a total power per bunch that agrees within a factor of a few with expressions given in the literature (e.g. Murphy et al. 1997 and references therein). The spectrum of CSE is the same as the incoherent one except for the N_e,b boosting and a decrease in the maximum (peak) frequency.

C.1. Bunch geometry and CSE luminosity

As illustrated in Figure 2, the Weibel filaments extend across the collisionless chunk, with length 2R_cc. The initial filament diameter is λ_F(0) = λ_e−WI, as expressed in Eq. (13). Bunching would manifest itself in a narrow region around the Weibel filaments, where the magnetic field amplification is expected to occur, and not inside the filaments, where the currents reside and the magnetic field is weaker. In other words, a typical bunch, where CSE occurs, would resemble a cylindrical shell around the Weibel filament with initial thickness λ_b(0), initial area A_b(0) = 2π λ_e−WI λ_b(0), and extending across the chunk (see Figure 2). Because the maximum CSE frequency is expressed as ν_CSE(0) = c/λ_e−WI = δ_CSE ν_ISE (see Eq. (22)), this sets the bunch's scaling parameter δ_b = λ_b/λ_F (Eq. (C2)). During filament merging, the filament's diameter (and thus the associated bunch thickness λ_b(t) = δ_b λ_F(t)) increases in time as λ_F(t) = λ_e−WI × (1 + t/t_m−WI)^δF (see Eq. (18)), with t_m−WI, given by Eq. (17), the characteristic filament merging timescale. There is one bunch per filament, which implies that the total number of bunches per chunk is N_b,T = πR_cc²/(πλ_F(t)²), decreasing in time as N_b,T(t) ≈ 9 × 10^18 × R_cc,15² n_cc,1 × (1 + t/t_m−WI)^−2δF. The corresponding number of electrons per bunch is given by Eq. (C5). The luminosity per bunch, L_b(t), is obtained by inserting Eq. (C5) into Eq. (C1), with ν_B ≈ (m_p/m_e)^1/2 ν_p,e at proton-WI (p-WI) saturation.
We get L_b(t) ≈ 1.6 × 10^36 erg s^−1 × R_cc,15² n_cc,1 × γ_CSE,1² δ_b. The corresponding cooling timescale of a bunch, t_b = N_e,b γ_CSE m_e c²/L_b(t), can be shown to be extremely fast compared to the duration of CSE, Δt_CSE (see Eq. (24)). With t_b << Δt_CSE, a given bunch has a very low duty cycle and emits only once (i.e. a single pulse) during the duration of the CSE, Δt_CSE. It also has the consequence that the fraction of bunches emitting at any given time during the CSE phase is t_b(t)/Δt_CSE. The total CSE luminosity is thus the luminosity per bunch times the number of bunches emitting at any given time. The CSE duration in the chunk frame is of the order of 10³ s for fiducial parameter values (Eq. (24)). Comparing the resulting expression to Eq. (25), which gives L_CSE ≈ 10^33-10^34 erg s^−1, suggests that the length of a bunch does not extend across the entire chunk and may instead be a small fraction of the chunk's radius, i.e. ∼ (10^−3-10^−2) R_cc. However, this has no consequence for our findings here, since the bunches are very effective at releasing the heat harnessed during the BI phase regardless of their shape and size (see §3.3).

D.2. FRB duration

We define ν^det._max. and ν^det._min. as the maximum and minimum frequencies of the detector's band, with t^det._start and t^det._end the times corresponding to the start (at ν^det._max.) and end (at ν^det._min.) of detection. When the chunk's plasma frequency, ν_p,e ≈ 9 kHz × n_cc^1/2, is such that ν^obs._p,e(θ_c) < ν^det._min., the CSE frequency will drift through the entire detector's band (this is illustrated in Figures 4 and 5), with ν^obs._CSE,max.(θ_c) given by Eq. (27) and t^obs._m−WI given by Eq. (29). There are three other possible scenarios, depicted in Figures 4 and 5, which could make the duration shorter than the one given in Eq. (D10).

D.3. Band-integrated flux density and corresponding fluence

With regard to the spectrum, each bunch emits at all frequencies within 0 ≤ ν ≤ ν_CSE, even though radiation below the plasma frequency is re-absorbed by the chunk material. Because I^obs._ν(t)/ν^obs.³ = I_ν(t)/ν³ is an invariant, the flux density f_ν^obs.(θ_c, t) follows (e.g. Ryden 2016; Eq. (D12)), with L_ν/A_cc the spectral luminosity per unit chunk area, A_cc the chunk's area (which is also invariant), z the redshift and d_L the luminosity distance. In the emitter's frame (i.e. the QN chunk), we assume a power-law spectrum with positive index α_CSE, normalized by the spectral luminosity at the maximum frequency ν_CSE(t). The above means that once ν^obs._CSE(θ_c, t) drops below the detector's maximum frequency ν^det._max., the band-averaged flux density starts to drop with time until the CSE frequency exits the detector's band at ν^det._min. or until the plasma frequency is reached; this is illustrated in Figures 4 and 5. Our calculation of G(θ_c, δ_F, 0) is detector-dependent via x_end and x_start (see Eq. (D21)) and varies from a value of a few for the ASKAP, Parkes and Arecibo detectors to a few hundred for CHIME and even higher for the LOFAR detectors (see Table 4).

Table 1. Parameters related to the BI and WI instabilities and the fiducial values adopted in this work: Γ_c is the Lorentz factor of the QN ejecta (the chunk's Lorentz factor); the ejecta's kinetic energy, Γ_c × (N_c m_c) c² erg, is a few percent of the NS-to-QS conversion energy (see §2). κ_c is the chunk's opacity. n^ns_amb. is the baryon number density of the ambient medium (representative of the ICM) in the NS frame. σ_HH is the hadronic collision cross-section. ζ_BI is the percentage of the beam's electron energy (in the chunk's frame) converted to heating the chunk electrons by the BI.
β_WI = β_⊥/β_∥ is the ratio of transverse to longitudinal thermal speed of the chunk electrons at the onset of the WI (Eq. (13)). ζ_m−WI sets the filament merging characteristic timescale (Eq. (17)). δ_F controls the filament merging rate (Eq. (18)). δ_CSE sets the CSE frequency (Eq. (22)), which also sets the bunch's scaling parameter δ_b (Eq. (C2)). γ_CSE is the electron's Lorentz factor at CSE trigger during filament merging (Eq. (22)). α_CSE is the positive power-law spectral index (α_CSE = 0.0 corresponds to a flat spectrum).

Table 2. FRBs from ICM-QNe: key equations describing the properties (baryon number density, radius and sound speed) of the collisionless QN chunks in the ICM and the resulting CSE features (frequency, duration and fluence). Also shown are the time since the QN, t^obs._cc, and the time separation between emitting chunks, Δt^obs._repeat (see §2.1). The fiducial parameter values are given in Table 1.

Table footnote: "N/A" (not applicable) means the maximum CSE frequency, ν^obs._CSE,max.(θ_c), is below the detector's minimum frequency ν^det._min..

Table 5. FRBs from ICM-QNe: FRB properties (frequency, duration and fluence; see Table 2) for the detectors listed in Table 3. The redshift is z = 0.2, which corresponds to a luminosity distance of d_L ≈ 1 Gpc. The time delay between repeats is Δt^obs._repeat. The fluences per detector are given, with the shaded cells showing the fluence values within the detector's sensitivity (listed in Table 3). [Table 5 layout: Boxes A and D (N_c = 10^5 and 10^6, Γ_c = 10^3) and Boxes C and F (N_c = 10^5 and 10^6, Γ_c = 10^2), each listing f(θ_c) and the FRB properties for 1 primary, 6 secondary and 12 tertiary chunks. Other parameters are kept to their fiducial values in Table 1.]

Footnote: In all tables, only detectors with fluence above the sensitivity threshold (see Table 3) are shown. Here, for example, the fluences associated with Arecibo, Parkes and ASKAP were 0.07 Jy ms, 0.09 Jy ms and 0.09 Jy ms, respectively, all below the threshold.

Figure caption (chunk distribution): The open circles represent randomly spaced chunks (only illustrated for "ring" 1 and "ring" 2), which are offset at small random angles and directions from the uniformly spaced case. In this case the arrival times of the different chunks in a given ring (e.g. "ring" 2) are different, again depending on the l.o.s. angle of each chunk.

Figure 3 (top panel): During the linear WI phases (see §2.3), the BI heating of chunk electrons (i.e. the increase in β_∥) is converted by the WI into magnetic field amplification, into magnetic turbulence and into currents. In this regime, β_∥ increases from β_∥ ∼ β_cc (where v_cc is the electron thermal speed when the chunk becomes collisionless; see Eq. (A11)) to β_∥ ∼ 1. During filament merging, magnetic turbulence and current dissipation accelerate electrons to relativistic speeds, γ_CSE >> 1, shutting off the BI. The BI requires the drift velocity (here the light speed c) between the beam protons and the chunk's electrons to exceed the thermal speed of the chunk's electrons (see §2.3). The decrease in γ_CSE is due to Coherent Synchrotron Emission (CSE) cooling. Lower panel: a schematic representation of the evolution of the different frequencies during the BI-WI process in our model. The electron plasma frequency (ν_p,e = (4πn_cc e²/m_e)^1/2, dot-dashed horizontal line) remains constant. The electron cyclotron frequency (ν_B = eB_c/m_e c, thick green line) saturates first during the e-WI phase when ν_B ∼ ν_p,e (i.e.
B_c = B_e−WI,s) and later at the end of the p-WI phase with ν_B ∼ (m_p/m_e)^1/2 ν_p,e (i.e. B_c = B_p−WI,s). CSE at frequency ν_CSE is triggered throughout the filament merging phase, when ν_CSE << γ_CSE² (m_p/m_e)^1/2 ν_p,e is satisfied (see §3). The CSE frequency ν_CSE decreases over time (the thick black line) due to the increase in bunch size during filament merging. CSE ceases when its frequency drops to the chunk's plasma frequency (ν_p,e). The end of filament merging occurs when the filaments grow to a size of the order of the beam protons' Larmor radius. The trapping of the protons is followed by the formation of the Weibel shock (not shown here), quickly decelerating the chunk and putting an end to the BI-WI process.

Figure 5. Same as in Figure 4 but for the general case of a power-law spectrum with positive index α_CSE. The difference is that for a steep spectrum only the emission near the peak frequency (the narrower vertical bands) is detected at a given time.

Figure caption fragment: (fits listed in) Table 6 and discussed in §4.3.

Figure 7 caption fragment (cf. Eq. (29)): Shown here is case "a" in the top panel of Figure 4, applied to CHIME's detector with ν^det._max. = 800 MHz and ν^det._min. = 400 MHz, and with ν^obs._CSE,max.(0) = 2ν^det._max.
Bayesian Age-Period-Cohort Modeling and Prediction – BAMP

The software package BAMP provides a method of analyzing incidence or mortality data on the Lexis diagram, using a Bayesian version of an age-period-cohort model. A hierarchical model is assumed, with a binomial model in the first stage. As smoothing priors for the age, period and cohort parameters, random walks of first and second order, with and without an additional unstructured component, are available. Unstructured heterogeneity can also be included in the model. In order to evaluate the model fit, posterior deviance, DIC and predictive deviances are computed. By projecting the random walk prior into the future, future death rates can be predicted.

Introduction

In epidemiology, incidence and mortality counts are usually stratified by incidence (or mortality) year and age groups. Figure 1 shows an example dataset for the 5-year periods 1965-69 to 1995-99, with data for the age groups 30-34 up to 70-74, depicted in a Lexis diagram (Lexis 1875). Often the covariates which cause these incidences cannot be observed directly. A commonly used approach in such a situation is the age-period-cohort (APC) model. Here the incidence rates are analyzed with regard to three different time scales: age (time between birth and death), period (time of death) and cohort (time of birth). These three time scales are surrogate measures which can give an indication of the underlying causes of a disease (Holford 1998): age is often the main factor in such an analysis, as it accounts for consistent extrinsic factors. The period effect accounts for all factors which affect every person at a given period in history, such as pollution or medical advances. The cohort effect accounts for events which affect generations, e.g., malnutrition of children during or after wars, or changing habits like the increasing number of young female smokers (Knorr-Held and Rainer 2001).

In this paper we present the software BAMP (Bayesian Age-Period-Cohort Modeling and Prediction), which allows one to analyze incidence or mortality count data with a Bayesian age-period-cohort model and includes several features:

• The input data do not have to be on the same time scale; for example, period can be in one-year intervals and age grouped in five-year intervals. However, time scales have to be equidistant,

• BAMP allows for prediction of the future number of cases,

• BAMP allows retrospective predictions for existing data for model checking purposes,

• BAMP can analyze different APC models, i.e., with and without additional global and local heterogeneity, with RW priors of first or second order, and AP and AC models (Clayton and Schifflers 1987).

There are some graphical routines available in order to

• plot estimated age, period and cohort effects (only for the RW1 model),

• compare observed with fitted and predicted rates,

• graphically assess the "significance" of the unstructured parameters. This helps to identify unstructured variation in the data which cannot be described by the age, period and cohort parameters alone.

Section 2 describes the theory of Bayesian age-period-cohort models and modifications of the basic model to add additional heterogeneity parameters. An important feature of Bayesian APC models is the possibility of projecting future incidence rates; this is also described in Section 2. Section 3 describes how BAMP is used and how different models can be compared.
Bayesian APC models

In the following let i = 1, . . . , I denote the index of the age group, j = 1, . . . , J the index of the period and k = 1, . . . , K the index of the birth cohort. The cohort index can explicitly be computed from the age group and period index of an incidence, see also Figure 1; if age and period are on the same time scale, the cohort index is

k = k(i, j) = (I − i) + j.   (1)

If age and period are measured on different scales, Equation 1 has to be changed accordingly (Holford 1983); as an example, if data are given per year, but the age groups cover five years, the cohort index is

k = k(i, j) = 5 · (I − i) + j.   (2)

Figure 1: Lexis diagram with an example data set. The Lexis diagram shows the number of incidence cases for several years (periods) in different age groups. In the Lexis diagram, incidences can be related to the relevant birth cohort. As Knorr-Held and Rainer (2001) point out, cohort groups are overlapping.

In classical APC literature (for an overview see Heuer 1994) the APC model is often regarded as a log-linear Poisson model. As an alternative, a binomial logit model can be formulated (both models are approximately identical): The counts of incidences y_ij in age group i in period j follow a binomial distribution with parameters p_ij and n_ij,

y_ij ~ Bin(n_ij, p_ij).   (3)

Here n_ij is the known population size of age group i at period j, and p_ij is the unknown incidence probability. The logit of the incidence probability is decomposed into an intercept µ, age effect θ_i, period effect φ_j and cohort effect ψ_k,

η_ij = logit(p_ij) = µ + θ_i + φ_j + ψ_k(i,j).   (4)

Non-identifiability

To make the intercept µ identifiable, the following restrictions have to be imposed:

Σ_i θ_i = 0,  Σ_j φ_j = 0,  Σ_k ψ_k = 0.   (5)

However, the APC parameters in this model are still not identifiable. It can be seen from (1) that for every transformation

θ_i → θ_i + c · i,  φ_j → φ_j − c · j,  ψ_k → ψ_k + c · k,  µ → µ − c · I   (6)

for all i, j and k with any c ∈ IR, the linear predictor η_ij is not changed (after re-centering each effect as in (5), the constants are absorbed into µ). Therefore the linear predictor can always be identified and interpreted, but age, period and cohort effects cannot. Figure 2 shows the problem of non-identifiability. The figure shows two possible parameter sets for the example dataset in Figure 1. Depending on the transformation, the period effect is decreasing and the cohort effect is increasing, or vice versa. However, the change in trend at cohort k = 8 remains unchanged. This is because the non-identifiability only affects linear trends; change points and other non-linear trends can be identified (up to the linear trend), like the non-linear effect of age in the example. Therefore the interpretation of the estimated APC effects has to be restricted to non-linear trends. BAMP provides second differences for this purpose.

Bayesian hierarchical model

BAMP uses a Bayesian hierarchical approach for the APC model. Following Berzuini and Clayton (1994), random walk (RW) priors of different orders are used for the APC parameters θ, φ and ψ. The RW 1 prior assumes a constant trend over the time scale, whereas the RW 2 prior assumes a linear time trend. Therefore the order of the random walk should be chosen depending on whether a constant or a linear time trend can be assumed. However, in the APC model the RW prior also acts as a stochastic restriction: with a RW 1, the first-order differences of an effect are stochastically restricted to zero; this makes the APC effects identifiable. With a RW 2, the second-order differences are stochastically restricted to zero; this restriction, however, is too weak to assure identifiability.
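Since the transformation in Equation 6 is easy to get wrong, a small numeric check is useful. The following R sketch (R is also the language of BAMP's accompanying plotting scripts; this code is purely illustrative and not part of BAMP) verifies that the transformation leaves the linear predictor η_ij unchanged:

## Numeric check (illustrative, not BAMP code): the transformation in
## Equation 6 leaves the linear predictor eta_ij unchanged.
I <- 9; J <- 7
K <- (I - 1) + J                      # number of cohorts
k <- function(i, j) (I - i) + j       # cohort index, Equation 1
mu <- -5
theta <- rnorm(I); phi <- rnorm(J); psi <- rnorm(K)
eta <- outer(1:I, 1:J, function(i, j) mu + theta[i] + phi[j] + psi[k(i, j)])

c0 <- 0.3                             # an arbitrary linear trend
theta2 <- theta + c0 * (1:I)
phi2   <- phi   - c0 * (1:J)
psi2   <- psi   + c0 * (1:K)
mu2    <- mu    - c0 * I
eta2 <- outer(1:I, 1:J, function(i, j) mu2 + theta2[i] + phi2[j] + psi2[k(i, j)])
max(abs(eta - eta2))                  # zero up to floating point error

Up to floating point error, the printed maximum absolute difference is zero, illustrating that only non-linear features of the effects are identifiable.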
In the following we describe the random walk priors for the period effect φ; the age and cohort effects θ and ψ can be treated analogously. For random walks of first order (RW 1) we assume

φ_j | φ_{j−1} ~ N(φ_{j−1}, κ^{−1}),  j = 2, . . . , J,   (7)

with κ a precision parameter; for random walks of second order (RW 2) we assume

φ_j | φ_{j−1}, φ_{j−2} ~ N(2φ_{j−1} − φ_{j−2}, κ^{−1}),  j = 3, . . . , J.   (8)

The joint density of φ = (φ_1, . . . , φ_J) can be written as

p(φ | κ) ∝ κ^{rg(R)/2} exp(−(κ/2) φ'Rφ),

with R the precision matrix of the RW 1 (Clayton 1996) or RW 2 (Berzuini, Clayton, and Bernardinelli 1993). The precision parameter κ is a smoothing parameter and will be estimated simultaneously in the model. The higher the precision, the smoother the estimated parameter vector. A Gamma distribution Ga(a, b) is used as prior for the precision parameters. Often a = 1 and b = 0.001 or a = b = 0.001 is chosen. When using a RW 2 prior it is often advisable to use a smaller value for b in order to get smooth estimates (e.g., b = 10^{−5}). For the intercept µ, BAMP uses a flat prior, p(µ) ∝ const. Identifiability of the intercept is guaranteed with the restrictions stated in (5).

In the Bayesian setting a random walk of first order imposes a stochastic constraint: the RW 1 model will prefer the one transformation in Equation 6 where the quadratic first differences are minimal, i.e., the APC effects are kept as constant as possible. Therefore the APC effects can be identified by the software with this model; nevertheless, the restrictions in interpretation of the resulting estimates still apply.

Projection

Whereas RW 1 implies a constant time trend, RW 2 implies a linear time trend. In both cases the priors can be used to gain projections for future time points. The projected effects can be computed by

φ_{J+1} ~ N(φ_J, κ^{−1}) for RW 1, or φ_{J+1} ~ N(2φ_J − φ_{J−1}, κ^{−1}) for RW 2,   (9)

and analogously for the cohort effect. The Bayesian APC model allows period and cohort effects to be extrapolated. The age effect can also be extrapolated; however, this is usually not necessary, as data cover all age groups of interest. In order to get the projection of a future rate p_{i,J+1}, the projected period and cohort effects from Equation 9 are applied to Equation 4:

logit(p_{i,J+1}) = µ + θ_i + φ_{J+1} + ψ_{k(i,J+1)}.

The projection of future cases can then be computed using the binomial distribution in Equation 3; however, the population size n_{i,J+1} must be known or projected by an adequate method to estimate the number of cases. Further future rates p_{i,J+2}, p_{i,J+3}, . . . can be derived analogously.

Inference

The full conditionals of the APC parameter vectors are non-standard distributions. Therefore BAMP uses a Metropolis-Hastings (MH) algorithm to sample from the posterior. Useful Gaussian proposals for the MH algorithm can be derived using a Taylor approximation of second degree (Gamerman 1997; Schmid 2004). BAMP uses an algorithm proposed by Rue (2001) for efficient sampling from multivariate normal distributions. The full conditional of the hyperparameter κ is a Gamma distribution with parameters a + rg(R)/2 and b + (1/2) φ'Rφ. Also, the full conditionals of τ and ν are Gamma distributions; therefore the precisions can be sampled using Gibbs steps.

Additional global heterogeneity

Knorr-Held and Rainer (2001) propose an additional parameter z_ij for the APC model. This parameter accounts for heterogeneity which cannot be explained by the APC effects, like overdispersion. The model has the following form:

η_ij = µ + θ_i + φ_j + ψ_k(i,j) + z_ij.

A Gaussian distribution is used as prior for the additional heterogeneity,

z_ij ~ N(0, δ^{−1}),

with a Ga(a_δ, b_δ) prior for the precision parameter δ. This model has an interesting technical advantage: using a reparametrization as proposed by Besag, Green, Higdon, and Mengersen (1995), the full conditionals of the APC effects are Gaussian distributions, such that Gibbs steps can be applied (Knorr-Held and Rainer 2001).
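The projection step can be illustrated with a short simulation. The following R sketch (illustrative only, not BAMP code) carries a RW 2 prior forward S periods with fixed inputs; in practice one such path would be generated for every posterior draw of (φ_{J−1}, φ_J, κ), so that the spread of the paths reflects both sampling and estimation uncertainty:

## Illustrative R sketch (not BAMP code): projecting a period effect
## S steps ahead under a RW 2 prior.
project_rw2 <- function(phi_Jm1, phi_J, kappa, S) {
  out <- numeric(S)
  prev2 <- phi_Jm1
  prev1 <- phi_J
  for (s in 1:S) {
    # RW 2, Equation 9: phi_t ~ N(2 * phi_{t-1} - phi_{t-2}, 1 / kappa)
    out[s] <- rnorm(1, 2 * prev1 - prev2, sqrt(1 / kappa))
    prev2 <- prev1
    prev1 <- out[s]
  }
  out
}
## Fixed inputs for illustration only:
paths <- replicate(1000, project_rw2(phi_Jm1 = 0.10, phi_J = 0.15, kappa = 50, S = 3))
apply(paths, 1, quantile, c(0.05, 0.5, 0.95))  # bands for phi_{J+1}, phi_{J+2}, phi_{J+3}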
Additional heterogeneity on a time scale

In some analyses using APC models, the estimated effects are too rough because of additional heterogeneity in the effects or outliers in the data. Therefore not only is the variance of these effects overestimated, but the credibility intervals of the projection will also be too broad. A solution for this can be to introduce additional parameters for heterogeneity on one or more time scales. Even in the model with the additional global heterogeneity parameter z_ij (Section 2.5), an additional heterogeneity parameter may be useful in order to keep the variance of z_ij small. An analogous modification of the prior can be formulated for cohort and age; however, in practice there is rarely a need for additional heterogeneity for age. Models with additional heterogeneity on more than one time scale are possible, but in these models the stochastic restriction which comes with the use of RW 1 priors no longer applies.

Data files

Two files containing data are necessary: the cases file contains the number of disease cases, the population file the number of persons under risk. Both files must be in the form of a matrix with T rows and J columns. If dataorder:1, the data files will be transposed (i.e., the files are matrices with J rows and T columns). To predict rates for existing data (predictions:2) for S periods, both files must have T + S rows (T + S columns for dataorder:1). For given population data (predictions:3), the population file must have T + S rows (T + S columns for dataorder:1). In all cases redundant rows will be ignored.

Priors for APC effects

For the age, period and cohort effects the following priors are available; set age block, period block and cohort block accordingly:

0: Discard effect from the model (for AP and AC models)

The configuration variables controlling the priors and output are listed below (description; default):

age block               prior of the age effect, see Priors for APC effects; default 0
age hyperpar. a/b       parameters a and b of the gamma prior of the precision of the age effect; ‡
age hyperpar.2 a/b      parameters a and b of the gamma prior of the precision of the unstructured age effect; ‡
period block            prior of the period effect, see Priors for APC effects; default 0
period hyperpar. a/b    parameters a and b of the gamma prior of the precision of the period effect; ‡
period hyperpar.2 a/b   parameters a and b of the gamma prior of the precision of the unstructured period effect; ‡
cohort block            prior of the cohort effect, see Priors for APC effects; default 0
cohort hyperpar. a/b    parameters a and b of the gamma prior of the precision of the cohort effect; ‡
cohort hyperpar.2 a/b   parameters a and b of the gamma prior of the precision of the unstructured cohort effect; ‡
z mode                  0: with, 1: without pixel-wise heterogeneity parameter; default 1
z hyperpar. a/b         parameters a and b of the gamma prior of the precision of the pixel-wise heterogeneity parameter; ‡
quantile 1 ... quantile 5   quantiles of the posterior for output (-1: none); default -1
period covariate        path and name of the covariate data file for period; ‡
period start            first time point to use from the covariate data set; ‡
cohort covariate        path and name of the covariate data file for cohort; ‡
cohort start            first time point to use from the covariate data set; ‡
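For orientation, the variables above can be collected in a configuration file whose key:value syntax follows the usages dataorder:1, predictions:2 and deviance:2 quoted elsewhere in the text. The fragment below is hypothetical: block codes other than 0, 8 and 9 are not documented in the surviving text, so the value 1 is a placeholder, and all numbers are illustrative rather than taken from the BAMP manual:

dataorder:0
predictions:1
number of predictions:2
deviance:2
age block:1
age hyperpar. a:1
age hyperpar. b:0.001
period block:1
period hyperpar. a:1
period hyperpar. b:0.001
cohort block:1
cohort hyperpar. a:1
cohort hyperpar. b:0.001
z mode:1
quantile 1:0.05
quantile 2:0.5
quantile 3:0.95
quantile 4:-1
quantile 5:-1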
Output files

The MCMC algorithm produces samples from the posterior distribution of all unknown parameters. These samples are written to the temp folder. After finishing the MCMC run, the software calculates quantiles of the samples from the posterior distribution. These quantiles will be written to the output folder. The variables quantile 1 to quantile 5 specify the quantiles which will be calculated; variables set to -1 will be omitted. The output consists of the following files:

• theta.txt, phi.txt, psi.txt: Quantiles of the age, period and cohort effects; each quantile is a row (these will only be produced for RW 1 priors)
• theta2.txt, phi2.txt, psi2.txt: Quantiles of the second differences of the age, period and cohort effects (these can always be identified and interpreted)
• hyper.txt: Quantiles of the precision parameters of the age, period and cohort effects, followed by the precision of the unstructured "random effects"; name of parameter and quantiles in each row

Model fit, predictions

In order to evaluate the goodness of the model fit, BAMP calculates the posterior deviance. The deviance is defined as

D = 2 Σ_{j,t} ( l(y_jt) − l(ŷ_jt) ),

with l(ŷ_jt) the individual log-likelihood of the model and l(y_jt) the maximum individual log-likelihood achievable. The smaller the deviance, the better the fit of the model. However, the model fit will typically be good for any model as long as the unstructured parameters are included. To compare the performance of different models, BAMP also provides the deviance information criterion DIC (Spiegelhalter et al. 2002) and the predictive deviance, see below.

To make predictions of future death rates, set predictions:1. Set number of predictions to the number of periods for which you want the projection. If you have population data for these periods, set predictions:3; the software will then provide predicted cases. In order to test the model, you can also make predictions for existing data. Then the input files must include these data; the matrices have to be of dimension J times (T + S) (see Section 3.1). Set deviance:2 and BAMP will calculate the predictive deviance for every projected period. The predictive deviance at time t is defined as

D_t = 2 Σ_j ( l(y_jt) − l(ŷ_jt) ),

with ŷ_jt the predicted number of cases in age group j at period t. The predictive deviance is a monotonic transformation of the logarithmic scoring rule as suggested initially by Good (1952), see also Gneiting and Raftery (2006), to quantify the quality of probabilistic forecasts.
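As a minimal sketch of how such a deviance can be computed under the binomial first stage (illustrative R code, not part of BAMP; ŷ_jt is represented by the fitted or predicted probability):

## Illustrative R sketch (not BAMP code): binomial deviance for one period t.
## y: observed cases, n: population sizes, p_hat: fitted or predicted
## probabilities, so that y_hat = n * p_hat.
binom_deviance <- function(y, n, p_hat) {
  ll <- function(p) dbinom(y, n, p, log = TRUE)
  # l(y): maximum achievable (saturated) log-likelihood, at p = y / n;
  # l(y_hat): log-likelihood of the model
  2 * sum(ll(y / n) - ll(p_hat))
}
binom_deviance(y = c(12, 30, 55), n = rep(10000, 3), p_hat = c(0.0011, 0.0029, 0.0060))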
Using covariates

By including a time-variable covariate effect, the Bayesian APC model might be improved, with a random walk prior for the regression parameter β = (β_1, . . . , β_J). The covariate x = (x_{1−L}, . . . , x_{J−L}) enters with a time lag L. The model is based on the idea that one of the main effects can be explained by a covariate measure. In an analysis of lung cancer mortality data in West Germany, Knorr-Held and Rainer (2001) present a model where the period effect for females is explained by smoking habits, with a lag of 20-25 years. In a way, the substitution of one main effect with a covariate can be seen as a different type of restriction, where the model is restricted so that one effect is linearly dependent on the covariate. However, it is not clear if and how the covariates have to be standardized. If the covariate is near zero, the credibility intervals get very broad, as Figure 3 shows for a lung disease dataset with the estimated amount of consumed tobacco as covariate; here the values are near zero in the first years. If the covariate is exactly zero at one time point, the adjoined parameter indeed cannot be estimated. BAMP allows covariates to be included in the age-period-cohort model.

Covariate effects can either replace period or cohort effects. Let us assume we have covariates x_t for periods t = 1, . . . , T. We can introduce this covariate by replacing the period effect φ_t with c_φ · x_t β^{(φ)}_t, with c_φ = T / Σ_t x_t. For β^{(φ)}, a random walk of first or second order can be chosen. In order to include covariates, period block has to be set to 8 (RW1) or 9 (RW2). The variable period covariate gives the file name of the covariate data. The data have to be separated by space, tabulator or newline. The variable period start specifies which line of the covariate file matches the first period of the cases/population data. Covariates for cohorts can be included equivalently; also, both period and cohort effects can be replaced by covariates.

S-PLUS and R code for graphical output

The graphical routines for S-PLUS (Insightful Corp. 2003) and R (R Development Core Team 2007)

• plot estimated age, period and cohort effects (for RW1 priors only!)
• compare observed and fitted rates
• plot projected rates
• assess the "significance" of the unstructured parameters

Download the code from the BAMP homepage and edit the first lines as appropriate (see the example sketch below):

• startingage: the first age of the first age group of the data
• startingperiod: the first period of the data
• yearsperperiod: how many years are one period
• inifile: name of the .ini file

To start, copy the code to the directory where the .ini file is located, change to this directory and type

Splus < bamp.splus   resp.   R --nosave < bamp.R

or start in S-PLUS or R via

source("path/bamp.S")   resp.   source("path/bamp.R")

The resulting graphic file is named bamp.ps in the output directory.
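As an example, the first lines of bamp.R might be edited as follows for the five-year example data of Figure 1 (hypothetical values; the exact variable syntax in the downloaded script may differ):

## Hypothetical first lines of bamp.R, edited for five-year data starting
## in 1965 with first age group 30-34 (values illustrative).
startingage    <- 30       # first age of the first age group
startingperiod <- 1965     # first period of the data
yearsperperiod <- 5        # how many years are one period
inifile        <- "b.ini"  # name of the .ini file (hypothetical name)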
Summary

Bayesian age-period-cohort models are a useful tool for analyzing mortality and incidence rates stratified by age and period. A RW 1 prior for all APC effects implies a stochastic restriction, solving the identifiability problem. However, with RW 2 priors the main effects cannot be identified. Whatever RW model is used, the linear trends cannot be interpreted without considering the stochastic restrictions implied by the model; however, break points and other non-linear changes can be interpreted. Therefore APC models can be a first step for further analysis of the causes of incidence.

Several recent papers exploit the use of Bayesian inference in APC modeling. Bayesian APC models with RW 2 priors are used in Bray, Brennan, and Boffetta (2000) and Baker and Bray (2005) for projections of cancer mortality; Knorr-Held and Rainer (2001) present a model with overdispersion in a lung cancer study and exploit the use of covariates in an APC model; and Hansell, Held, Best, Schmid, and Aylin (2003) and Lopez, Shibuya, Rao, Mathers, Hansell, Held, Schmid, and Buist (2006) analyze COPD mortality trends with Bayesian APC models. These papers show not only the methodology of Bayesian APC models, but also the interpretation of results and of different possible models.

Predictions in the Bayesian APC model can easily be made by carrying forward the autoregressive priors. The prior has to be chosen carefully: predictions with RW 1 models may be biased if the assumption of constant time trends does not apply; models with RW 2 priors tend to have broader credibility intervals, see also the results of Knorr-Held and Rainer (2001) and Schmid and Held (2004).

BAMP provides a variety of different APC models. Global heterogeneity can be included or excluded, additional heterogeneity on time scales is possible, and one or even two of the APC effects can be discarded from the model. The different models can be compared via deviance, DIC and predictive qualities. BAMP is available for free download at http://www.volkerschmid.de/bamp.html.
A comparison between the β-globin gene clusters of domestic sheep (Ovis aries) and Sardinian mouflon (Ovis gmelini musimon)

The organization of the β-globin gene cluster in Sardinian mouflons showing various haemoglobin phenotypes was analyzed by Southern blotting with probes specific for the εIV and βF genes. With three endonucleases (Bam HI, Eco RI and Hind III), mouflons characterized by the haemoglobin phenotypes B, BM and M show restriction patterns identical to those of sheep with the haemoglobin phenotypes A, AB and B, respectively. Consequently, in both species the β-globin gene cluster exhibits two haplotypes, characterized by the duplication or triplication of a set of four ancestral genes: ε-ε-ψβ-β.

INTRODUCTION

The domestic sheep (Ovis aries) β-globin gene cluster shows two common haplotypes: the A haplotype bearing the adult HBB^A allele and the B haplotype bearing the adult HBB^B allele (Garner and Lingrel, 1988, 1989). The A haplotype is similar to the goat (Capra hircus) β-globin gene cluster (Townes et al, 1984) and shows the triplication (5'-εI-εII-ψβI-βC-εIII-εIV-ψβII-βA-εV-εVI-ψβIII-βF-3') of an ancestral four-gene set (ε-ε-ψβ-β) characterized by two embryonic genes (ε), one pseudogene (ψβ), and one gene (β) whose expression varies as a function of ontogenic development and physiological conditions. In fact, the βC, βA, and βF genes are expressed during juvenile, adult, and fetal life, respectively (Huisman et al, 1969). The βC and βA switch is reversible under particular physiological or experimental conditions (anaemia, hypoxia or administration of erythropoietin) (Huisman et al, 1967; Boyer et al, 1968). The B haplotype, lacking the whole juvenile four-gene set, is duplicated (5'-εI-εII-ψβI-βB-εIII-εIV-ψβII-βF-3'). Therefore, sheep homozygous for the HBB^B allele do not exhibit the property of βB → βC switching, as they do not possess the βC gene. Since domestic sheep βA and βB allelic chains differ by at least seven scattered amino-acid residues and no intermediate haplotype has been found, it has been proposed that this polymorphism is the product of genetic isolation followed by admixture (Boyer et al, 1966). Manwell and Baker (1976) suggest that man has played a role in generating haemoglobin polymorphisms in domesticating sheep by hybridizing individuals that would otherwise be geographically isolated.

Southern blot analysis, using β-like globin genes as probes and several endonucleases, strongly supports the polyphyletic origin of the domestic sheep. In fact, by means of this technique, A and B haplotypes can be easily distinguished, since they differ in both number and length of restriction fragments and show no intermediates (Di Gregorio et al, 1987; Rando et al, 1989). Some authors (Bunch et al, 1976; Bunch and Nadler, 1980; Bunch and Nguyen, 1982; Ryder, 1984; Di Gregorio et al, 1987) consider the mouflon as one of the ancestors of the present-day domestic sheep, whereas others (Poplin, 1979; Vigne, 1983) claim that it originated by feralization of the first domesticated sheep in the Corsico-Sardinian islands (Neolithic). In wild mouflons (Ovis gmelini musimon) captured in Sardinia, two alleles at the adult β-globin locus have been observed: HBB^B and HBB^M, with frequencies of 0.94 and 0.06, respectively. Both adult β-globin variants in this species are electrophoretically different from those observed in sheep (Naitana et al, 1990).
No homozygous individuals for the HBB^M allele had been found (Naitana et al, 1990). In this paper, with the availability of a mouflon homozygous for the HBB^M allele, we compared the organization of the β-globin clusters of wild Sardinian mouflons and domestic sheep with different Hb phenotypes by means of Southern blot analysis.

MATERIALS AND METHODS

Haemoglobin phenotypes of sheep and wild Sardinian mouflons were determined by means of isoelectric focusing in the pH range 6.7-7.7 and by gel electrophoresis of dissociated globin chains (Naitana et al, 1990; Masala et al, 1991). Southern blot analysis was performed on DNA samples obtained from six mouflons selected according to Hb phenotype (3 Hb B, 2 Hb BM, and 1 Hb M) and, as a comparison, from six sheep (2 Hb A, 2 Hb AB, and 2 Hb B). DNA samples were digested with Hind III, Bam HI, and Eco RI and probed with plasmid pG16Ec3Bm2 (containing the 5' of the goat εIV-globin gene) and plasmid pGq5' (containing the 5' of the goat βF-globin gene). According to the hybridization conditions reported by Rando et al (1989), these plasmids (a kind gift from JB Lingrel) strongly cross-hybridize with the paralogous genes.

RESULTS AND DISCUSSION

Southern blot analysis of mouflon and domestic sheep genomic DNAs digested with Hind III, Bam HI, and Eco RI and hybridized with εIV and βF probes demonstrates that mouflons with Hb B, Hb BM, and Hb M show the same electrophoretic patterns as domestic sheep with Hb A, Hb AB, and Hb B, respectively. As an example, figure 1 shows digestion of mouflon and domestic sheep genomic DNA with Bam HI and hybridization with the εIV gene. It can be seen that the restriction patterns of mouflons with Hb B and domestic sheep with Hb A (homozygotes for the triplicated haplotype) are characterized by fragments of 9.0, 5.5, and 6.6 kb containing the ε gene pairs of the juvenile, adult, and fetal sets, respectively (Rando et al, 1989). On the other hand, the restriction patterns of mouflons with Hb M and domestic sheep with Hb B (homozygotes for the duplicated haplotype) are characterized by fragments of 5.7 and 6.6 kb that previous reports show contain the ε gene pairs of the adult and fetal sets, respectively (Garner and Lingrel, 1988; Rando et al, 1989). Figure 2 summarizes the results obtained with the three endonucleases and the two probes.

Thus the Sardinian mouflon shows two haplotypes at the β-globin gene cluster: a triplicated one bearing the adult HBB^B allele, corresponding to the sheep A haplotype, and a duplicated one bearing the adult HBB^M allele, corresponding to the sheep B haplotype. According to the results presented in this paper, the two haplotypes differ not only by the presence/absence of the juvenile set but also by many other mutations evidenced by different restriction endonucleases (see fig 2). The absence of intermediate haplotypes in both sheep (Di Gregorio et al, 1987; Rando et al, 1989) and mouflon confirms the hypothesis that the two haplotypes evolved separately (Boyer et al, 1966) and, at the same time, demonstrates that they were present in an ancestor common to both species. According to data presented by Naitana et al (1990), the frequency of the HBB^B allele (triplicated switching cluster) is much higher in Sardinian mouflons (0.94) than in domestic sheep and, in particular, than in the Sarda breed, which is almost monomorphic for the HBB^B allele (duplicated non-switching cluster) (Manca et al, 1993) and lives in the same environment.
Therefore, it remains to be established whether the marked differences in the frequencies of 'switching' and 'non-switching' chromosomes between Sardinian mouflon and sheep are the result of genetic drift, natural selection or domestication. Masala et al (1991) put forward the possibility of an advantage in synthesizing Hb C in a wild-type niche in the case of the mouflon. Evans and Turner (1965) demonstrated a certain reproductive advantage of the duplicated cluster in sheep. If we consider that the domestic sheep has been the object of selective pressure for milk, wool, and meat production, it could be that man, through domestication and artificial selection, is responsible for the high frequency of the duplicated cluster in domestic sheep.
The role of cellular and molecular neuroimmune crosstalk in gut immunity

The gastrointestinal tract is densely innervated by the peripheral nervous system and populated by the immune system. These two systems critically coordinate the sensations of and adaptations to dietary, microbial, and damaging stimuli from the external and internal microenvironment during tissue homeostasis and inflammation. The brain receives and integrates ascending sensory signals from the gut and transduces descending signals back to the gut via autonomic neurons. Neurons regulate intestinal immune responses through the action of local axon reflexes or through neuronal circuits via the gut-brain axis. This neuroimmune crosstalk is critical for gut homeostatic maintenance and disease resolution. In this review, we discuss the roles of distinct types of gut-innervating neurons in the modulation of intestinal mucosal immunity. We focus on the molecular mechanisms governing how different immune cells respond to neural signals in host defense and inflammation. We also discuss the therapeutic potential of strategies targeting neuroimmune crosstalk for intestinal diseases.

INTRODUCTION

While the immune and nervous systems have traditionally been studied separately, it is increasingly clear that these two complex systems are intricately linked at the functional level. Neuronal crosstalk with the immune system is an old concept. Two thousand years ago, the Roman physician Aulus Cornelius Celsus defined the four cardinal signs of inflammation as pain, redness, swelling, and heat; the first sign was driven by the sensory nervous system, and the latter three were linked to vascular and immune functions [1]. Neuronal regulation of host defenses against pathogens is also evolutionarily conserved, with empirical evidence from simple metazoans, including C. elegans, to vertebrates, including fish and mammals [2]. The past decades have witnessed major discoveries revealing the pleiotropic roles of neuroimmune crosstalk in physiology, host defense, repair, and pathology [3]. The focus of this review is to discuss advances in this field based on neuroimmune interactions in the gastrointestinal tract, an organ system where this crosstalk plays a prominent role in homeostasis and disease.

The primary function of the gastrointestinal tract is food processing and digestion. As such, it is densely innervated by peripheral neurons, including sensory and autonomic neurons, that coordinate the detection of nutrients, gut motility, and the secretion of enzymes necessary for proper digestive functions. The GI tract is also populated by a myriad of immunocytes, including both innate and adaptive immune cells, which critically mediate gut tissue homeostasis and host defense against enteric pathogens [4]. Furthermore, the gut is colonized by a community of microbes, including bacteria, viruses, and fungi, that form symbiotic relationships with the host [5]. There is now no doubt that for proper gastrointestinal function and tissue maintenance, neurons must be able to sense stimuli, including those from microbes and immune cells, to mediate both sensory and motor functions in the gut.

The neuroanatomy of the gut is composed of both sensory and autonomic neurons that reside within and outside the organ (Fig. 1).
Gut-extrinsic sensory neurons reside in the nodose/jugular vagal ganglia (VG) and the dorsal root ganglia (DRG), relaying signals from the gut to the brainstem and spinal cord, respectively. Autonomic neurons also innervate the gut, including vagal efferent parasympathetic motor neurons and sympathetic neurons that reside in the autonomic ganglia [1,6]. The gut also houses its own intrinsic and autonomous nervous system, comprised of enteric neurons, with their cell bodies in the myenteric plexus and submucosal plexus [1,6]. In response to external or internal perturbations, one or more branches of gut-innervating neurons are activated, and neuronal reflexes occur that are essential for their communication with the central nervous system (CNS) and the gut. For example, vagal sensory neurons transduce action potentials to the brainstem, where information is processed, integrated, and perceived, and then descending signals are transmitted via vagal efferent motor neurons back to the gut. Efferent motor neuron activation also results in neurotransmitter release, such as the release of acetylcholine from parasympathetic neurons or catecholamines (norepinephrine, epinephrine) from sympathetic neurons, which can act directly on immune cells to tune their function.

The nervous system and immune system have evolved a common language to communicate with each other at every step of their response to environmental insults, from inception to resolution [2]. Neurons express many receptors that are canonically expressed in immune cells, including pattern recognition receptors such as Toll-like receptors (TLRs) and inflammatory cytokine receptors, which allow immune cells to modulate neuronal activity. For example, the inflammatory cytokine IL-1β sensitizes sensory neurons to regulate pain in the context of inflammation [7]. Immune cells are also able to sense neuron-derived cues by virtue of expressing receptors for neurotransmitters and neuropeptides. For example, innate lymphoid cells express receptors for the neuropeptides calcitonin gene-related peptide (CGRP) and neuromedin U (NMU) [8-10]. This mechanism of communication between the nervous and immune systems makes evolutionary sense, as it decreases the cost of dealing with certain insults and enables the two systems to coordinate complex host responses. The microbiome also plays a critical role in regulating neuronal activation and immune development. Given that both immune cells and neurons can sense microbes directly or indirectly, the composition of the microbiome plays a critical role in neuronal programming and maturation to modulate visceral pain, gut motility, and other aspects of intestinal physiology [11-14].

Here, we focus on mechanisms underlying the neuronal regulation of inflammation and immunity in the gut. We discuss how neurons and neurotransmitters are involved in immune responses during tissue repair and host defense. Advances and challenges coexist in this fascinating field. We highlight the most recent studies that take advantage of new tools to overcome the limitations, and we argue that targeting neurons is a promising strategy for the treatment of diseases in the gut. We are unable to comprehensively cover this topic, as it is a fast-moving field. For further reading on the gut neuro-immune axis and how the microbiome regulates these interactions, we recommend these excellent reviews [15-18].
Fig. 1 Neuroanatomy of the gastrointestinal tract. The gastrointestinal tract is innervated by gut-extrinsic and gut-intrinsic sensory and autonomic neurons. Extrinsic parasympathetic, sympathetic and sensory neurons originate in the brainstem and spinal cord, projecting to the outer muscle layers and the inner mucosa of the gut. Sensory neurons are pseudounipolar neurons that have their cell bodies in the dorsal root ganglia (DRG) and nodose/jugular vagal ganglia (VG), and they transduce signals from the gut to the spinal cord and brainstem, respectively. Sympathetic neurons consist of preganglionic neurons in the spinal cord that project to the sympathetic ganglia, where they synapse with postganglionic neurons that project to the gut. Parasympathetic neurons communicate from the brainstem to the gut via the vagus nerve. Gut-intrinsic enteric neurons reside in the myenteric and submucosal plexus layers of the intestine and innervate all intestinal layers.

SENSORY NEURON REGULATION OF GUT IMMUNITY

Gut-innervating sensory neurons arise from the DRG and the nodose/jugular vagal ganglia (VG), transducing signals from the gut to the CNS to mediate the perception of various luminal stimuli, including mechanical stretch, nutrients, microbial cues and immune mediators (Fig. 1). Sensory neurons that are responsible for the unpleasant sensation of pain are called nociceptors. Nociceptor neurons express ion channels such as transient receptor potential (TRP) channels, ATP sensors such as P2X channels, and G-protein coupled receptors (GPCRs) that allow them to sense noxious or harmful stimuli in the gut, such as bacterial pathogens, damaging foods and harmful chemicals. Capsaicin, the active ingredient in chili peppers and spicy food, also activates nociceptor neurons. Therefore, nociceptors play a key role in serving as an alarm system, warning the host of the presence of external and internal insults by mediating nausea, visceral pain, and other protective reflexes [19].

An important aspect of sensory neurons is their ability to mediate local neurogenic inflammation [20]. In this process, antidromic axon reflexes between peripheral nerve terminals and calcium influx lead to the immediate release of neuropeptides stored in dense-core vesicles of nociceptive neurons into the gut. These neuropeptides include CGRP, substance P (SP), and vasoactive intestinal peptide (VIP), which can act on vascular smooth muscle cells, endothelial cells, epithelial cells and immune cells to modulate inflammation and immunity in the gut (Fig. 2). Therefore, these neurons have a nontraditional "efferent" role in releasing neuropeptides in the gut that directly regulate immune cells.
Nociceptor neurons communicate with immune cells such as mast cells in the gut to regulate pain and inflammation. Mast cell expansion is a hallmark of irritable bowel syndrome, which is usually accompanied by visceral pain. Mast cells are located close to sensory nerves in the gut mucosa, and mediators derived from mast cells, such as histamine, PGE2 and tryptase, are able to enhance the excitation of DRG neurons, leading to visceral pain [21,22]. One recent study showed that during IgE-mediated anaphylaxis, activated mast cells secrete chymase, which binds to the receptor protease-activated receptor-1 on Trpv1+ neurons; these neurons modulate the body's thermoregulatory neural network and cause hypothermia [23]. Sensory neurons can secrete substance P to facilitate mast cell activation, leading to the release of cytokines and chemokines that cause diarrhea, inflammation and altered motility during inflammatory bowel disease pathogenesis [24]. Substance P is also involved in defense responses against Salmonella infection, as mice lacking its receptor, the neurokinin-1 receptor (NK-1R), had enhanced host protection [25]. Whether mast cells contribute to this process needs to be further explored. TRPV1+ vagal neurons that innervate the liver are also capable of detecting nutritional and microbial cues that flow from the gut to the liver via the hepatic portal vein [26]. These vagal afferent neurons transduce signals from the liver to the nucleus tractus solitarius (NTS) of the brainstem, which then relays the signal to the gut via vagal parasympathetic neurons to regulate the differentiation of gut-residing regulatory T (pTreg) cells [26]. This illustrates that the liver-gut-brain axis senses and regulates gut immunity, a process which requires autonomic neurons.

Fig. 2 Sensory neuron regulation of gut immunity. In response to microbial and dietary cues, TRPV1+ Nav1.8+ DRG nociceptor neurons are activated, leading to release of the neuropeptides CGRP and substance P (SP). CGRP suppresses the differentiation of microfold (M) cells in the Peyer's patch dome, thereby limiting S. Typhimurium pathogen invasion. CGRP also promotes goblet cell mucus production through its co-receptor Ramp1, which mediates gut barrier protection against colitis. SP maintains gut microbiota homeostasis, which contributes to mucosal protection. SP also promotes mast cell degranulation, which contributes to visceral pain and inflammation.
Recent studies have shown that DRG neurons play critical roles in gut barrier protection, microbial homeostasis, and protection against colitis [27,28]. Goblet cells are the intestinal epithelial cells that produce mucus, which coats intestinal surfaces and serves as the first defensive barrier against external dangers. Nav1.8+ nociceptor neurons signal to intestinal goblet cells via the CGRP-Ramp1 signaling axis to regulate mucus production and mediate barrier protection [27]. Nav1.8+ nociceptor neuron ablation in mice leads to decreased colonic mucus layers, accompanied by a dysregulated gut microbiota [27]. These neurons secrete the neuropeptide CGRP in response to luminal stimuli, including microbial cues and capsaicin [27]. Loss of nociceptors leads to increased susceptibility to dextran sulfate sodium (DSS)-induced colitis in mice. Nociceptor activation through chemogenetics or administration of CGRP enhances mucus production and protects mice against colitis pathogenesis [27]. The protective role of CGRP is also consistent with previous studies showing that CGRP-deficient mice are susceptible to colitis [29,30]. Another independent study also showed that chemogenetic inhibition or pharmacological ablation of Trpv1+ neurons exacerbates DSS-induced colitis through dysregulation of the gut microbiota [28]. Trpv1+ neuron ablation leads to microbial dysbiosis in the mouse colon, and transplantation of the dysregulated microbiota to germ-free mice renders the host susceptible to colitis pathogenesis. Conversely, depletion of the gut microbiota ameliorates colitis pathogenesis in neuron-ablated mice [28]. In this study, substance P (SP), another neuropeptide secreted by nociceptor neurons, mediated host protection against colitis and helped maintain microbiota homeostasis [28]. These two complementary studies indicate that sensory neurons are critical for maintaining mucus production and microbial homeostasis, which protect the gut barrier.

Nociceptor neurons are also critical for host defense against enteric infections. TRPV1+ Nav1.8+ DRG neurons protect against infections caused by the gram-negative bacterial pathogen Salmonella typhimurium in mice [31]. S. typhimurium penetrates the gut barrier through microfold (M) cells, specialized epithelial cells in the follicle-associated epithelium (FAE) of Peyer's patches (PPs) in the small-intestinal ileum [31]. Nociceptor neurons respond to S. typhimurium infection by releasing CGRP, which reduces M cell numbers to limit S. typhimurium infection. Nociceptors also regulate the level of the gut commensal microbe segmented filamentous bacteria (SFB), which colonizes the ileal villi and PP FAE to promote host defense against infection [31]. TRPV1+ neurons and other neuropeptides, including SP, VIP, and PACAP, also mediate host defense against pathogenic infections induced by Citrobacter rodentium and enterotoxigenic Escherichia coli [32].
Sensory neuron activity is also dictated by multiple other cell types in the gut. Enteroendocrine cells (EECs) are specialized gut epithelial cells that form synaptic-like structures with enteric and sensory neurons. In response to microbial products, chemical irritants and dietary stimuli, EECs secrete the neurotransmitter serotonin, which binds to the receptor 5HT3R expressed on neurons to modulate afferent mechanosensory functions [33]. EECs also mediate food-induced defensive responses such as nausea and retching through vagal sensory neurons and transmit toxin-induced signals to Tac1+ neurons in the dorsal vagal complex (DVC) of the brainstem; blocking 5HT3R signaling blocks toxin-induced nausea-like behaviors [34].

Therefore, sensory neurons play pleiotropic roles in gut barrier protection under both homeostasis and host defense by signaling to immune and epithelial cells (Fig. 2). Open questions remain regarding the detailed cellular and molecular mechanisms that lead to activation of these neurons and their role in protection and host defense against other gut inflammatory triggers such as viral infection, helminth infection, allergic immunity, and autoimmune disease.

AUTONOMIC NEURONAL REGULATION OF GUT IMMUNITY

Autonomic neurons that regulate involuntary physiologic processes in the gut, including digestion and motility, also play a critical role in controlling gut immunity. Autonomic neurons that innervate the gastrointestinal system include enteric neurons, sympathetic neurons, and parasympathetic neurons (Fig. 1). While sympathetic neurons originate from the spinal cord and sympathetic ganglia, parasympathetic neurons originate from the brainstem. Sympathetic neurons mediate the "fight or flight" stress response and execute inhibitory functions, including slowing gut motility and secretion. Parasympathetic neurons mediate excitatory functions, including promotion of gut motility, digestion, and secretion. In contrast, gut-intrinsic enteric neurons have their cell bodies and axons fully within the gut. Enteric neurons are organized into ganglionated networks in the myenteric plexus and submucosal plexus, sometimes called the "second brain", that regulate gut physiology. Sympathetic and parasympathetic neurons also form synaptic connections with enteric neurons to orchestrate their responses. Each of these distinct neuronal subsets is capable of signaling to immune cells by releasing neurotransmitters into various layers of the gut, which can act on innate and adaptive immune cells.

ENTERIC NEURON REGULATION OF GUT IMMUNITY

Enteric neurons are capable of sensing various gut luminal stimuli and of crosstalk with immune cells and epithelial cells in both healthy and disease states via the production of neurotransmitters, neuropeptides, and cytokines (Fig. 3). Enteric neurons in the myenteric plexus crosstalk with resident muscularis macrophages (MMs) to mediate peristalsis, a fundamental aspect of digestion. Enteric neurons sense commensal microbiome-derived cues to secrete CSF1 (mCSF), whose receptor CSF1R is highly expressed on MMs [35]. CSF1R signaling is indispensable for the development of MMs, which in turn produce BMP that acts on enteric neurons to modulate intestinal peristalsis [35]. This bidirectional crosstalk coordinates the contraction and relaxation of smooth muscle cells to regulate intestinal peristalsis in response to microbial and nutritional cues.
Enteric neurons also express cytokines that can regulate both immune and epithelial barrier function as part of host defense. IL-6, a typical inflammatory cytokine produced by immune cells such as macrophages, is also expressed in enteric neurons [36]. Enteric neuron-derived IL-6 inhibits the differentiation of induced RORγ+ regulatory T cells (iTregs) in the colon. Neuron-specific IL-6 depletion leads to increased iTreg numbers [36]. The gut microbiome helps shape this enteric neuron-Treg axis, as certain microbes stimulate the loss of enteric neuron networks upon colonization in the gut, which leads to increased iTreg numbers [36]. Enteric neurons express IL-18, which has been found to signal to goblet cells to protect against enteric pathogens. IL-18, which can be produced by immune cells and epithelial cells, is indispensable for antimicrobial peptide (AMP) production, which is required to help maintain microbiota homeostasis and combat pathogen infections [37]. A study showed that enteric neuron-derived IL-18 plays a nonredundant role in regulating AMP production from goblet cells; enteric neuron-specific IL-18 deficiency results in microbial dysbiosis and susceptibility to S. typhimurium infection [38]. It has not been determined how IL-18 is activated and released from enteric neurons.

Enteric neurons also play a major role in coordinating the function of innate lymphoid cells (ILCs) in the gut. ILCs are early-responding innate lymphocytes that coordinate downstream adaptive immunity. Enteric neurons are in close proximity to ILCs, laying a cellular basis for these neurons to regulate ILC functions [39,40]. ILC2s highly express NMUR1, the receptor for the neuropeptide neuromedin U (NMU). A subset of enteric sensory neurons expresses NMU, which is released under allergic conditions. Enteric neurons secrete NMU to boost ILC2 activation and type 2 cytokine production during N. brasiliensis helminth infection, leading to improved host protection and worm clearance [39]. Both the alarmin cytokine IL-33 and products from N. brasiliensis can trigger NMU production from enteric neuron organoids in a MyD88-dependent manner in vitro, suggesting that these neurons also respond in a transcriptionally plastic manner to type 2 immune triggers [40]. In contrast to NMU, the neuropeptide CGRP, which is also expressed by enteric neurons and extrinsic sensory neurons, antagonizes ILC2 proliferation and IL-13 expression in helminth infection and intestinal type 2 immunity [8,41]. ILC2s themselves also upregulate CGRP during infection, which mediates autoinhibition of IL-13 expression [41]. Given the contrasting roles of NMU and CGRP in type 2 immune host defense, it remains unknown how the release and expression of these neuropeptides are coordinated during infection.
Enteric neurons also play a key role in regulating the function of type 3 ILCs (ILC3s) in the gut lamina propria. ILC3s express high levels of VIPR2, a receptor for the neuropeptide VIP. A subset of enteric neurons (as well as vagal sensory neurons) expresses high levels of VIP, and these neurons have been found to regulate ILC3 function at homeostasis and during host defense [42,43]. In one study, food consumption triggered neuronal production of VIP in a manner dependent on circadian rhythms; VIP then inhibited ILC3-mediated production of IL-22 and abrogated intestinal expression of AMPs [42]. Chemogenetic activation of VIP+ neurons results in a decreased proportion of IL-22+ ILC3s and renders the host susceptible to oral Citrobacter rodentium infection [42]. A contrasting study showed that VIP promotes ILC3 expansion and IL-22 production, which is abrogated in VIPR2-deficient mice [43]. In this study, VIPR2-deficient mice displayed constitutively fewer IL-22+ ILC3s, along with enhanced susceptibility to DSS-induced colitis [43]. Another study showed that VIP is also able to recruit ILC3s into the intestine and that either VIP or VIPR2 deficiency impairs ILC3 recruitment and IL-22 production and renders mice susceptible to Citrobacter rodentium infection [44]. Interestingly, one recent study revealed that VIP is able to potentiate both ILC3 and ILC2 activation by synergizing with the cytokines IL-23 or IL-33, thereby boosting ILC-mediated host immunity against Citrobacter rodentium or Trichuris muris infection, respectively [45]. These studies used somewhat different approaches for the blockade or inhibition of VIP+ neuron activity, but it has not been determined why they reached distinct results. Furthermore, the signaling mechanisms within ILC3s downstream of VIPR2 stimulation that lead to cytokine regulation and transcriptional changes are worthy of future study.

SYMPATHETIC NEURON REGULATION OF GUT IMMUNITY

Sympathetic neurons are efferent neurons of the gut-brain axis and the major branch of the stress signaling response. Their main effector neurotransmitters are catecholamines, which bind to adrenergic receptors on target cells. Sympathetic neuron activity is modulated by luminal microbial cues in the gut. Loss of the microbiome leads to increased expression of cFos, a marker of neuronal activation, in sympathetic neurons, suggesting that the endogenous flora plays a regulatory role in suppressing sympathetic neuron activation [46]. Accompanying the increased sympathetic neuron activation, microbiota-depleted mice display slower gut motility, which can be partially rescued by blockade of catecholamine release [46]. Sympathetic nerve fibers spread along the blood vessels in the colon, and local neuron activation also reduces immune cell extravasation into the colon [47]. Endothelial cells express MAdCAM-1, a cell adhesion molecule responsible for leukocyte migration. Optogenetic activation of sympathetic neurons decreases MAdCAM-1 expression on endothelial cells, which mitigates DSS-induced colitis [47]. Blockade of beta-adrenergic receptor signaling reverses the decrease in MAdCAM-1 expression induced by sympathetic neuron activation, suggesting that norepinephrine plays an important role in regulating immune cell migration [47].
In addition to regulating gut motility and immune cell migration, sympathetic neurons modulate intestinal macrophage phenotypes, which differ between muscularis macrophages (MMs) and lamina propria macrophages, with the former having a tissue-protective and wound-healing "M2-like" macrophage signature and the latter having an "M1-like" macrophage signature [48]. MMs specifically express the β2 adrenergic receptor (β2AR), and the main source of the norepinephrine (NE) that drives MM polarization is gut-extrinsic sympathetic neurons [48]. Salmonella typhimurium (SpiB mutant) infection activates sympathetic neurons, resulting in NE release in the myenteric plexus; this signals to MMs to polarize their transcriptional profile in vivo in a β2AR-dependent manner [48]. S. typhimurium infection also leads to the loss of intrinsic enteric-associated neurons (iEANs), resulting in decreased gut motility [49]. The sympathetic neuron-MM axis helps protect these neurons. It was found that MMs upregulate arginase 1 (Arg1), which helps preserve iEAN cell health during enteric infection through polyamine synthesis [49]. β2AR ablation in MMs exacerbates S. typhimurium SpiB infection-induced neuronal loss, while sympathetic neuron activation protects the gut from pathogen-induced iEAN damage [49]. This sympathetic neuron-MM-mediated neuroprotective axis, connected through MM-intrinsic β2AR signaling, is also important in the setting of repeated infections by multiple enteric pathogens, including Salmonella, Yersinia, and helminths [50]. In this study, a primary pathogen infection induced β2AR signaling activation in MMs, which protected the gut from subsequent infection by another, unrelated pathogen [50].

Sympathetic neurons also signal to ILC2s in the small intestine, where these cells are closely located in the villi and submucosa. Similar to macrophages, ILC2s also highly express the receptor β2AR [51]. β2AR signaling was found to potently inhibit the proliferation of ILCs. β2AR deficiency leads to increased numbers of IL-5+ and IL-13+ ILC2s and protects the host from infection with the gastrointestinal helminth Nippostrongylus brasiliensis (N. brasiliensis), while β2AR agonist treatment inhibits ILC2-mediated type 2 immune responses and compromises the host's protection against helminths [51].

These studies together show that sympathetic neurons and local stress signaling through catecholamines can powerfully regulate gut homeostasis and immunity against pathogens (Fig. 4). Given the wide expression of adrenergic receptors on immune cells and endothelial cells, it remains to be determined how these different stress-signaling immunoregulatory effects are coordinated in each context.

Fig. 4 Sympathetic neuron regulation of gut immunity. Sympathetic neuron activation leads to the release of catecholamines, such as norepinephrine (NE), which regulates immunity. A: NE inhibits the expression of MAdCAM-1 on blood vessel endothelial cells in a βAR-dependent manner, limiting immune cell extravasation during colitis. B: NE enhances arginase 1 expression and polyamine synthesis in macrophages in a β2AR-dependent manner, preventing enteric neuron cell death. C: NE limits IL-5 and IL-13 production by β2AR-expressing ILC2s, which weakens host defense against N. brasiliensis infection.

PARASYMPATHETIC NEURON REGULATION OF GUT IMMUNITY

Parasympathetic neurons reside in the brainstem and innervate peripheral organs via the vagus nerve. The vagus nerve may directly innervate the gut or signal through intermediary ganglia that in turn innervate the gut. The main neurotransmitter produced by these neurons is acetylcholine (ACh). It is increasingly clear that vagal efferent parasympathetic neurons have major immunomodulatory effects (Fig. 5). The "cholinergic anti-inflammatory reflex" was first discovered as a neural reflex in which the brain modulates systemic inflammation in response to endotoxic shock [52]. After sensing peripheral inflammation, vagal efferents convey signals to choline acetyltransferase (ChAT)-expressing T cells in the spleen, which relay signals to macrophages via ACh. ACh in turn activates α7 nicotinic acetylcholine receptor signaling to suppress the production of TNFα from macrophages [53].
This axis is also relevant to gut inflammation. ChAT+ T cells are recruited into the colon during C. rodentium infection, and T-cell-specific ChAT deficiency renders the host more susceptible to C. rodentium infection, accompanied by increased expression of TNFα, IL-1β, and IL-6, indicating that T-cell-derived ACh mediates antimicrobial gut defenses [54]. Vagal parasympathetic neurons also modulate gut immunity in a T-cell-independent manner, acting instead through the enteric nervous system and macrophages [55]. Intestinal manipulation is often accompanied by delayed gut motility and proinflammatory cytokine production. Vagal neuron activation reduces the inflammation caused by intestinal manipulation through crosstalk with myenteric enteric neurons, which are in close contact with, and inhibit, MMs that express the α7 nicotinic acetylcholine receptor [55].

Vagal-derived acetylcholine can also regulate T-cell function during adaptive immune responses to helminth and enteric pathogen infections. The M3 muscarinic ACh receptor (M3R) has been found to mediate host defense against N. brasiliensis and S. typhimurium intestinal infection. M3R-deficient mice are more susceptible to both infections and display compromised T-cell activation and cytokine release during infection, suggesting a T-cell-dependent role for ACh in protection against pathogens [56]. ACh and M3R agonists promote IL-13 and IFNγ production from T cells in an M3R-dependent manner [56].

ACh also plays a major role in regulating gut epithelial barrier protection. ACh is the most well-known neurotransmitter regulating goblet cell mucus secretion [57]. Goblet cell-associated antigen passages (GAPs) deliver luminal antigens across the epithelium to the underlying antigen-presenting cells (APCs) in the lamina propria, a process that is also modulated by ACh [58]. Recent studies have shown that goblet cells express muscarinic ACh receptor 4, which senses ACh to promote goblet cell mucus secretion and subsequent GAP formation and is critical for the induction and maintenance of oral tolerance to luminal antigens [59,60]. Therefore, parasympathetic neurons are critical for maintaining epithelial barrier integrity and immune homeostasis in the gut.

Of note, one of the challenges in studying how vagal parasympathetic neurons regulate gut immunity is the lack of more specific tools to target only these neurons in the brainstem, as these motor efferents are molecularly diverse. The question is also difficult to study because cholinergic neurons reside in both the brainstem and the enteric nervous system. Future work will require investigating the role of distinct vagal efferent subsets in regulating gut immunity.
BRAIN-TO-GUT REGULATION OF IMMUNITY

A major question in this field is how the brain senses inflammation and crosstalks with the gut from a top-down perspective. Recent work suggests that there could be specific brain regions that respond to colitis and that specific neural circuits provide feedback to the gut to regulate inflammation. Using cFOS-TRAP-based techniques, a recent study showed that intestinal inflammation induced by DSS treatment leads to neuron activation in the thalamus, paraventricular hypothalamic (PVH) nuclei, central amygdala (CeA), anterior cingulate cortex (ACC), supplementary somatosensory cortex, and insular cortex [61]. Chemogenetic reactivation of colitis-responsive insular cortex neurons leads to both leukocyte recruitment and activation in the colon, suggesting that the gut immune status or signature can be retrieved by neuron reactivation in the brain [61]. Conversely, inhibition of these neurons ameliorates DSS-induced inflammation and colitis in the colon [61], suggesting a potential avenue of targeting brain neurons for immune intervention in the gut. The mechanism and neural circuits that connect the insular cortex to the gut to regulate the gut immune response need to be further explored. How neuronal activation in other brain regions affects peripheral immune states in the gut is still relatively unknown. An elegant study related to the spleen showed that neurons in the central nucleus of the amygdala (CeA) and the paraventricular nucleus (PVN) are connected to the splenic nerve, facilitating plasma cell differentiation in the spleen [62]. It would be interesting to know whether similar pathways connect these regions to the GI tract.

Another way the brain affects gut immunity is via neuroendocrine signals, in particular the hypothalamic-pituitary-adrenal (HPA) axis, which can be activated under conditions of stress and inflammation [63]. In this case, hypothalamic neurons secrete corticotropin-releasing factor (CRF), which induces ACTH release from the pituitary gland; ACTH in turn induces the adrenal gland to produce immunomodulatory hormones, including catecholamines and cortisol, which can circulate to the gut to induce a feedback loop. Stress signaling can also be activated by gut microbiota changes. Gut microbiome deprivation is associated with social deficits and increased release of the stress hormone corticosterone, and blocking HPA axis activation corrects the social deficits induced by microbiota depletion [64]. In the context of vascular disease, stress-induced HPA axis activation promotes glucocorticoid hormone release, which causes enhanced gut permeability accompanied by increased IL-17 production from Th17 cells [65]. IL-17 in turn amplifies stress-induced inflammation by promoting neutrophil expansion [65]. In a chronic stress model induced by social defeat in mice, intestinal dysbiosis specifically leads to an increase in colonic dectin-1+ γδ T cells, which enhances the differentiation and accumulation of IL-17-producing γδ T cells in the colon and meninges and concomitantly results in social avoidance [66]. These studies demonstrate the critical role of stress-induced IL-17 production in mediating pathological and psychological outcomes.
Neurological disorders, including autism spectrum disorder (ASD), have been shown to be accompanied by GI dysfunction [67], suggesting an important role for the brain in regulating gut immunity. The mouse maternal immune activation (MIA) model is commonly used to mimic ASD. MIA offspring that display ASD-like behavioral impairment have deficient gut barrier function associated with abnormal expression of tight junction molecules and intestinal dysbiosis [67]. However, the cellular basis underlying this gut barrier regulation by the brain is unclear. It would be interesting to determine, in conditions such as ASD, whether dysregulated brain-to-gut neural circuits regulate gut immunity and barrier function.

Therefore, future work is needed to determine how brain activity and signals can be transduced to the GI tract to affect immunity. It also remains to be determined whether GI dysfunction is the result or the cause of neurological disorders.

FUTURE DIRECTIONS AND OUTLOOK

Neuroimmunology is a fast-growing field. Recent breakthroughs in how different neuronal subsets regulate immune responses in the gut deepen our understanding of intestinal immunity under physiological and pathological conditions. In addition to passively accepting and responding to harmful insults, the immune system signals to the nervous system to initiate defensive responses. Meanwhile, under the anticipation and perception of potential threats, the nervous system actively modulates the immune response in the gut. Coordination between the nervous system and the immune system allows the host to deal properly with complex stimuli and an ever-changing environment. The cellular neural basis of how distinct neuron types regulate gut immunity and the molecular mechanisms by which different neurotransmitters modulate immune cells are major questions that need to be characterized in the future.

An important consideration is that neurons and immune cells integrate signals from a complex tissue microenvironment. Therefore, analysis of neuroimmune signaling will require analysis of other cell types. Enteric glial cells, which support enteric neuron development, exhibit immunoregulatory functions by signaling to both immune cells and epithelial cells in the gut [68-70]. Endothelial cells, fibroblasts, and other mesenchymal cells support intestinal structure and function and may also mediate neuroimmune crosstalk. Spatial transcriptomics is one approach that can provide an unbiased picture of the cellular composition of a tissue at the transcriptional level [71], and applying it in neuroimmunology could give a better picture of how the heterogeneous cellular atlas is regulated by neurons in the gut under physiological and disease conditions.

The gut microbiota is also a critical arm that regulates both neuronal and immune activation in the gut-brain axis. The use of gnotobiotic mice together with bacterial genetic manipulation remains the most powerful approach for uncoupling the contribution of the microbiota to the gut-brain axis. Multi-omic studies combining the transcriptome, proteome, metagenome and microbiome will improve our understanding of how neurons shape the intestinal ecosystem [72].
Recent technical advancements have promoted the identification and manipulation of neurons that regulate gut immunity. Optogenetics and chemogenetics allow neuronal activity to be manipulated in a temporally and spatially controlled manner [47, 73, 74], providing an elegant approach to dissect the role of neuronal types in gut immunity. Single-cell sequencing of neurons shows that gut-innervating neurons have diverse cellular compositions within each tissue layer and stimulus modality [75-77]. Studies using neuronal tracing from the gut to the brain and other organs, combined with imaging, are starting to identify the neural circuits involved in neuroimmune communication. Future research will require targeting specific neuron subsets to elucidate their roles in regulating the intestinal immune response, which will also provide the cellular basis for potential therapeutic interventions.

The immunoregulatory role of the brain in gut immunity is an open area of exploration. This includes mapping how emotional and cognitive brain regions could signal to the body to induce intestinal dysfunction. Communication between the brain and the gut is bidirectional and mediated by neural, endocrine, immune, and humoral links [78]. One recent study describes the phenomenon that neuron activation in the insular cortex activates colitis-related immune responses in the gut, but the cellular and molecular basis of this regulation still needs to be determined. Comprehensive studies dissecting the contributions of the distinct factors involved in immune regulation from the brain to the gut under different disease contexts will facilitate our understanding of the gut-brain axis.

These findings also have therapeutic implications for targeting neuroimmune interactions in the treatment of inflammatory disease. Since the discovery that electrical stimulation of vagal efferent neurons attenuates endotoxin-induced inflammation in vivo [52], bioelectronic devices have been used to treat or dampen immune responses in patients with kidney injury, lung injury, and spinal cord injury [79]. The vagus nerve also plays a counter-inflammatory role during colitis in a macrophage-dependent manner by inhibiting proinflammatory cytokine expression while stimulating anti-inflammatory cytokine production, indicating that activating vagal neurons could be a potential strategy for inflammatory bowel disease (IBD) treatment [80-82]. Clinical trials have been undertaken to evaluate the therapeutic effects of noninvasive vagus nerve stimulation for the treatment of Crohn's disease. Therapeutic targeting of neurotransmitter and neuropeptide receptor signaling could be another approach to applying neuroimmune principles to treat disease. Medications such as beta-adrenergic receptor antagonists (beta-blockers), developed to treat hypertension, angina pectoris, and cardiac arrhythmias [83], or CGRP receptor antagonists used to treat migraine [84], could be repurposed to address gastrointestinal dysfunction by virtue of their ability to modulate gut immunity. We also expect to see more translational studies targeting neurons as therapeutic strategies with the development of noninvasive tools such as engineered adenovirus and electroacupuncture [85-87].
The advances and breakthroughs in gut neuroimmune research during the last several years have demonstrated the potential of therapeutic strategies that manipulate neuroimmune interactions for clinical disease treatment. There are still challenges that need to be addressed before we fully understand these interactions. The neuronal connections and neurotransmitters that mediate neuroimmune crosstalk need to be further characterized, and the role of brain networks in the regulation of peripheral immunity remains elusive. These are the fundamental questions in this realm, and we expect to witness more breakthroughs on them in the coming years.

Fig. 3 Enteric neuron regulation of gut immunity. Enteric neurons are heterogeneous and can release cytokines (e.g., IL-18 and IL-6) and neuropeptides (e.g., neuromedin U (NMU), calcitonin gene-related peptide (CGRP) and vasoactive intestinal peptide (VIP)) to regulate immune function. A Neuronal IL-18 regulates goblet cell expression of antimicrobial peptides, which mediates host protection against intestinal S. Typhimurium infection. B Neuronal IL-6 inhibits the differentiation of RORγ+ regulatory T cells (iTregs) in the colon. C NMU promotes NMUR-dependent ILC2 production of IL-5 and IL-13, which promotes host defense against N. brasiliensis infection. D CGRP downregulates IL-13 production by ILC2s, ameliorating OVA-induced allergy. E VIP suppresses IL-22 production by ILC3s in a VIPR2-dependent manner, compromising host defense against C. rodentium infection. F VIP boosts IL-22 production by VIPR2-expressing ILC3s, protecting the host from DSS colitis.

Fig. 5 Parasympathetic neuronal regulation of gut immunity. Parasympathetic neurons release the neurotransmitter acetylcholine (ACh), which regulates immunity. A ACh enhances T-cell production of IL-13 and IFNγ, facilitating host defense against intestinal Salmonella and N. brasiliensis infections; B ACh promotes luminal antigen sampling through goblet cell-associated antigen passages (GAPs), resulting in antigen presentation by tissue CD103+ DCs.
The Alexander-Orbach conjecture holds in high dimensions

We examine the incipient infinite cluster (IIC) of critical percolation in regimes where mean-field behavior has been established, namely when the dimension d is large enough or when d > 6 and the lattice is sufficiently spread out. We find that random walk on the IIC exhibits anomalous diffusion with the spectral dimension d_s = 4/3, that is, p_t(x,x) = t^{-2/3+o(1)}. This establishes a conjecture of Alexander and Orbach. En route we calculate the one-arm exponent with respect to the intrinsic distance.

Introduction

We study the behavior of the simple random walk on the incipient infinite cluster (IIC). The spectral dimension of an infinite connected graph G is defined, when the limit exists, by d_s = -2 lim_{n→∞} log p_n(x,x) / log n, where x ∈ G and p_n(x,x) is the return probability of the simple random walk on G after n steps (note that if the limit exists, then it is independent of the choice of x). Alexander and Orbach [4] conjectured that d_s = 4/3 for the IIC in all dimensions d > 1, but their basis for conjecturing this in low dimensions was mostly rough correspondence with numerical results, and it is now believed that the conjecture is false when d < 6 [33, 7.4]. In this paper we establish their conjecture in high dimensions. Throughout, τ_r denotes the hitting time of distance r from the origin (the expectation E is only over the randomness of the walk) and W_n is the range of the random walk after n steps.

Our main contribution is the analysis of the geometry of the IIC. The IIC admits a fractal geometry which is dramatically different from that of the infinite component of supercritical percolation. The latter behaves in many ways as Z^d after a "renormalization", i.e. ignoring the local structure [23] (see also [22] for a comprehensive exposition). In particular, the random walk on the supercritical infinite cluster satisfies an invariance principle, the spectral dimension is d_s = d, and other Z^d-like properties hold, see [19, 13, 5, 52, 15, 43]. Our analysis establishes that balls of radius r in the IIC typically have volume of order r^2 and that the effective resistance between the center of the ball and its boundary is of order r. These facts alone suffice to control the behavior of the random walk and yield Theorem 1.1, as shown by Barlow, Járai, Kumagai and Slade [9]. The key ingredient of our proofs is establishing that the critical exponents dealing with the intrinsic metric (i.e., the metric of the percolated graph) attain their mean-field values. It was demonstrated first in the work of Nachmias and Peres [44] that such exponents yield statements analogous to the Alexander-Orbach conjecture in the finite graph setting. In particular, in [44], the diameter and mixing time of critical clusters in mean-field percolation on finite graphs were analyzed. In different settings the Alexander-Orbach conjecture was proved by various authors. When the underlying graph is an infinite regular tree, this was proved by Kesten [37] and Barlow and Kumagai [10], and in the setting of oriented spread-out percolation with d > 6 it was proved recently in the aforementioned paper [9].

1.1. Anomalous diffusion. The fact that d_s = 4/3 is best contrasted against another natural notion of dimension, the volume growth exponent d_f, defined for any infinite connected graph G by d_f = lim_{r→∞} log |B_G(x,r)| / log r (when the limit exists), where B_G(x,r) is the ball in the shortest-path metric with center x and radius r. For many "regular" graphs one has d_s = d_f (for example, a celebrated result [24] shows that d_f exists and is an integer for any Cayley graph of polynomial growth, and Theorem 5.1 in [29] then shows that d_f = d_s for such graphs), and there are other rich families which satisfy this.
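As an aside, the three notions of dimension appearing here are tied together by a standard scaling heuristic (background, not a claim of this paper): if balls of radius r in the graph contain about r^{d_f} vertices and the walk needs about r^{d_w} steps to exit them, then p_t(x,x) ≈ t^{-d_f/d_w}, so that

    d_s = 2 d_f / d_w.

For the high-dimensional IIC the results described below give d_f = 2 (balls of volume of order r^2) and d_w = 3 (exit times of order r^3), whence d_s = 2·2/3 = 4/3.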
To understand the discrepancy, we need to understand anomalous diffusion. Anomalous diffusion is the phenomenon that on many natural fractals, or more precisely on graphical analogs of fractals, the random walk is significantly slower than in Euclidean space. In particular, while we expect a random walker on Z^d to be at distance t^{1/2} at time t, on a fractal we find it at distance t^{1/β} where β ≥ 2, and often the inequality is strict (β is sometimes called the "walk dimension" and denoted by d_w). In fact, we now know that any value of β between 2 and d_f + 1 may appear [6]. This phenomenon was first observed by physicists in the context of disordered media [4, 48], i.e. in our context. Correspondingly, the first mathematical results are Kesten's [37], who analyzed random walk on the IIC in Z^2 and on an infinite regular tree. For the IIC on a tree Kesten's results are complete and he shows that β = 3. On the IIC in two dimensions, Kesten showed that the expected distance of the random walk from the origin after t steps is at most t^{1/2-ε} for some ε > 0, hence β > 2 if it exists. Despite the great progress seen since on critical two-dimensional percolation, the exact value of β in this case is still unknown.

The attention of the mathematical community then shifted to regular fractals. The first to be analyzed were finitely ramified fractals, namely fractals that can be disconnected by the removal of a constant number of points at any scale. For example, arbitrarily large portions of the Sierpinski gasket (figure 1) can be disconnected by the removal of 3 points. In these cases β can be calculated explicitly; for example, for the Sierpinski gasket β = log 5 / log 2 [11]. A significant step forward was made by Barlow and Bass [7, 8], who showed that on any generalized Sierpinski carpet β is well defined and the random walk exhibits many regularity properties analogous to those of random walk on Euclidean space.

There is a significant difference between low and high dimensions. Let us therefore spend a little effort on the difference between "low" and "high" dimensions in percolation. Many models in mathematical physics exhibit an upper critical dimension, and for percolation this happens at d = 6. The picture, as developed by physicists, is that for d > 6 the space is so vast that different pieces of the critical cluster no longer interact. The effect of this is that the geometry "trivializes" and for most questions the answer would be as for percolation on an infinite regular tree. This is also known as mean-field behavior. Aspects of this picture were confirmed rigorously, but with one important caveat. The technique used, the lace expansion, is perturbative and hence requires one of the following to hold:

• The dimension d should be large enough (d ≥ 19 seems to be the limit of current techniques).
• The dimension should satisfy d > 6 but the lattice needs to be sufficiently spread out. For example, one may take some L sufficiently large and put an edge between every x, y ∈ Z^d with |x − y| ≤ L.

Credit for these remarkable results goes to Hara and Slade [27]. For d < 6 it has been proved that percolation cannot attain mean-field behavior [18]. Specifically, Hara and Slade proved that for these lattices the triangle condition holds. The triangle condition, suggested as an indicator of mean-field behavior by Aizenman and Newman, requires a certain "triangle diagram" at criticality to be finite; here x ↔ y denotes the event that x is connected to y by an open path (for simplicity we assume that the set of vertices of the lattice is always Z^d and denote the set of edges by E(Z^d)).
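For reference, the triangle diagram just mentioned has the standard Aizenman-Newman form; in the notation of this paper, the condition labeled (1.1) below is the requirement that

    ∇(p_c) := Σ_{x,y ∈ Z^d} P_{p_c}(0 ↔ x) P_{p_c}(x ↔ y) P_{p_c}(y ↔ 0) < ∞.   (1.1)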
To see how to analyze the behavior of critical and near-critical percolation using the triangle condition, see [3, 12, 45]. A slightly different approach to mean-field behavior is via the two-point function, i.e. the probability that x is connected to y by an open path. The two-point function estimate states that, for all x, y ∈ Z^d,

    P_{p_c}(x ↔ y) ≈ |x − y|^{2−d},   (1.2)

where ≈ means that the ratio of the quantities on the left and on the right is bounded between two constants depending only on d and L. Here and below we abuse notation by considering that 0^{2−d} = 1. A simple calculation shows that, when d > 6, (1.2) implies (1.1); hence the assumption on the two-point function is stronger. It was obtained using the lace expansion by Hara, van der Hofstad and Slade [26] for the spread-out model and d > 6, and by Hara [25] for the nearest-neighbor model with d ≥ 19 (in fact, they obtained the right asymptotic behavior of (1.2), including the constant). At present there is no known lattice in R^d for which the triangle condition is known and the two-point function is unknown (or false). Nevertheless, we believe that there is value in noting which results require the (formally) stronger two-point function estimate (1.2) and which require only the triangle condition. Reasons to keep this distinction come from the fields of long-range percolation [12, 30] and of percolation on general transitive graphs [50, 51]. In both cases the triangle condition makes more sense and was proved in many interesting examples. We will not dwell on these topics in this paper, but in general we believe that any result we prove using only the triangle condition should hold (perhaps with minor modifications) for long-range percolation and for percolation on unimodular transitive graphs.

Returning to the Alexander-Orbach conjecture, our aim is to study random walk on a typical large cluster. The term incipient infinite cluster was coined by Kesten, borrowing a vaguely defined term from the physics literature. His approach in [36] for the two-dimensional case is to fix some integer n, to condition on the event 0 ↔ ∂[−n, n]^2, and then take n → ∞. In this paper we take the approach suggested by van der Hofstad and Járai [32], which is to fix some arbitrary far point x, condition on the event 0 ↔ x, and then take x → ∞. For both approaches one still needs to show that the limit exists. This was done in [36] for the case d = 2, in [32] for large d as above, and in [31] for the oriented percolation model with d > 4. Formally, we endow the space of all configurations {0, 1}^{E(Z^d)} with the product topology (recall that E(Z^d) is the set of edges of our lattice), consider the conditional measures given 0 ↔ x, and define the IIC measure as the weak limit as x → ∞. Put differently, for any cylinder event F (i.e., an event that can be determined by observing the status of a finite number of edges) we have

    P_IIC(F) = lim_{x→∞} P_{p_c}(F | 0 ↔ x),

where p_c = p_c(Z^d) is the percolation critical probability. The convergence of the limit on the right-hand side, independently of how x → ∞, is proved in [32] for d large using the lace expansion. We note in passing that the existence of the limit is not relevant for our arguments. Indeed, even if the limit did not exist, subsequential limits would exist due to compactness, and our results would hold for each one. Thus the conclusions of Theorem 1.1 hold for any lattice in R^d with d > 6 for which the two-point function estimate (1.2) holds, and for any IIC measure (i.e. any subsequential limit as above).

1.3. Intrinsic metric critical exponents.
The key ingredient in our proofs is showing that the intrinsic metric critical exponents defined below assume their mean-field values in high dimensions. Let G be a graph and write G p for the result of p-bond percolation on it. Write It will be occasionally important to take some G ⊂ E(Z d ) and sample p c (Z d )-percolation on G. Be careful not to confuse the notation B(x, r; G) which refers to a random ball in the percolation on G with B G (x, r) which is just the (deterministic) ball in G. We usually take G to be Z d , and in this case it would be suppressed from the notation. Our most frequent notation is B(x, r) which stands for B(x, r; Z d ). Define now the event H(r; G) = ∂B(0, r; G) = ∅ , and finally define Note again that we define Γ by the maximum over all subgraphs of Z d , but each one is "tested" with the p c of Z d rather than with its own p c . The corresponding lower bounds to Theorem 1.2 are much easier to prove and are not needed for the proof of Theorem 1.1. We state them for the sake of completeness. The extrinsic metric corresponds to the shortest-path metric in Z d while the intrinsic metric corresponds to the (random) shortest-path metric in the percolated graph Z d p . The classical one-arm critical exponent ρ > 0 describes the power law decay of the probability that the origin is connected to sphere of radius r in the extrinsic metric, that is where |x| denotes the usual Euclidean norm. This exponent takes the value 48/5 in the two dimensional triangular grid, as shown by Lawler, Schramm and Werner [41] and Smirnov [53]. In the case of an infinite regular tree we have ρ = 1 by a Theorem of Kolmogorov [38] (here the critical probability is p c = 1 ℓ−1 , where ℓ is the vertex degree of the tree). In high dimensions it was conjectured that ρ = 1/2 (see [49] and the upcoming paper [40] for a proof) -a surprising belief at first, since we expect critical exponents in high dimensions to take the same value they do on a tree. Measuring distance with respect to the intrinsic metric offers a simple explanation of this discrepancy. Indeed, as the extrinsic and intrinsic metrics on the tree are the same, we have that on a tree P(H(r)) ≈ r −1 and by Theorem 1.2 above we learn that this is the same order in the high-dimension lattices. Similar results exist for critical Erdős-Rényi random graphs [44]. 1.4. About the proof. From the point of view of analysis of fractals, the IIC is one of the simplest cases to handle because of its tree-like structure. Indeed, the main difficulty is the proof of Theorem 1.2. Once that is proved the proof proceeds roughly as follows. Write B IIC (0, r) and ∂B IIC (0, r) for the corresponding shortestpath metric balls in the IIC. Firstly, since Γ(r) ≤ r −1 and E|B(0, r)| ≈ r we learn that |B IIC (0, r)| ≈ r 2 . Secondly, the intrinsic metric exponents show that there are ≥ cr "approximately pivotal" edges -λ-lanes in the language of [44] -between 0 and ∂B(0, r) (Lemma 2.6) and therefore the electric resistance R eff between 0 and ∂B IIC (0, r) is ≈ r. We conclude that B IIC (0, r) is a graph on approximately r 2 vertices with effective resistance between 0 and ∂B IIC (0, r) of order r -the same structure a critical branching process conditioned to survive to level r has with high probability. Now, there are many ways to connect electric resistance and volume estimates to hitting times, and in fact we simply quote a perfectly-tailored-for-our-needs result from [9] which concludes the proof of the theorem. 
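Returning for a moment to the one-arm exponents discussed before this proof sketch: the normalizations consistent with the values quoted there (48/5 on the two-dimensional triangular lattice, 1 on the regular tree, 1/2 in high dimensions) are the standard ones, written out here as a reminder rather than as a quotation of the paper's displays:

    P_{p_c}(∃ x : 0 ↔ x, |x| ≥ r) = r^{-1/ρ + o(1)}   (extrinsic one-arm),
    P(H(r)) = P(∂B(0, r) ≠ ∅) ≍ r^{-1}   (intrinsic one-arm, high dimensions),

so ρ = 1/2 in high dimensions corresponds to extrinsic decay r^{-2}, while the intrinsic probability decays like r^{-1}, exactly as on the tree.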
However, let us briefly describe a somewhat different but very natural approach. It starts with the fact [17] that, in any finite graph G, and for any two vertices x and y, where Hit(x, y) is the expected hitting time from x to y (or in other words, the left hand side is the expected commute time between x and y). Since in our case |E(G)| ≈ r 2 we get that the commute time is ≈ r 3 . Now, in general the commute time only bounds the hitting time Hit(0, ∂B IIC (0, r)) from above, but in strongly recurrent graphs this turns out to be sharp [39]. Thus, in time r 3 the random walk has walked only in B IIC (0, r) and it can be shown that the end point is approximately uniformly distributed (the walk has mixed in B IIC (0, r) in that time). Since |B IIC (0, r)| ≈ r 2 we get that p r 3 (0, 0) = r −2 , as required. The details of this approach are described in the setting of finite graphs in [44] and can be adapted to this case as well. A natural approach towards the proof of the volume growth exponent (part (i) of Theorem 1.2) is to show that E|∂B(0, 2r)| ≥ c(E|∂B(0, r)|) 2 which would show that if E|∂B(0, r)| is too large for some r, it will start exploding, leading to a contradiction. We were not able to pull this approach directly -∂B(0, r) is hard to analyze -our substitute is to show that E|B(0, 2r)| ≥ (c/r)(E|B(0, r)|) 2 . This can be proved using relatively standard "inverse BK inequalities" and the same argument then applies. The proof of the one-arm exponent (part (ii) of Theorem 1.2) uses the precise determination of the exponent δ by Barsky and Aizenman [12], which allows us to use a regeneration argument to show, roughly, that Γ(r) ≤ r(Γ(r/4)) 2 + C/r (the second term comes from the results of [12]), from which the estimate follows by induction. The lengths of the proofs of both pieces are equivalent, which might hide the fact that the proof of the one-arm exponent was much harder for us to obtain. A final note is due about the use of Γ(r). It would have been more natural to discuss only P(H(r)) rather than Γ(r). However, we need to use a regeneration argument. Basically we claim that, once you reached a certain level r, each vertex v ∈ ∂B(0, r) has probability ≤ Γ(s) to "reach" to ∂B(0, r +s). Heuristically, one would assume that it would work even with H(s), because the part of the cluster you already "explored", B(0, r) only makes it more difficult to reach the level r + s. The problem is that H(r) is not a monotone event. In general, if you have a graph G satisfying ∂B G (0, r) = ∅ and you remove an edge, it could increase the distance to some vertex v, pushing it outside of B G (0, r), and restoring the event ∂B G (0, r) = ∅. Hence it is not possible to use the regeneration argument with H(r) -there is simply no inequality in either direction relating P(H(r)) with the conditional probability of H(r) given some partial configuration of edges. The use of Γ(r) helps us circumvent this problem. See the proofs of lemma 2.6 (page 14) and of part (ii) of Theorem 1.2 (page 20). 1.5. Organization and notation. In §2 we show how the intrinsic metric critical exponents (Theorem 1.2), together with (1.2) yield our main result, Theorem 1.1. In §3 we derive the mean-field estimates of Theorem 1.2 and 1.3. For x, y ∈ Z d we write x ↔ y for the event that x is connected to y by an open path. We write x r ↔ y if there is an open path of length ≤ r connecting x and y. 
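The identity from [17] invoked at the start of this passage is the standard commute-time identity for random walk on a finite graph G with unit edge conductances:

    Hit(x, y) + Hit(y, x) = 2 |E(G)| R_eff(x, y),

so with |E(G)| ≍ r^2 and R_eff ≍ r one indeed obtains a commute time of order r^3, as used above.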
In order to improve readability, we denote constants which depend only on d and the lattice by C (to denote a large constant) and c (to denote a small constant) and as we do not attempt to optimize these constants we frequently use the same notation to indicate different constants. For two monotone events of percolation A and B we write A • B for the event that A and B occurs in disjoint edges and we often use the van den Berg and Kesten inequality (BK for short) P(A • B) ≤ P(A)P(B) (see [14,22] or [16] for more details). Deriving the Alexander-Orbach Conjecture from Theorem 1.2 In this entire section we assume the two-point function estimate (1.2) and Theorem 1.2. We will use results of Barlow, Járai, Kumagai and Slade [9] which are stated for random graphs and hence are perfectly suited for our case. It is interesting to note that log log fluctuations really do exist, and hence any result for fixed graphs will naturally be somewhat imprecise. To state the results of [9], we need the following definitions. Given an instance of the IIC (that is, an infinite connected graph containing the origin) write B IIC (0, r) and ∂B IIC (0, r) for the ball of radius r around 0 and the boundary of the ball, respectively, in the shortest path metric on the IIC. Denote by R eff (0, ∂B IIC (0, r)) the effective resistance between 0 and ∂B IIC (0, r) when one considers B IIC (0, r) as an electric network and gives each edge a resistance of 1 -see [20] for a formal definition. Theorem 2.1 ( [9]). If there exist some constants K, q > 0 such that for any large enough r we have then the conclusions of Theorem 1.1 hold. We begin with some lemmas leading to the fact that condition (2.1) holds in our setting. We start with some volume estimates. Here and below "|x| sufficiently large" means essentially that |x| > 4Lr where L is the length of the longest edge in E(Z d ). This point will not play any role, though. Proof. We have If 0 r ↔ z and 0 ↔ x, then there must exist some y such that the events 0 r ↔ y, y r ↔ z and y ↔ x occur disjointly. So and applying the BK inequality twice, C > 0 such that for any r ≥ 1, any ǫ < 1 and any x ∈ Z d with |x| sufficiently large we have that Proof. If |B(0, r)| ≤ ǫr 2 then there must exists a (random) level j ∈ [r/2, r] in which |∂B(0, j)| ≤ 2ǫr. Fix the smallest such j. Now, if 0 ↔ x then there must be some vertex y ∈ ∂B(0, j) which is connected to x "off B(0, j − 1)" i.e. with a path that does not use any of the vertices in B(0, j − 1). Let therefore A be some subgraph of Z d which is "admissible" for being B(0, j) i.e. P(B(0, j) = A) > 0. We get where ∂A stands for the vertices in the graph A furthest from 0. A moment's reflection shows that, for any A and any y ∈ ∂A, the event {y ↔ x off A \ ∂A} is independent of the event {B(0, j) = A}. Therefore we can write where the last inequality uses the two-point function estimate (1.2) and the fact that |x − y| ≥ |x|/2. By the definition of j we have |∂A| ≤ 2ǫr and summing over all However, the events B(0, j) = A 1 and B(0, j) = A 2 are disjoint and the union of these over all A imply that ∂B(0, 1 2 r) = ∅. Part (ii) of Theorem 1.2 shows that the probability of this union is ≤ C/r, finishing our lemma. We continue with some effective resistance estimates. Recall the following definitions from [44]. • An edge e between ∂B(0, j − 1) and ∂B(0, j) is called a lane for r if it there is a path with initial edge e from ∂B(0, j − 1) to ∂B(0, r) that does not return to ∂B(0, j − 1). 
• Say that a level j (with 0 < j < r) has λ lanes for r if there are at least λ edges between ∂B(0, j − 1) and ∂B(0, j) which are lanes for r. • We say that 0 is λ-lane rich for r, if more than half of the levels j ∈ [r/4, r/2] have λ lanes for r. Recall also the Nash-Williams [46] inequality (see also [47,Corollary 9.2]). Lemma 2.4 ([46]). If {Π j } J j=1 are disjoint cut-sets separating v from U in a graph with unit conductance for each edge, then the effective resistance from v to U satisfies C > 0 such that for any r ≥ 1, for any event E measurable with respect to B(0, r) and for any x ∈ Z d with |x| sufficiently large, Proof. We first note that by Lemma 2.2 there exists some j ∈ [r, 2r] such that Now fix some M > 0 (which we shall optimize in the end) and write Now, for the first term we use (2.2) and Markov's inequality and get For the second term we do as in Lemma 2.3. We condition over B(0, j) and note that for any A we have Summing over all subgraphs A which satisfy E (here is where we use that E is measurable with respect to B(0, r)) gives that the second term is ≤ P(E) · CM |x| 2−d . Summing both terms we get Taking M = r/P(E) proves the lemma. Lemma 2.6. For any lattice Z d with d > 6 satisfying (1.2), there exists a constant C > 0 such for any r ≥ 1, any λ > 1 and any x ∈ Z d with |x| sufficiently large we have that Proof. Let j ∈ [r/4, r/2] and denote by L j the number of lanes between ∂B(0, j − 1) and ∂B(0, j). Let us condition on B(0, j) and take some edge between ∂B(0, j − 1) and Intrinsic Metric Critical Exponents In this section we prove Theorem 1.2. Our assumption on the lattice is therefore that it satisfies the triangle condition (1.1). In effect we will be using a 3.1. Intrinsic metric volume exponent. Here we prove part (i) of Theorem 1.2. We use the notation The main part of the proof is the following Lemma. Let us first see how to use Lemma 3.1. Proof of part (i) of Theorem 1.2. Assume without loss of generality that c 1 < 1 in Lemma 3.1 and take C 1 > max{(2/c 1 ), 2 d }. Assume by contradiction that there exists r 0 such that G(r 0 ) ≥ C 1 r 0 . Under this assumption we prove by induction that for any integer k ≥ 0 we have G(2 k r 0 ) ≥ C k+1 1 r 0 . The case k = 0 is our assumption, and Lemma 3.1 gives that where in the last inequality we used the induction hypothesis and the fact that C 1 > 2/c 1 . This completes our induction. Now, since the number of vertices of distance r from the origin is at most Cr d for some constant C which depends on d and on the lattice, but not on r, we get that for any integer k ≥ 0 and since C 1 > 2 d we arrive at a contradiction (for some k sufficiently large) which proves the upper bound on G(r). The next lemma is used in the proof of Lemma 3.1. Lemma 3.2. There exists some constant c > 0 such that Proof. By translation invariance of Z d it suffices to prove that The proof requires that we separate slightly the starting points of the two paths. Hence we shall prove that there exists some Assuming this, we will then take u, v to be antipodal vertices on the sphere of radius K 2) And it is clear that G(r) ≤ CK d G(r − K) and the assertion of the lemma follows, if only K can be chosen independently of r. We proceed to prove (3.2). For any u, v ∈ Z d and an integer ℓ > 0 (later we put where C(u) and C(v) denote the connected components containing u and v, respectively. 
By conditioning on C(u) we get that the right hand side equals Putting this into the second term of the right hand side of (3.4) and changing the order of summation gives that we can bound this term from above by Figure 2. The couple (x, y) is over-counted. If u ℓ ↔ x and u ↔ z then there exists z ′ such that the events u ↔ z ′ and z ′ ↔ z and z ′ ℓ ↔ x occur disjointly. Using the BK inequality we bound this sum above by We sum this over x and y and use (3.4) to get that Since |u − v| ≥ K, by the open triangle condition (3.1) we can take K large enough (independently of r) so that which immediately yields (3.2) and concludes our proof. Proof of Lemma 3.1. We start with a definition. For an integer K > 0 we say two vertices x, y ∈ Z d are K-over-counted if there exists u, v ∈ Z d with |u − x| ≥ K and |v − x| ≥ K such that We claim that where the ordering is induced by γ 1 and γ 2 respectively. Hence the map (x, y) → y from N (K) into B(0, 2r) is at most CK d · 2r to 1, which shows (3.5). We now estimate EN (K). For any (x, y) the BK inequality and (1.2) implies that the probability that (x, y) are K-over-counted is at most Writing v ′ = v − x and u ′ = u − x and using translation invariance we get that this sum equals We sum this over y and then over x and get that This together with the triangle condition (1.1) and Lemma 3.2 gives that for some small c > 0 we can choose some large K such that EN (K) ≥ cG(r) 2 . We take expectations in (3.5) and plug the estimate EN (K) ≥ cG(r) 2 in to get the assertion of the lemma. This concludes the proof of part (i) of Theorem 1.2. 3.2. Intrinsic metric arm exponent. Here we prove part (ii) of Theorem 1.2. The proof relies on the result of Barsky and Aizenman [12] stating that A lattice in R d satisfying the triangle condition satisfies, as h → 0 that This implies an estimate of P(|C(0)|) > n. Just fix h = 1/n and get We remark that Hara and Slade achieved a significantly stronger estimate [28]. Since the event {|C(0)| > n} is monotone, we get where C G (0) is the component containing 0 in percolation on G with p = p c (Z d ) (and as in the definition of Γ, not in the critical p of G itself). Proof of part (ii) of Theorem 1.2. Let A ≥ 1 be a large number such that where C 1 is from (3.7). We will now prove that Γ(r) ≤ 3Ar −1 . This will follow by showing inductively that for any integer k > 0 we have This is trivial for k = 0 since A ≥ 1. Assume the claim for all j < k and we prove for k. Let ǫ = ǫ(C 1 ) > 0 be a small constant to be chosen later and for any G ⊂ E(Z d ) write where the last inequality is due to (3.7). To estimate the first term on the right hand side we claim that P ∂B(0, 3 k ; G) = ∅ and |C G (0)| ≤ ǫ9 k ≤ ǫ3 k+1 (Γ(3 k−1 )) 2 . (3.9) To see this observe that if |C G (0)| ≤ ǫ9 k then there must be some level j ∈ [ 1 3 3 k , 2 3 3 k ] such that |∂B(0, j; G)| ≤ ǫ3 k+1 . Denote by j the first such level. If, in addition, ∂B(0, 3 k ; G) = ∅ then at least one vertex v of the ǫ3 k+1 vertices of level j "reaches level 3 k−1 ". Formally we do as in the proof of lemma 2.6, i.e. define G 2 to be G with all edges needed to calculate B(0, j; G) removed and get that ∂B(v, 3 k−1 ; G 2 ) = ∅ which, by the definition of Γ (with G 2 ) has probability ≤ Γ(3 k−1 ). Applying Markov's inequality gives P ∂B(0, 3 k ; G) = ∅ and |C G (0)| ≤ ǫ9 k B(0, j; G) ≤ ǫ3 k+1 Γ(3 k−1 ) . As in the proof of Lemma 2.6, we now sum over possible values of B(0, j; G) and get an extra term of P(H(0, 3 k−1 ; G)) because we need to reach level 3 k−1 to begin with. 
Since the last inequality holds for any G ⊂ E(Z^d), we obtain a bound on Γ(3^k) in which the last inequality is by our choice of A. This completes our inductive proof that Γ(3^k) ≤ A·3^{−k}. Now, for any r choose k such that 3^{k−1} ≤ r < 3^k; then Γ(r) ≤ Γ(3^{k−1}) ≤ A·3^{−(k−1)} ≤ 3A·r^{−1}, as claimed.

3.3. Corresponding lower bounds. In the following we provide the corresponding lower bounds to the estimates of Theorem 1.2, which concludes our proof since the event above implies H(r).
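For the reader's convenience, the Barsky-Aizenman input used in §3.2 above is, in its standard form (a sketch of the usual statement with generic constants, not the paper's exact display): under the triangle condition, the magnetization at criticality satisfies M(p_c, h) ≤ C h^{1/2} as h → 0, and choosing h = 1/n yields the cluster-tail bound

    P_{p_c}(|C(0)| > n) ≤ C_1 n^{-1/2},

which appears to be the bound the text refers to as (3.7) (or its analog for subgraphs G ⊂ E(Z^d), obtained from monotonicity of the event {|C(0)| > n}).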
Life Extinctions By Cosmic Ray Bursts

High energy cosmic ray jets from nearby mergers or accretion induced collapse (AIC) of neutron stars (NS) that hit the atmosphere can produce lethal fluxes of atmospheric muons at ground level, underground and underwater, destroy the ozone layer and radioactivate the environment. They could have caused most of the massive life extinctions on planet Earth in the past 600 My. Biological mutations due to ionizing radiation could have caused the fast appearance of new species after the massive extinctions. An early warning of future extinctions due to NS mergers may be obtained by identifying, mapping and timing all the nearby binary neutron star systems. A warning of an approaching cosmic ray burst from a nearby NS merger/AIC may be provided by the very intense gamma ray burst which precedes it.

Introduction

The early history of single-celled organisms during the Precambrian, 4560 to 570 million years (My) ago, is poorly known. Since the end of the Precambrian the diversity of both marine and continental life has increased exponentially. Analysis of the fossil record of microbes, algae, fungi, protists, plants and animals shows that this diversification was interrupted by five major mass extinction "events" and some smaller extinction peaks [1]. The "big five" mass extinctions occurred in the Late Ordovician, Late Devonian, Late Permian, Late Triassic and end-Cretaceous and included both marine and continental life. The largest extinction occurred about 251 My ago at the end of the Permian period. The global species extinction then ranged between 80% and 95%, much more than, for instance, the end-Ordovician extinction 439 My ago, which eliminated 57% of marine genera, or the Cretaceous-Tertiary extinction 64 My ago, which killed the dinosaurs and claimed 47% of existing genera [2]. In spite of intensive studies it is still not known what caused the mass extinctions, how fast they were, and whether they were subject to regional variations. Many extinction mechanisms have been proposed, but no single mechanism seems to provide a satisfactory explanation of both the marine and continental extinction levels, the biological extinction patterns and the repetition rate of the mass extinctions [1, 2]. These include astrophysical extinction mechanisms, such as meteoritic impact, which explains the iridium anomaly found at the Cretaceous/Tertiary boundary [3] but not found in all the other extinctions [4], and supernova explosions [5] and gamma ray bursts [6], which do not occur close enough at a sufficiently high rate to explain the observed rate of mass extinctions. In this paper we propose that high energy cosmic ray jets (CRJs) from mergers or accretion induced collapse (AIC) of neutron stars (NS) that hit the atmosphere of an Earth-like planet can produce lethal fluxes of atmospheric muons at ground level, underground and underwater, destroy the ozone layer and radioactivate the environment. Nearby NS mergers/AIC can explain the massive extinction on the ground, underground and underwater and the higher survival levels of radiation-resistant species and terrain-sheltered species in the five "great" extinctions in the past 600 My. More distant galactic mergers/AIC can cause smaller extinctions. Biological mutations due to ionizing radiation may explain the fast appearance of new species after massive extinctions.
Intense cosmic ray bursts enrich rock layers with detectable traces of cosmogenically produced radioactive nuclides such as 129I, 146Sm, 205Pb and 244Pu. Tracks of high energy particles in rock layers on Earth and on the Moon may also contain records of intense cosmic irradiation. An early warning of future extinctions due to neutron star mergers can be obtained by identifying, mapping and timing all the nearby binary neutron star systems. A final warning of an approaching CRJ from a nearby neutron star merger is provided by a very intense gamma ray burst a few days before the arrival of the CRJ.

Cosmic Ray Jets From NS Mergers

Three NS-NS binaries are presently known within the galactic disc: B1913+16 [7] at a distance of D ∼ 7.3 kpc, B2303+46 [8] at D ∼ 2.3 kpc, and B1534+12 [9] at D ∼ 0.5 kpc, in addition to B2127+11C [10] in the globular cluster M15 at D ∼ 10.6 kpc. Although unseen as radio pulsars, the companion stars in these systems have been identified as neutron stars from their masses, which are inferred via measurements of the relativistic periastron advance using standard pulse timing techniques [11]. The continuous energy loss through gravitational radiation brings the two NS closer and closer until they merge. It is believed that their final merger releases an enormous amount of gravitational binding energy, ∼ M_⊙c², in a few ms, in the form of gravitational waves, neutrinos and kinetic energy of relativistic ejecta. It was suggested that the neutrinos and/or the relativistic ejecta from NS mergers/AIC in distant galaxies produce the mysterious gamma ray bursts (GRBs), which occur at a rate of about one per day, although the mechanism which converts their energy into gamma rays is still not clear [12]. It is also believed that, due to strong gravitational tidal forces, the final NS-NS or NS-BH merger proceeds through the formation of an accretion disc. Observations seem to indicate that highly collimated jets are ejected by all systems where matter is undergoing disc accretion onto a compact central object [13, 14]. They also indicate that the jet kinetic energy is a considerable fraction of the accretion power and that the jets reach enormous distances without being deflected by galactic or intergalactic magnetic fields: the highly relativistic and highly collimated jets from active galactic nuclei (AGN), which are believed to be powered by mass accretion onto a massive black hole at a typical rate of ∼ M_⊙ y⁻¹, reach a distance of up to a million light years before disruption [13]. Highly collimated relativistic matter that is ejected sporadically by microquasars (superluminal galactic sources such as GRS 1915+105 [14, 15] and GRO J1655-40 [16]) and by X-ray sources such as SS 433 [17] and Cygnus X-3 [18], which are close binary systems where mass is accreted onto a neutron star or a stellar black hole from a companion star, seems to reach hundreds of light years before disruption. The exact mechanism by which the gravitational and electromagnetic fields around these accreting and rotating compact objects produce the highly relativistic jets is still unknown. However, the instantaneous accretion rates (≫ M_⊙ s⁻¹) and the strength of the magnetic fields which are involved in the final stage of merger/AIC of compact stellar objects in compact binary systems are probably many orders of magnitude larger than those encountered in AGN.
Therefore, it is natural to expect that highly relativistic jets are also ejected in mergers/AIC of compact stellar objects, are highly collimated, and reach hundreds of light years, or more, before disruption. These jets may produce the cosmological gamma ray bursts by internal radiation and/or interaction with the external medium. After disruption they are isotropized by the galactic magnetic field and form the galactic cosmic rays. If they hit an Earth-like planet before disruption, they can devastate all forms of life on it.

Constraints From GRBs and Cosmic Rays

Jets ejected by NS mergers/AIC in distant galaxies may explain cosmological GRBs [19]. In view of the uncertainties in modeling jet ejection in NS mergers/AIC, rather than relying on numerical simulations we inferred [19] from the observed properties of GRBs that the ejected jets have typical Lorentz factors of Γ ∼ 10³, beaming angles ΔΩ ≤ 1/100 similar to those observed/estimated for AGN and microquasars, and ejected mass ΔM ∼ (dM/dΩ)ΔΩ ≤ 10⁻⁴ M_⊙ (i.e., released kinetic energy E_K = ΓΔMc² bounded by roughly 10⁵³ erg). The finite lifetimes of close binaries due to gravitational radiation emission have been estimated [12] from the observed binary period, orbital eccentricity and the masses of the pulsar and its companion. They have been used to estimate that the NS-NS merger rate in the Milky Way (MW) disc is [20] R_MW ∼ 10⁻⁴ − 10⁻⁵ y⁻¹. Beamed ejection from NS mergers/AIC requires a merger/AIC rate of compact objects in the entire Universe (at GRB redshifts) of the order of 10⁵ − 10⁶ y⁻¹. It is larger by 1/ΔΩ ∼ 10² than that required by spherical explosions. It is, however, in good agreement with the updated observational and theoretical estimates of the merger rates of compact objects in the Milky Way, which yield values in the range 10⁵ − 10⁶ y⁻¹ mergers per Universe [21], instead of the initial estimates [20] of ∼ 10³ − 10⁴ y⁻¹. Thus, the updated estimates of the NS-NS merger rate in the Universe are consistent with the observed rate [12] of GRBs (approximately one per day) and with the injection rate of cosmic rays in the MW that is required in order to maintain a constant cosmic ray flux in the MW: the escape rate of cosmic rays from the MW requires an average injection rate of Q_CR ∼ 10⁴¹ erg s⁻¹ in high energy cosmic rays above a GeV, in order to maintain a constant energy density of cosmic rays in the MW [22]. This injection rate can be supplied by NS-NS mergers if the injected jet energy per merger is E_K ∼ 10⁵³ erg and if the NS-NS merger rate in the disc of the MW is R_MW ∼ 10⁻⁴ − 10⁻⁵ y⁻¹, since, e.g., 10⁵³ erg × 10⁻⁴ y⁻¹ ≈ 3 × 10⁴¹ erg s⁻¹.

Attenuation of CRJs

The highly relativistic jets from quasars and microquasars do not seem to be attenuated efficiently in interstellar or intergalactic space. This is unlike the non-relativistic ejecta of supernova (SN) explosions, which are attenuated by Coulomb collisions in the interstellar medium (ISM) over a distance of a few pc. Moderately energetic charged particles, other than electrons, lose energy in neutral interstellar gas primarily by ionization. The mean rate of energy loss (or stopping power) is given by the Bethe-Bloch formula, in which Ze is the charge of the energetic particle of mass M_i, velocity βc and total energy E = ΓM_i c², n_e is the number of electrons per unit volume in the medium in atoms with ionization potential I, and T_max = 2m_e c²β²Γ²/(1 + 2Γm_e/M_i).
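In the symbols just defined, the Bethe-Bloch stopping power takes its standard textbook form (quoted here as the usual expression, with the density and shell corrections omitted, rather than as the paper's own display):

    -dE/dx = (4π n_e Z² e⁴ / m_e c² β²) [ (1/2) ln( 2 m_e c² β² Γ² T_max / I² ) − β² ].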
If the interstellar gas around the SN is ionized (by the initial UV flash), then I has to be replaced by e²/R_D, where R_D = (kT/4πe²n_e(Z + 1))^{1/2} is the Debye screening length. Thus, for SN ejecta with Z ∼ 1 and v ∼ 10000 km s⁻¹ in an ionized interstellar medium with a typical density of n_H ∼ 1 cm⁻³ and kT ∼ 1 eV, the stopping distance of the ejecta due to Coulomb interactions is x ∼ E/[2(dE/dx)] ∼ 6 pc. In fact, the stopping of the ejecta by Coulomb interactions is consistent with observations of SN remnants, like SN 1006 [23], while the assumption that the ionized interstellar medium is glued to the swept-up magnetic field seems to be contradicted by some recent observations [24]. The range increases with energy like β⁴ until nuclear collisions become the dominant loss mechanism. The range of nuclei with Γ ∼ 1000 in a typical interstellar density of n_H ∼ 1 cm⁻³ is approximately ∼ 10²⁵ cm, i.e., much larger than galactic distances. Although the galactic magnetic field, H ∼ 3 − 5 × 10⁻⁶ Gauss, results in a Larmor radius of r_L = βΓmc²/qH ∼ 10¹⁵ cm for protons with Γ = 1000, it does not significantly deflect and disperse jets from NS-NS mergers at distances smaller than ∼ 1 kpc from the explosion. That can be concluded from the fact that accretion jets from forming stars, microquasars and AGN reach distances of tens, hundreds, and millions of light years, respectively, without significant deflection or attenuation. Probably, because of their high particle and energy densities, the jets produce internal magnetic fields which shield them from the interstellar magnetic field and allow them to follow almost free ballistic trajectories in the interstellar medium.

Mass Extinctions By CRJs

We assume that the ambient interstellar gas is not swept up with the jet. If it were, then the jet would not reach even ∼ 10 pc. In SN explosions collective modes are invoked as the source of the coupling of the ejecta to the interstellar medium, required in order to attenuate the SN debris. As mentioned above, binary Coulomb interactions are sufficient to produce the observed coupling in SN explosions, and coupling through collective modes is not necessarily present. Due to internal magnetic fields the jets are highly collimated, not deflected, and probably reach distances of D ∼ 1 kpc, where the internal energy density becomes comparable to the external (magnetic and radiation) energy density. Unattenuated jets from NS-NS mergers can be devastating to life on nearby planets. At a distance of 1 kpc their duration is brief (a burst rather than a steady flux) for typical values of Γ between 1000 and 100. The time-integrated energy flux of the jet at D ∼ 1 kpc is, typically, ∼ 10¹² TeV cm⁻². Thus, the energy deposition in the atmosphere by the jet is equivalent to the total energy deposition of galactic cosmic rays in the atmosphere over ∼ 10⁷ y. However, the typical energy of the cosmic rays in the CRJ is ∼ 1 TeV per nucleon, compared with ∼ 1 GeV per nucleon for ordinary cosmic ray nuclei. Collisions of such particles in the atmosphere generate atmospheric cascades in which a significant fraction of the CRJ energy is converted into "atmospheric muons" through the leptonic decay modes of the produced mesons. Most of these muons do not decay in the atmosphere because of their high energy, unlike most of the atmospheric muons which are produced by ordinary cosmic rays.
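As a quick numerical sanity check of the Larmor-radius figure quoted above, the following back-of-the-envelope Python sketch uses standard CGS constants (the constants are not taken from the paper):

# Larmor radius of a Gamma ~ 1000 proton in a ~3 microgauss galactic field (CGS units).
m_p = 1.6726e-24      # proton mass [g]
c   = 2.9979e10       # speed of light [cm/s]
e   = 4.8032e-10      # elementary charge [esu]
gamma = 1.0e3         # Lorentz factor of the jet particles
beta  = 1.0           # ultra-relativistic, so beta ~ 1
B     = 3.0e-6        # galactic magnetic field [Gauss]

# r_L = p c / (e B) with momentum p = beta * gamma * m_p * c
r_L = beta * gamma * m_p * c**2 / (e * B)
print(f"Larmor radius ~ {r_L:.1e} cm")   # ~ 1e15 cm, as quoted in the text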
The average number of high energy muons produced by nucleons of primary energy E_p which do not decay in the atmosphere and reach sea level with energy > E_μ at zenith angle θ < π/2 is given approximately by the standard formula of [25]. Thus a jet with an energy of about 1 TeV per nucleon at a distance of 1 kpc produces a very large flux of atmospheric muons at sea level. Such muons deposit energy in matter via ionization. Their energy deposition rate is [26] −dE/dx ≥ 2 MeV cm² g⁻¹. The whole-body lethal dose from penetrating ionizing radiation resulting in 50% mortality of human beings in 30 days [26] is ≤ 300 rad ≈ 2 × 10¹⁰ MeV g⁻¹, corresponding to a muon fluence of ∼ 2 × 10¹⁰/(dE/dx) ∼ 10¹⁰ cm⁻², where dE/dx is in MeV cm² g⁻¹ units. The lethal dosages for other vertebrates can be a few times larger, while for insects they can be as much as a factor of 20 larger. Hence, a CRJ at D ∼ 1 kpc which is not significantly dispersed by the galactic magnetic field produces a highly lethal burst of atmospheric muons. Because of muon penetration, the large muon flux is lethal for most species even deep (hundreds of meters) underwater and underground, if the cosmic rays arrive from well above the horizon. Thus, unlike the other suggested extraterrestrial extinction mechanisms, a CRJ which produces a lethal burst of atmospheric muons can also explain the massive extinction deep underwater and why extinction is higher in shallow waters. Although half of the planet is in the shade of the CRJ, planet rotation exposes a larger fraction of the planet's surface to the CRJ. Additional effects increase the lethality of the CRJ over the whole planet. They include: (a) The pollution of the environment by radioactive nuclei produced by spallation of atmospheric and surface nuclei by shower particles. Using the analytical methods of [27], we estimate that for an Earth-like atmosphere the flux of energetic nucleons which reaches the surface is also considerable. Global winds spread radioactive gases in a relatively short time over the whole planet. (b) Depletion of stratospheric ozone by the reaction of ozone with nitric oxide, generated by the cosmic-ray-produced electrons in the atmosphere (massive destruction of stratospheric ozone has been observed during large solar flares which produced energetic protons [28]). (c) Extensive damage to the food chain by radioactive pollution and massive extinction of vegetation and living organisms by ionizing radiation (the lethal radiation dosages for trees and plants are slightly higher than those for animals but still less than the flux given by eq. 5 for all except the most resilient species).

Signatures of CRJ Extinction

The biological extinction pattern: The biological extinction pattern due to a CRJ depends on the exposure and the vulnerability of the different species to the primary and secondary effects of the CRJ. The exposure of living organisms to the muon burst depends on the intensity and duration of the CRJ, on its direction relative to the rotation axis of Earth (Earth shadowing), on the local sheltering provided by terrain (canyons, mountain shades) and by underwater and underground habitats, and on the risk sensing/assessment and mobility of the various species. The lethality of the CRJ depends as well on the vulnerability of the various living species and vegetation to the primary ionizing radiation, to the drastic changes in the environment (e.g., radioactive pollution and destruction of the ozone layer) and to the massive damage and radioactive poisoning of the food chain.
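A short numerical check of the lethal-fluence estimate above (an illustrative sketch; the 300 rad dose and the ~2 MeV cm² g⁻¹ muon stopping power are the figures quoted in the text):

# Convert a 300 rad whole-body dose into an equivalent vertical muon fluence.
RAD_TO_ERG_PER_G = 100.0          # 1 rad = 100 erg/g
ERG_TO_MEV = 1.0 / 1.602e-6       # 1 MeV = 1.602e-6 erg

dose_rad = 300.0                              # 50% lethal dose quoted in the text
dose_mev_per_g = dose_rad * RAD_TO_ERG_PER_G * ERG_TO_MEV
stopping_power = 2.0                          # muon ionization loss [MeV cm^2 / g]

fluence = dose_mev_per_g / stopping_power     # muons per cm^2
print(f"dose ~ {dose_mev_per_g:.1e} MeV/g")          # ~ 2e10 MeV/g
print(f"lethal fluence ~ {fluence:.1e} muons/cm^2")  # ~ 1e10 cm^-2, as in the text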
Signatures of CRJ Extinction
The biological extinction pattern: The biological extinction pattern due to a CRJ depends on the exposure and the vulnerability of the different species to the primary and secondary effects of the CRJ. The exposure of the living organisms to the muon burst depends on the intensity and duration of the CRJ, on its direction relative to the rotation axis of Earth (Earth shadowing), on the local sheltering provided by terrain (canyons, mountain shades) and by underwater and underground habitats, and on the risk sensing/assessment and mobility of the various species. The lethality of the CRJ depends as well on the vulnerability of the various living species and vegetation to the primary ionizing radiation, to the drastic changes in the environment (e.g., radioactive pollution and destruction of the ozone layer) and to the massive damage and radioactive poisoning of the food chain. Although the exact biological signature may be quite complicated, and somewhat obscured in fossil records (due to poor or limited sampling, deterioration of the rocks with time, and dating and interpretation uncertainties because of bioturbational smearing), it may show the general pattern expected from a CRJ extinction. Indeed, a first examination of the fossil records suggests that there is a clear correlation between the extinction pattern of different species, their vulnerability to ionizing radiation and the sheltering provided by their habitats and the environment they live in. For instance, insects, which are less vulnerable to radiation, went extinct only in the greatest extinction, the end-Permian extinction 251 M y ago. Even then only 8 out of 27 orders went extinct, compared with a global species extinction that ranged between 80% and 95% [4]. Also, plants, which are less vulnerable to ionizing radiation, suffered a lower level of extinction. Terrain, underground and underwater sheltering may explain why certain families on land and in deep waters did not go extinct even in the great extinctions, while most of the species in shallow waters and on the surface did [4]. Mountain shadowing, canyons, caves, underground habitats, deep underwater habitats and high mobility may also explain why many species like crocodiles, turtles, frogs (and most freshwater vertebrates), snakes, deep sea organisms and birds were little affected in the Cretaceous/Tertiary (K/T) boundary extinction which claimed the life of the big dinosaurs and pterosaurs. In particular, fresh underground waters in rivers and lakes are less polluted with radioisotopes and poisons produced by the CRJ than sea waters and may explain the survival of freshwater amphibians. Geological Signatures: The terrestrial deposition of the primary CRJ nuclides or the production of stable nuclides in the atmosphere or on the surface by the CRJ is too small to be detectable. In particular, the proposed mechanism cannot explain the surface enrichment at the K/T boundary by about 3 × 10 5 tons of iridium. Alvarez et al. [3] suggested that the impact of an extraterrestrial asteroid, with an Ir abundance similar to that observed in early solar system chondritic meteorites, whose Ir abundance is larger than the crustal Ir abundance by ∼ 10 4 , could cause the Ir anomaly and explain the mass extinction at the K/T boundary. However, no significant Ir enrichment was found in any of the other mass extinctions. Moreover, other isotopic anomalies due to meteoritic origin have not been found around the K/T boundary; in particular the As/Ir and Sb/Ir ratios are three orders of magnitude greater than chondritic values but are in accord with a mantle origin [29]. Extensive iridium measurements showed that the anomaly does not appear as a single spike in the record, indicative of an instantaneous event, but rather occurs over a measurable time interval of 10 to 100 ky or possibly longer [30]. That and the high abundance of Ir in eruptive magma led to the suggestion that the iridium anomaly is due to global volcanic activity over 10 to 100 kyr at the K/T boundary [30,31] which also caused the K/T extinction: Eruptions have a variety of short term effects, including cooling from both dust and sulfates ejected into the stratosphere, acid rain, wildfires, release of poisonous elements and increase in ultraviolet radiation from ozone-layer depletion.
But, examination of major volcanic eruptions in the past 100 M y has shown that none of them greatly affected the diversity of regional and global life on land or in the oceans [4]. A CRJ will enhance the abundance of stable cosmogenic isotopes in the geological layer corresponding to the CRJ event, but the enrichment may be negligible compared to their accumulation through long terrestrial exposure of the geological layers to galactic cosmic rays prior to the CRJ. However, CRJ enrichment of sediments with unstable radioisotopes of mean lifetimes much shorter than the age of the solar system, τ ≪ t ⊙ ≈ 4570 M y, but comparable to the extinction times, may be detectable through trace-level mass spectrometry. In particular, fission of long lived terrestrial nuclei, such as 238 U and 232 Th, by shower particles, and capture of shower particles by such nuclei, may lead to terrestrial production of, e.g., 129 I with τ = 15 M y, 146 Sm with τ = 146 M y, 205 Pb with τ = 43 M y and 244 Pu with τ = 118 M y. These radioisotopes may have been buried in underwater sediments and underground rocks which were protected from further exposure to cosmic rays. The main background to such a CRJ signature is the continuous deposition by cosmic rays and by meteoritic impacts on land and sea. Cosmic rays may include these trace radioisotopes due to nearby sources (e.g., supernova explosions) and because of spallation of stable cosmic ray nuclei in collisions with interstellar gas. Meteorites may include these trace elements due to a long exposure in space to cosmic rays. Finally, large enhancement of TeV cosmic ray tracks in magma from volcanic eruptions coincident with extinctions may also provide fingerprints for CRJ extinctions.
Rate of Mass Extinctions
Assuming that the spatial distribution of NS binaries and NS mergers in the MW follows the distribution of single pulsars [], with a disc scale length, R 0 ∼ 4.8 kpc, and a scale height, h > 0.5 kpc, perpendicular to the disc and independent of disc position, we find that the average rate of CRJs from NS-NS mergers that reach planet Earth from distances ≤ 1 kpc is ∼ 10 −8 y −1 . It is consistent with the 5 big extinctions which have occurred during the last 600 M y in the Paleozoic and Mesozoic eras. The relative strengths of these extinctions may reflect mainly different distances from the CRJs. Beyond ∼ 1 kpc from the explosion the galactic magnetic field begins to disperse the CRJ and suppresses its lethality. Such CRJs, if not too far, can still cause partial extinctions at a higher rate and induce biological mutations which may lead to the appearance of new species. The galactic rate of SN explosions is ∼ 10 −2 y −1 , i.e., about one per century. The range of debris from SN explosions in the interstellar medium is shorter than 10 pc. The rate of SN explosions within a distance of 10 pc from Earth which follows from eq. 7 is R SN (< 10 pc) ≈ 10 −10 y −1 . High energy cosmic rays which, perhaps, are produced in the SN remnant by shock acceleration, carry only a small fraction of the total explosion energy and arrive spread in time due to their diffusive propagation in the interstellar magnetic field. Also, neutrino and light emissions similar to those observed in SN 1987A cannot cause a mass extinction even at a distance of a few pc.
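A rough consistency check of the quoted nearby-SN rate (the disc dimensions below are assumptions made here for illustration, since eq. 7 is not reproduced in this excerpt): with a galactic SN rate of ∼ 10 −2 y −1 , an effective star-forming disc of radius ∼ 15 kpc and full thickness ∼ 0.6 kpc has a volume of ∼ 4 × 10 11 pc 3 , while the volume within 10 pc of Earth is (4π/3)(10 pc) 3 ≈ 4 × 10 3 pc 3 ; the geometric fraction is therefore ∼ 10 −8 , giving R SN (< 10 pc) ∼ 10 −10 y −1 , as quoted. The analogous fraction within 1 kpc is ∼ 10 −2 , so the quoted CRJ rate of ∼ 10 −8 y −1 would correspond to a galactic NS-NS merger rate of order 10 −6 y −1 (our inference, not stated in the text).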
Conclusions
Cosmic Ray Bursts from neutron star mergers may have caused the massive continental and marine life extinctions which interrupted the diversification of life on our planet. Their rate is consistent with the observed rate of mass extinctions in the past 570 M y. They may be able to explain the complicated biological and geographical extinction patterns. Biological mutations induced by the ionizing radiations which are produced by the CRJs may explain the appearance of completely new species after extinctions. A first examination suggests a significant correlation between the biological extinction pattern of different species and their exposure and vulnerability to the ionizing radiation produced by a CRJ. The iridium enrichment around the Cretaceous/Tertiary extinction that claimed the life of the dinosaurs and pterosaurs cannot be due to a CRJ. It may have been caused by intense volcanic eruptions around that extinction. Isotopic anomaly signatures of CRJ extinctions may be present in the geological layers which recorded the extinctions. Elaborate investigations of the effects of CRJs from relatively nearby neutron star mergers and their biological, radiological and geological fingerprints are needed before reaching a firm conclusion as to whether the massive extinctions during the long history of planet Earth were caused by CRJs from neutron star mergers. If nearby neutron star mergers are responsible for mass extinctions, then an early warning of future extinctions due to neutron star mergers can be obtained by identifying, mapping and timing all the nearby binary neutron star systems. A final warning for an approaching CRJ from a nearby neutron star merger will be provided a few days before its arrival by a gamma ray burst produced by the approaching CRJ.
Error Analysis on Noun Phrase in Students’ Undergraduate Theses The aim of this research is (1) to analyze the components of noun phrase errors that are often made by students in the introduction parts of their undergraduate theses, (2) to analyze the types of noun phrase errors that are often made by students in the introduction parts of their undergraduate theses, (3) to find out the differences between noun phrase errors made students in the introduction parts of their undergraduate theses. The method used in this research was mix method. The instrument comprised observation and documentation. The result of this research was as follows: (1) the most dominant component of noun phrase error made by UPI and Unib students in the introduction parts was a head error, (2) the most dominant type of error in noun phrases made by UPI was addition error; however, the most dominant type of noun phrase errors among Unib students was omission error, (3) the number of noun phrase errors in Unib was higher than those in UPI. It is recommended that students improve their mastery of noun phrases, that lecturers teach students how to write the introduction well, specifically on noun phrases, and that next researchers investigate additional aspects of noun phrases. INTRODUCTION One of the most important elements of writing a thesis is the introduction section because this chapter talks about the foundation of the problem and the reason why the researcher wants to do the research. According to Hidayat (2015), the first chapter in the structure of a thesis is the introduction, which is divided into three sections: introduction and identification of study difficulties, discussion of previously done and relevant research, and discussion of data obtained during pre-research. According to Pardede (2012), the introduction section provides background information about the study problem or what the researcher wants to communicate about their research. Thus, the introduction section is one of the essential parts of the thesis but many undergraduate students in English Education, who generally master English, sometimes make an error in their thesis writing, including in the introduction. Even though grammar learning, especially on noun phrases, has been taught to English Education students since the early stages of their education, noun phrase errors could still be discovered in many of their language activities, especially in writing. Generally, the noun phrase errors that students develop are due to a lack of mastery of basic writing mechanics and noun phrase rules. The errors could appear minor and non-significant, and yet they could have a major impact on the quality of their writing. This is also supported by Hidayat (2015), who stated that an error occurred in the introduction section of the thesis because students did not understand basic writing structure and grammar structure. The students can make errors in their writing, such as thesis, including chapter 1 until chapter 5. In chapter 1 there is the introduction. When the students write the introduction, they ought to make it by using good structures suitable to grammatical rules in a foreign language, especially on the noun phrase. Otherwise, one of the grammatical errors that the students usually do in their thesis writing is noun phrase error. This is probably because students are influenced by their first language or mother tongue, carelessness, or translation factors (Norrish, in Rinata, 2018). 
The way to know about students' errors is through error analysis. According to Richard, in Situmorang (2019), error analysis is the analysis of errors made by second and foreign-language speakers. Thus, the reason that the researcher analyzed the noun phrase errors in students' undergraduate theses, especially in the introduction sections, is because this research and other articles have differences, namely, most previous studies focused on the error analysis of noun phrases in descriptive texts. However, this research focuses on noun phrase forms of students' undergraduate theses, especially in the introduction sections. Besides that, the previous studies used descriptive methods to analyze texts, but this research used the comparative method to investigate the noun phrases written by the students of the Indonesia University of Education, which has an "A" accreditation, especially for the English study program, and the students of Bengkulu University, which have a "B" accreditation, particularly for the English study program. The similarity of this research to the previous studies, it mostly uses the same theories that are from Dulay (1982) and Greenbaum & Nelson (2015). On the other hand, the reason that the researcher analyzed students' undergraduate theses only in the introduction is that, in the chapter, the students write the introduction by combining their ideas with preliminary data and supporting theories. Errors are more likely to occur in this section because they must build it by writing their thoughts and explanations (Hidayat, 2015). In the introduction, the students tell about all of the basics of research, such as the foundation of the problem or the reason why they do the research, in their own words in English. Because of that, sometimes they make errors, especially noun phrase errors, in a thesis introduction. Three research objectives of this research, namely to analyze the components of noun phrase errors that are often made by the Indonesia University of Education and Bengkulu University students in the introduction parts of their undergraduate theses, to analyze types of noun phrase errors that are often made by the Indonesia University of Education and Bengkulu University students in the introduction parts of their undergraduate theses, and to find out the differences between noun phrase errors made by Indonesia University of Education and Bengkulu University students in the introduction parts of their undergraduate theses. THEORETICAL FRAMEWORK Error analysis Divsar and Heydari (2017:143) state that error analysis (EA) is a method for gathering errors identified in students' language, determining whether the problems are systematic or not, and clarifying the causes for errors that are found in students. Jabeen, Kazemian, and Mustafai (2015:53) contend that error analysis provides a comprehensive insight into the process of language learning that is performed by students. Thus, error analysis is the way to identify the errors in students' language; it is also to know the causes of students' errors in learning the language, and it can give a comprehensive insight into the process of language learning that is performed by students. Noun phrase According to Swierzbin (2014), English learners must understand nouns, but it is much more essential to remember noun phrases for establishing a more particular meaning than the noun itself. 
Moreover, the noun phrase could be used to express accurate information in a timely and precise way, because then the writing does not appear wordy in every sentence formed by the noun phrase. Abdurrahman (2018) suggests that students at the university level ought to be able to master a foreign language, particularly English noun phrase forms. The component of a noun phrase There are three components of noun phrases. They are: a. Head A noun is the most general head of a noun phrase. However, a noun phrase can be without any element and just consists of a noun; it is also called a bare noun phrase, such as "pencils", which is possible for plural nouns and mass. b. Premodifier Premodifier is the part of a noun phrase that occurs before the noun phrase or head. Besides that, the premodifier is an adjective phrase, such as a blue motorcycle. According to Jackson, in Junaid (2018), a premodifier contains several word classes, such as a noun modifier and a numeral identifier/quantifier adjective. c. Post-modifier Post-modifier consists of a clause, adverb phrase, prepositional phrase, and adjective phrase. Post-modifier has the function in English noun phrases as adjunct or complement. Thesis Introduction According to Hidayat (2015), the first chapter in the structure of a thesis is the introduction. It is divided into three sections: introduction and identification of study difficulties, discussion of previously done and relevant research, and discussion of data obtained during the research. The introduction is one of the most important things in the thesis because it gives information to the readers about the thesis clearly and comprehensively. In the introduction, the students write the introduction by combining their ideas with preliminary data and supporting theories. Errors are more likely to occur in this section because they must build it by writing their thoughts and explanations (Hidayat, 2015). The students also tell about all of the basics of research in the introduction, such as the foundation of the problem or the reason why they do the research, in their own words in English. Noun phrase in a thesis introduction According to Kusuma, Sujoko, and Sulistyowati (2014), a noun phrase is the structure of the head and its modifiers. A noun phrase also has the function to describe a person, a thing, or a place specifically. The introduction is one of the most difficult portions of writing a paper or thesis; here, writers need to focus on how to begin and what they precisely need to say. The introduction should be brief and compelling, and it should explain why the writer decided to conduct the research. Noun phrase in the introduction is important because, if the students or writers want to explain specifically about something, they can use noun phrase to make the readers understand clearly what they want to deliver in the introduction section, without misleading or misunderstanding the meaning of sentences. RESEARCH METHODOLOGY The research method that was used in this research was a mixed method of quantitative and qualitative methods. In this research, the type of mixed method that the researcher used was an explanatory sequential design. According to Creswell & Clark (2017), an explanatory sequential design is composed of initially collecting quantitative data and afterward gathering qualitative data to further interpret or elaborate on the quantitative results. 
The reason was that the researcher focused on a quantitative method to answer the research questions about the components and types of noun phrase errors that were most often made by students in their undergraduate theses; the quantitative method was used because the data were also reported as percentages of occurrence to answer the first and second research questions. On the other hand, the researcher used a comparative study because the researcher wanted to know the differences and similarities in noun phrase errors between students' undergraduate theses at Bengkulu University and the Indonesia University of Education. The subjects of the research were the introduction parts (the backgrounds of the studies) of the theses of undergraduate English Education students at the Indonesia University of Education and Bengkulu University in 2019. Regarding the number of words analyzed, only 1000 words of each introduction section (background of the study) were analyzed for noun phrase errors. This was because the introduction section (background of the study) at the Indonesia University of Education is mostly simple and brief, namely around 1000 words or even fewer, whereas at Bengkulu University the introduction section is mostly around 1000 words or more. To collect the data, the researcher required several instruments; in this research, the researcher used observation and documentation. There were several steps in the procedure of the research, based on Creswell (2014): identifying a research problem, reviewing the literature, specifying a purpose for the research, collecting the data, evaluating and analyzing the data, and making the report. In the quantitative method, the researcher used Cohen Kappa (statistical analysis) to validate the data with the co-rater. However, in the qualitative method, the researcher used triangulation. The researcher analyzed the data by using Dulay's (1982) and Nelson & Greenbaum's (2015) theories to find out the components and types of noun phrase errors.
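As an illustration of the inter-rater agreement statistic mentioned above, the following is a minimal sketch of computing Cohen Kappa between the researcher and the co-rater; the labels are hypothetical and only demonstrate the calculation, they are not the study's data.

```python
# Hypothetical example of the Cohen Kappa agreement check described above.
from sklearn.metrics import cohen_kappa_score

# Each element is one candidate noun phrase, judged independently by the
# researcher and by the co-rater (labels invented for demonstration).
researcher = ["error", "error", "correct", "correct", "error", "correct", "correct", "error"]
co_rater   = ["error", "correct", "correct", "correct", "error", "correct", "correct", "error"]

# kappa = (observed agreement - chance agreement) / (1 - chance agreement)
kappa = cohen_kappa_score(researcher, co_rater)
print(f"Cohen Kappa = {kappa:.2f}")
```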
RESULTS & DISCUSSION
There were 51 noun phrase errors in 10 thesis introductions of students at the Indonesia University of Education and 65 noun phrase errors in 10 thesis introductions of students at Bengkulu University. Thus, the total was 116 noun phrase errors. The researcher used Cohen Kappa to minimize subjectivity, where the researcher and a co-rater determined their agreement on the findings of component and type errors of noun phrases in the introduction parts (backgrounds of the studies) of students' theses. The total number of noun phrases analyzed was 1,299 for UPI and 1,067 for Unib. Table 1 shows that Indonesia University of Education students (UPI) made errors in the head component of noun phrases, which amounted to 28 items (54.9% of the errors) in students' introductions, for which the frequency category was "often". (For Bengkulu University students, 1,002 of the 1,067 noun phrases were correct, i.e., 94%, rated "Excellent".) Based on the table above, the quality of the components of noun phrases that Bengkulu University students produced in their introduction sections was excellent (94%). Table 4 shows that Indonesia University of Education students (UPI) made the omission type of error in noun phrases, which amounted to 19 items (37.3% of the errors), out of 1,299 noun phrases in students' introductions, for which the frequency category was "sometimes". However, Bengkulu University students made the omission type of error in noun phrases, which amounted to 32 items (49.2% of the errors), out of 1,067 noun phrases in their introductions, for which the frequency category was "often". The totals of the types of noun phrase errors at the Indonesia University of Education and Bengkulu University differed in frequency and percentage: 3.9% for Indonesia University of Education students and 6.1% for Bengkulu University students. (For the types of noun phrases, Bengkulu University students likewise had 1,002 of 1,067 noun phrases correct, i.e., 94%, rated "Excellent".) Based on the table above, the quality of the types of noun phrases that Bengkulu University students produced in their introduction sections was excellent (94%). Figure 1 (bar chart) presents the differences between the number of noun phrase errors in UPI and Unib students' introduction sections. From the bar chart, the students at the Indonesia University of Education and Bengkulu University differed in the number of noun phrase errors. The Indonesia University of Education students had 51 items, out of 1,299 noun phrases, in components and types of noun phrase errors (3.9%), for which the frequency category was "rare"; however, Bengkulu University students had 65 items, out of 1,067 noun phrases, in components and types of noun phrase errors (6.1%), for which the frequency category was also "rare". The results showed that the most frequent component of noun phrase errors made by Indonesia University of Education and Bengkulu University students in the introduction parts of their undergraduate theses was a head error. This was probably because the students were still influenced by their first language (interlingual error). The differences in the systems of both languages make the learning process complicated and contribute to students' errors in learning languages. This was supported by the data showing that the students did not use suffixes on plural nouns; it indicated that the students were still influenced by their first language, Bahasa Indonesia, which does not use suffixes for plural nouns. This finding was in line with the finding by Hmouma (2014), in which the component of noun phrase error that students most often made was in the noun (head). In addition, the finding was also supported by the finding from Novianti (2018), in which the result of her research was that the students often wrote noun phrases that contained head errors. Although the students made some noun phrase errors, the quality of the components of noun phrases in their introduction sections was in the excellent category. The researcher discovered just 3 out of 4 noun phrase error types, based on Dulay's theory, in students' undergraduate thesis introductions written by Indonesia University of Education students and Bengkulu University students, in which they made errors of the omission, addition, and misformation types. The most frequent type of noun phrase error in introductions differed between Indonesia University of Education and Bengkulu University students: in the case of the Indonesia University of Education, it was an addition error. This happened probably because the students who learned English as a foreign language were still confused or made errors in noun phrase forms. This statement was also supported by Kurniawati, Fauziati, & Sutopo (2015), who said that students were still confused and did not master noun phrase forms because of the differences between Indonesian and English noun phrase forms.
This is also known as intralingual error, based on James's theory (2001), and this happens when students are unfamiliar with a target language pattern at any standard or in any category. However, at Bengkulu University it was an omission error. It was probably because of an Interlingual error. The students are affected by their mother language's persistence when using the new language. Based on the result of analyzing students' undergraduate thesis introductions, the researcher discovered the omission of "s" for a plural noun. This was in line with Erlangga, Suarnajaya, and Juniata (2019), who found that the omission of "s" for a plural noun was an interlingual error. This finding was supported by Kusuma, Sujoko, and Sulistyowati (2014), whose research result showed that the most frequent type of noun phrase error was omission error. In addition, the finding was also supported by the result from Sitorus and Sipayung (2016), in which the most frequent type of noun phrase error was omission error. Thus, students at the Indonesia University of Education and Bengkulu University were influenced by intralingual and interlingual errors. Although the students made some noun phrase errors, the quality of the types of students' noun phrases in their introduction sections was in the excellent category. The differences between noun phrase errors made by Indonesia University of Education and Bengkulu University students in the introduction parts of their undergraduate theses, in which there were differences, such as the most frequent type of noun phrase error in the student's undergraduate thesis introductions. In the case of the Indonesia University of Education, the error, based on Dulay's theory, was an addition error. However, in the student's undergraduate thesis introductions at Bengkulu University, concerning the noun phrase error, based on Dulay's theory, the most frequent type was omission error. Besides `that, the number of noun phrase errors that occurred in students' undergraduate theses, in introduction sections, at Bengkulu University was higher than at the Indonesia University of Education. This happened probably because the accreditation of the University, especially the study program, also influences students' outcomes; for example, in a university that has an "A" accreditation for its study program, its students' undergraduate theses have fewer errors than those in a University which has a "B" or "C" accreditation for its study program. This is in line with Kumar, Shukla, & Passey (2020), who stated that the development and application of curriculum, as well as the achievement of academic results, were heavily reliant on qualified faculty. Volkwein, in Kumar, Shukla, and Passey (2020), discovered that accreditation was a major driving force in such a set of convergent variables that impacted academic programs and learning. Ulker and Bakioglu (2019) also supported that the accreditation had the greatest impact on the emphasis placed on academic results and also had an impact on the number of students graduating from a program. Furthermore, another reason why Indonesia University of Education students had few noun phrase errors than Bengkulu University students was probably that students' competitiveness at the Indonesia University of Education was higher than that in Bengkulu University, because the Indonesia University of Education has stricter filtering, in accepting the students for its university or study program than Bengkulu University. 
Besides that, there were also several similarities between the Indonesia University of Education and Bengkulu University in students' undergraduate thesis introductions: the most frequent component of noun phrase errors in students' introductions at both the Indonesia University of Education and Bengkulu University was a head error. Then, the quality of the components of students' noun phrases in students' introduction sections at the Indonesia University of Education and Bengkulu University was in the excellent category. Finally, there was no misordering error among the noun phrase error types in students' undergraduate thesis introductions at the Indonesia University of Education and Bengkulu University.
CONCLUSION
Based on 20 students' thesis introductions, consisting of 10 from the Indonesia University of Education and 10 from Bengkulu University, the researcher examined the noun phrase errors made by the Indonesia University of Education and Bengkulu University students in the introduction parts of their undergraduate theses and found some differences and similarities. Concerning the differences in the number of noun phrase errors, students at Bengkulu University had a higher number than those at the Indonesia University of Education: there were 51 noun phrase errors in the 10 thesis introductions of students at the Indonesia University of Education and 65 noun phrase errors in the 10 thesis introductions of students at Bengkulu University. The students at the Indonesia University of Education had addition error as the most frequent type of noun phrase error; however, students at Bengkulu University had omission error as the most frequent type. Even though they had different results for the type of noun phrase error, students at the Indonesia University of Education and Bengkulu University still had a similarity in that the most frequent component of noun phrase error at these two universities was the head error.
Impact of suboptimal dosimetric coverage of pretherapeutic 18F-FDG PET/CT hotspots on outcome in patients with locally advanced cervical cancer treated with chemoradiotherapy followed by brachytherapy
Highlights
• Hotspots can be easily identified on the pretherapeutic PET in patients with cervical cancer.
• Registration of PET with the planning CT allows for the dosimetric coverage evaluation of these hotspots.
• The initial hotspot was not entirely included in the CTV-high risk in 40% of patients who recurred during the follow-up, compared to 7% in patients without recurrence.
• Hotspots-guided radiotherapy could be applied easily in daily routine.
Introduction
Cervical cancer (CC) is the third most common malignancy and the fourth most common cause of cancer-related deaths in women worldwide [1]. Seventy to eighty percent of patients are diagnosed with locally advanced disease [2]. In these patients, the standard of care consists of pelvic external beam radiotherapy (EBRT) in combination with cisplatin-based chemotherapy, and subsequent brachytherapy (BT). However, despite recent advances in cervical cancer management, especially in image-guided radiotherapy and image-guided brachytherapy, approximately 40% of patients present a recurrence after curative intent treatment, and eventually die of disease [2]. 18F-fluorodeoxyglucose (18F-FDG) positron emission tomography/computed tomography (PET/CT) is recommended for initial staging [3] or at the time of recurrence of CC and could also have an interesting role in evaluating response to treatment [3,4]. In addition, the intensity of 18F-FDG uptake is a valuable prognostic factor: the pre-treatment maximum standardized uptake value (SUVmax) has been shown to predict the presence of lymph node metastasis at diagnosis, and is an independent predictor of recurrence and survival [5]. For brachytherapy, a residual gross tumor volume (GTV), a high-risk clinical target volume (CTV-HR) and an intermediate-risk clinical target volume (CTV-IR) are defined according to (i) gynecological examination and (ii) MRI at diagnosis and at the time of brachytherapy [6]. However, the role of PET in radiotherapy planning and particularly in volume delineation has been poorly studied in patients with CC [7][8][9][10][11][12]. Pre-treatment high 18F-FDG uptake areas on PET/CT, denoted as "hotspots", have been identified as preferential sites of local relapse after chemoradiotherapy (CRT) in many tumor subtypes including non-small cell lung cancer [13], rectal cancer [14], head and neck cancer [15] and esophageal cancer [16]. We previously found that there is a good correlation between the site of recurrent disease and the more active intratumoral region on the baseline PET/CT in locally advanced CC [17]. The identification of these hotspots on the staging PET/CT could guide the dose distribution of the BT plan and improve outcome whilst minimizing irradiation of surrounding tissues. We therefore investigated the dosimetric coverage of these hotspots during the brachytherapy procedure, in patients with or without recurrent disease.
Patients
All patients with histologically proven locally advanced CC, staged IB1-IVA (FIGO 2009 definition [18]; the classification of patients according to FIGO 2018 staging is available in Table 1, supplementary material), treated at our institution with definitive curative CRT and subsequent BT from September 2012 to December 2017, and who developed recurrence during follow-up, were included in this retrospective study. A control group of patients with the same clinical- (age, tumor volume, FIGO staging, lymph node status) and treatment- (type of EBRT, BT dose, chemotherapy) related characteristics who did not develop any recurrence during follow-up was also identified for the purpose of matching. A minimum follow-up of 6 months was mandatory. Patients with a history of previous chemotherapy or RT and/or metastatic disease were excluded. None of the patients received adjuvant chemotherapy. All patients underwent an initial pre-treatment 18F-FDG PET/CT as part of the initial staging (PET1) and at the time of recurrence. Collected data included age and date of diagnosis, histology, FIGO stage, presence of positive lymph nodes on 18F-FDG PET/CT, tumor size as measured on MRI, EBRT and BT dose, date and site of recurrence, as well as date and status at last follow-up. Recurrences were considered as local (vaginal and/or cervical), regional (pelvic/para-aortic), or distant (upper abdominal and/or extra-abdominal) [19]. All patients provided signed informed consent for the use of their clinical data for scientific purposes and for the anonymous publication of data. Our Institutional Review Board approved this study (29BRC18.0015).
PET/CT acquisition
Scans were performed on a Biograph mCT S64™ (Siemens® Healthineers Medical Solutions, Knoxville, TN, United States) for all patients. Standard preparation included at least 4 h of fasting and a serum blood glucose level < 7 mmol/L before tracer administration. PET acquisitions were carried out approximately 60 min after injection of 3 MBq/kg of 18F-FDG. The Biograph scanner consisted of a 64-slice multidetector-row spiral CT with a transverse field of view of 700 mm. Standard CT parameters were used: collimation of 16 × 1.2 mm 2 , pitch 1, tube voltage of 120 kV, and effective tube current of 80 mAs. 3D PET data were reconstructed using an ordered subsets expectation-maximization (OSEM) algorithm (true X 5 point spread function + time of flight OSEM, three-dimensional [3D]).
Treatment
Consortium guidelines were applied to outline the GTV, the CTV, the planning target volume (PTV) and organs-at-risk [20]. Treatment consisted of three-dimensional conformal radiotherapy (3DRT) (n = 30 in each group) or intensity-modulated radiotherapy (IMRT) (n = 12 in each group) delivered using a linear accelerator (ONCOR™ Digital Medical Linear Accelerator from Siemens® Medical Solutions, Inc. or a TrueBeam STx Novalis linear accelerator), followed by high-dose-rate (HDR) intracavitary BT. All patients received pelvic EBRT or extended-field RT to the para-aortic area using high energy photons (18 MV), at a dose of 45 Gy using standard fractionation. For patients with positive pelvic or para-aortic lymph nodes, an image-guided targeted boost was delivered sequentially up to a total dose of 54 Gy to the involved nodes. However, for 6 patients (3 in each group) a dose of 50.4 Gy only was delivered due to OAR constraints (Table 2, supplementary material).
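The brachytherapy dose constraints quoted below are expressed as equivalent doses in 2 Gy fractions (EQD2) with α/β = 10 Gy. As a rough, illustrative worked example (the fraction numbers are assumptions for illustration, not the study data), the standard linear-quadratic conversion EQD2 = D (d + α/β)/(2 + α/β) gives, for 45 Gy of EBRT delivered in 25 fractions of 1.8 Gy, EQD2 10 ≈ 45 × (1.8 + 10)/12 ≈ 44 Gy, and for 4 HDR BT fractions of 7 Gy, EQD2 10 ≈ 28 × (7 + 10)/12 ≈ 40 Gy. The sum, ≈ 84 Gy, is of the order of the 85 Gy CTV-HR D90 objective; the actual D90 values are of course evaluated per plan and per volume.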
Patients received 3-4 fractions of MRI-guided HDR intracavitary BT every 4 days (Table 2, supplementary material), which commenced one week after EBRT completion. The prescribed dose was 6-7 Gy per fraction to the high-risk CTV. The following dose constraints were applied: CTV-HR D90 (EQD2 10 ) ≥ 85 Gy, CTV-IR D90 (EQD2 10 ) ≥ 65 Gy, GTV D98 (EQD2 10 ) ≥ 90 Gy, D2 cm 3 of bladder < 90 Gy, D2 cm 3 of rectum < 75 Gy, and D2 cm 3 of sigmoid/bowel < 75 Gy [6]. The reference ICRU (International Commission on Radiation Units) recto-vaginal point dose had to receive less than 75 Gy. A new plan was performed for each brachytherapy fraction. The delineation of the volumes of interest was performed on the planning CT with the help of the MRI.
Follow-up
Clinical follow-up consisted of physical examination every three months until 2 years after diagnosis, every 6 months up to 5 years, and annually thereafter, and was done alternately by the radiation oncologist and the gynaecologist. Follow-up imaging studies consisted of MRI and 18F-FDG PET/CT at 3 months after treatment and annually until 2 years after treatment completion, CT every 6 months until 2 years after treatment completion and if clinically indicated thereafter, and/or 18F-FDG PET/CT if clinically indicated.
PET/CT and CT planning registrations
For each patient, a rigid registration of the CT component of the pretherapeutic PET/CT with the radiation planning CT was performed using the 3D Slicer™ Expert Automated Registration module [21] optimized with the Mattes mutual information metric [22]. The transform was initialized with a registration of the two centers of mass of the images with a box centered on the cervix. The obtained transform was then applied to the corresponding PET. In cases of obvious misalignments (in case of tumor response and/or deformation by the applicator at the time of BT), manual adjustments by translation were allowed.
Volumes determination
We exploited PET images only. The fuzzy locally adaptive Bayesian (FLAB) algorithm previously validated for automatic tumor volume delineation [23,24] was used. Indeed, in the absence of ground-truth and based on previous results, FLAB was assumed to provide more accurate and robust volumes compared to fixed thresholds [25,26]. FLAB was applied using 3 classes (one for background and the other two for tumor) to simultaneously define an overall tumor volume and the high-uptake sub-volume, referred to as V1 [27].
Coverage analysis
First, we converted the CTV-HR into a volume and then evaluated the inclusion of the segmented high-uptake sub-volumes (V1) in the CTV-HR for each brachytherapy session. This was done using the "Dose Volume Histogram" module in 3D Slicer. The average of the 3-4 BT sessions was reported. We also evaluated the coverage of the V1 by different isodose lines (from the 85 Gy isodose line to the isodose that allowed 100% coverage of the CTV-HR). To evaluate the spatial distribution of the hotspots, all PET sets with delineated hotspots were registered with a reference planning scan from a standard patient after automatic deformable registration using the ATLAS option of the MIM Vista® 6.5.2 software.
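A minimal sketch (not the authors' code) of this kind of overlap evaluation, assuming V1, the CTV-HR and an isodose region have already been converted to binary masks on a common voxel grid of the planning CT:

```python
# Illustrative computation of the percentage of the PET high-uptake sub-volume
# (V1) covered by another region (CTV-HR or an isodose volume), analogous to
# what the 3D Slicer Dose Volume Histogram module reports. Toy masks only.
import numpy as np

def percent_coverage(v1_mask: np.ndarray, region_mask: np.ndarray) -> float:
    """Percentage of V1 voxels that also belong to region_mask."""
    n_v1 = v1_mask.sum()
    if n_v1 == 0:
        return float("nan")
    return 100.0 * np.logical_and(v1_mask, region_mask).sum() / n_v1

# Hypothetical 3D masks standing in for resampled structures.
v1 = np.zeros((64, 64, 32), dtype=bool);     v1[20:30, 20:30, 10:20] = True
ctv_hr = np.zeros((64, 64, 32), dtype=bool); ctv_hr[18:32, 18:32, 8:22] = True
iso_85 = np.zeros((64, 64, 32), dtype=bool); iso_85[22:40, 15:40, 5:25] = True

print(percent_coverage(v1, ctv_hr))  # inclusion of V1 in the CTV-HR (100% here)
print(percent_coverage(v1, iso_85))  # coverage of V1 by the 85 Gy isodose
```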
Statistical analysis
Patients in the recurrence cohort were individually matched to patients in the non-recurrence cohort. The matching criteria used were tumor volume, stage (FIGO IB1 to IVA), lymph node involvement and age. Where an exact match according to the above criteria was not possible, the method of minimization was used to restrict differences between patients. In cases of more than one exact match to a "recurrence patient", one "non-recurrence patient" was randomly selected from all possible exact matches. When patients with local relapse or distant relapse specifically were analyzed, the corresponding subgroup of patients was considered.
Patient and tumor characteristics
Eighty-four patients were included, forty-two in each group. Patient characteristics are shown in Table 1. (Abbreviations used in the tables: FIGO = International Federation of Gynecology and Obstetrics; V1: high-uptake sub-volume; MTV: metabolic tumor volume; TLG: total lesion glycolysis; 3D-RT = three-dimensional conformal radiotherapy; IMRT = intensity-modulated photon radiotherapy; EBRT = external beam radiotherapy; BT = brachytherapy; D98 GTV res: dose received by 98% of the residual gross tumour volume of the primary tumour, a/b = 10.) The mean ± SD follow-up was 26 ± 11 months. No patient experienced delays or breaks in EBRT due to short-term toxicity (median RT duration, 49 days; range, 47-51 days). Among the 42 patients with recurrence, 20 patients were still alive and 22 had died from the disease at the time of analysis. Eight patients had an isolated local recurrence, 8 had local and nodal recurrences, 5 had local and distant recurrences, 4 had an isolated regional recurrence, 3 had regional and distant recurrences, and 14 had an isolated distant recurrence (Fig. 1). As new biopsies were not performed in the 4 patients having isolated regional recurrence and in 6 patients having distant recurrence, pathological confirmation of recurrence was available in only 32 patients (76.2%).
Registration procedure and tumor volumes
Amongst patients who recurred during the follow-up, a manual correction by translation and deformation after rigid registration was required in 28 women (10 pelvic recurrences/18 distant recurrences) due to significant anatomical variations of bladder and/or rectum filling, tumor response and/or deformation by the applicator at the time of BT. According to FLAB, the mean entire tumor volume on PET was 42.6 ± 32.3 cm 3 and was significantly larger than the mean high-uptake sub-volume (V1) of 15.3 ± 11.8 cm 3 (p < 0.01). The PET tumor characteristics in women with either pelvic or distant relapse are presented in Table 2. For patients without recurrences, a manual correction was required in 27 patients by translation and deformation after rigid registration due to significant anatomical variations in bladder and/or rectum filling, tumor response and deformation by the applicator at the time of brachytherapy. The PET tumor characteristics of these patients, matched with patients with pelvic and with distant relapse, are presented in Table 2. For patients in the non-recurrence group, V1 was not included in the CTV-HR and not covered by the 85 Gy isodose in 7.1% of patients. The mean inclusion of V1 in the CTV-HR was 99.1 ± 3.4% (Fig. 3) and the mean coverage of V1 by the 85 Gy (Fig. s1B, supplementary material), 80 Gy (Fig. s2B, supplementary material), and 78 Gy (Fig. s3B, supplementary material) isodoses was 99.4 ± 2.2%, 100% and 100%, respectively. There was a significant difference between inclusion in the CTV-HR and coverage by the 85 Gy isodose in patients without recurrence compared to patients with distant recurrence (p < 0.0001) (Fig. 3). All patients had 100% coverage of V1 by the CTV-IR. The spatial distribution of the different hotspots is illustrated in Fig. 4.
Discussion
The use of 18F-FDG PET/CT has already become a standard for radiotherapy planning and target volume outlining in some tumors such as lymphoma or lung cancer [28]. However, in patients with cervical cancer PET/CT is currently only used for the detection of lymph node metastases. To our knowledge, this is the first study showing that FDG hotspots on baseline 18F-FDG PET/CT could guide the CTV delineation during BT in locally advanced CC. Our study showed that the dosimetric coverage of the initial high-uptake PET sub-volume by the 80% isodose was not achieved in more than 40% of patients who relapsed during follow-up, and dosimetric coverage of this subregion was significantly lower compared to patients without recurrence. Therefore, we recommend including these areas in the CTV-HR to ensure adequate coverage during planning. Surprisingly, patients with suboptimal coverage of hotspots seemed to develop more distant recurrence compared to patients with better coverage, when we would have expected more local failures. (Fig. 2 shows examples of brachytherapy planning in 2 different patients who experienced distant relapse. In these 2 cases, the entire V1 (green) was not included in the CTV-HR (magenta volume) and not covered by the D2eq = 85 Gy isodose (red), nor by the 80 Gy isodose (orange). It was, however, included in the CTV-IR (blue volume) and covered by the 78 Gy (cyan) and 65 Gy (green) isodoses.) Insufficient dose may allow growth of radioresistant tumor clones responsible for subsequent metastatic recurrence [29]. Moreover, we cannot exclude that microscopic local recurrences were initially associated with distant recurrences in these patients, but were not visible on imaging because of the action of systemic treatments. A strong dose-effect relationship for local control exists in the treatment of CC: in two series of 156 and 592 patients, doses to the HR-CTV of at least 86 Gy and 92 Gy were associated with local control rates higher than 90% and 95%, respectively [30,31]. However, recently, the patterns of failure from the RetroEMBRACE study showed that, with the use of image-guided adaptive brachytherapy, the patterns of relapse after chemoradiation have changed, with the predominant failure being systemic, whereas the predominant failure with conventional brachytherapy was pelvic [32]. Our findings are thus particularly interesting in this context, and in line with another study reported by Chargari et al. in 109 patients treated for a LACC by CRT and BT. This study showed that a lower ability to reach the target D90 to the HR-CTV at planning and an HR-CTV volume ≥ 40 cm 3 led to a high propensity of distant relapse [33]. Some studies have already investigated the predictive value of PET beyond detecting lymph node involvement and metastasis. A meta-analysis of 12 studies highlighted the prognostic value of volume-based 18F-FDG PET/CT parameters (metabolic tumor volume (MTV) and TLG) [34] in 660 patients, although conflicting results were reported by another study [35]. A study including 50 patients evaluated MRI- and PET-guided BT in patients with vaginal recurrence of CC after surgery [36] and reported that MTV > 15 cc (using 35% of the SUVmax by the gradient tumor segmentation method) resulted in inferior local control and a trend towards poorer overall survival.
However, these studies only evaluated the prognostic value of MTV and did not propose how MTV could be incorporated in the delineation process. Another report on 29 patients evaluated the concordance of the anatomical tumour volume defined on T2-weighted MRI (considered as the gold standard) with the MTV measured on 18F-FDG PET/CT [37]. A very good correlation was found between MTV30 (30% threshold of SUVmax) and anatomical tumor volume. Despite the low number of patients, these results are interesting and support the use of functional 18F-FDG PET/CT imaging as a surrogate of MRI for radiotherapy and image-guided brachytherapy planning. The incorporation of functional imaging could go further than simple target definition, as PET may help to identify hypermetabolic subvolumes and guide dose-painting in LACC. Our study has limitations. Firstly, that is a monocentric and retrospective series, with a small number of patients. Additionally, only primary tumors were analyzed. Moreover, despite the use of a validated automatic registration method, manual correction was required in some patients due to significant anatomical variations related to variations in bladder and/or rectum repletion. These corrections can lead to intra-or inter-observer variabilities. Finally, our findings are based on the hypothesis that V1 locations are not influenced by tumor shrinkage. Indeed, V1 are extracted from the pretherapeutic PET whereas CTV HR is delineated on the planning CT (with help of MRI) performed after EBRT, which usually allows a tumor volume decrease of 50% [38]. This could possibly over/underestimate the amount of V1 inclusion in the CTV HR. Despite these potential sources of bias, identifying 18F-FDG hotspots on initial 18F-FDG PET/CT is a promising approach for personalized treatment in patients undergoing CRT with inherent limitations that will need to be addressed before it can be used in clinical practice. Amongst these, the repeatability and robustness of the procedure has to be improved. Accurate segmentation is the first important step in identifying 18F-FDG hotspots. This is, however, straightforward, taking approximately one minute with FLAB. Although FLAB is not freely available, other efficient PET segmentation tools are available in clinical practice, such as adaptive thresholding or gradient-based method [25]. Therefore, our results should be reproducible by others. The registration between the planning CT and the CT component of the PET/CT is also a crucial step which requires 2-3 min, but does not require additional tools. Indeed, radiation oncologists deal now with functional and molecular imaging to increase the definition of target volumes in daily practice, and are increasingly familiar with registration between various imaging modalities, especially with the emergence of stereotactic radiotherapy [39,40]. To accommodate this, radiation oncology planning softwares now provide registration functionalities. The registration parameters are then easily and rapidly applied to the PET. Finally, hotspot outlining is exported to the brachytherapy software and its inclusion in the CTV HR is evaluated (one minute step). In total, we calculated a workload of less than 10 min is required for the entire procedure, which is certainly achievable in daily clinical practice. Our previous results suggested that 2 radiomics features in 18F-FDG PET and in ADC maps from Diffusion-weighted MRI (DWI MRI) are powerful predictors of the efficacy of CRT in the treatment of CC [35]. 
Higher values of these parameters are associated with worse outcome, confirming that more heterogeneous tumors have a poor prognosis. These findings can be acted upon to tailor treatment. Further work could analyse the correlation of the different radiomics features with V1. Two other aspects could also be investigated inside clinical trials, namely (i) BT dose escalation to pretherapeutically identified hotspots in patients at high risk of isolated loco-regional relapse and (ii) better coverage of the initial hotspot by the CTV-HR in patients at high risk of metastatic relapse. Given the distribution of hotspots, which are in part located within the parametria (Fig. 4), it is possible that the use of interstitial brachytherapy would improve the hotspots' dosimetric coverage, especially in patients with parametrial involvement or large tumors [6,41]. (Fig. 4 shows the spatial distribution of all hotspots obtained after deformable registration of the PET imaging sets with a single planning CT; volumes in red represent the presence of > 75% of hotspots, orange 50-75%, yellow 25-50% and green < 25%.) These aspects have already been investigated in other tumor sites such as prostate cancer. Indeed, MRI has been studied to boost the dominant intraprostatic lesion (DIL) [42,43], whereas the value of prostate-specific membrane antigen (PSMA) PET/CT has been investigated in a planning study to allow a boost on the DIL in external radiotherapy [44,45].
Conclusion
Suboptimal dosimetric coverage of areas of high FDG uptake on pretherapeutic PET could be associated with an increased risk of CC recurrence. The identification of these hotspots on PET could guide the BT procedure in patients with CC. Further large prospective studies are needed to confirm and externally validate these observations.
Globalizing the democratic community This article discusses the problem of global democracy, and why democratic legitimacy seems so difficult to attain at the global level. I start by arguing that the difficulties we experience when we try to widen the scope of democratic governance beyond the boundaries of individual states have nothing to do with the characteristics of global society, but result from the underlying assumption that a political community has to be bounded and based on consent in order for democratic legitimacy to be possible. I then explore how this view came into being, arguing that the perennial paradoxes of democratic legitimacy are little but results of earlier and successful attempts to make the concept of political community coextensive with that of the nation. Finally, I argue that once we let go of the idea that political communities have to be bounded and based on consent in order to qualify as democratic ones, the paradox of democratic legitimacy will look like a category mistake rather than an inescapable obstacle to global democracy. Keywords: cosmopolitanism; democratic paradox; legitimacy; global governance Today leading political theorists believe that globalization constitutes a threat to modern democracy by undermining its foundations: state sovereignty and national identity. 1 Since most of these theorists would like to save democratic institutions and practices without sliding back into nationalist nostalgia, they have explored a variety of ways to widen the scope of democratic governance beyond the boundaries of the state. Yet these efforts have been constantly compromised by what looks like an insurmountable problem. If democratic governance presupposes a community in order to be legitimate, global governance cannot be democratically legitimate since there is no corresponding community at the global level that could bestow it with legitimacy. But this problem is neither new nor specific to the global level: In order for any political authority to be legitimate in democratic terms, it must be based on the actual or hypothetical consent of people or a community. But since the identity of that people or community is difficult to account for in terms themselves democratic, most theories of democratic legitimacy issue in paradoxes that cannot be satisfactorily resolved by modern political theory. 2 As Van Roermund eloquently has put this problem, 'self-representation never seems to capture the self that is representing itself'. 3 Since the people cannot decide on their own composition, many political theorists have assumed that democratic community and its boundaries are the outcome of historical accidents and therefore cannot be judged according to any standard of legitimacy. 4 While being essential to democratic legitimacy, the political community and its boundaries are themselves outside the purview of normative reasoning. This insight has led to despair among those who argue in favor of global democracy, sometimes to the point of conceding that talk of democratic legitimacy at the global level is pointless in the absence of a world government that first could reconstitute mankind into one single political community. Since such a world government presently seems to be out of reach, global democracy therefore looks equally utopian. 
Others have responded more optimistically to this lack of democratic legitimacy by trying to find viable substitutes for a global demos, such as a global civil society or an increased responsiveness and accountability among global governance institutions. 5 But these latter solutions presuppose that the global realm displays some features that could permit common norms to emerge and become institutionalized independently of what goes on at the domestic level. So, while there is no demos to be found at the global level, it is reasonable to expect a global society based on democratic values to emerge from the interplay of global political institutions and those affected by their decisions. But why is it so hard to make coherent sense of the concept of a global demos? Answering this question is the task of the present article. But instead of trying to solve the problem of democratic legitimacy, I will try to explain how this problem came into being, and, by implication, why it might be less of a problem for global democracy. As I shall argue, the difficulties we experience when we try to widen the scope of democratic governance beyond the boundaries of individual states have nothing to do with the characteristics of global society, but result from the underlying assumption that a political community has to be bounded and based on consent in order for democratic legitimacy to be possible. From this point of view, the perennial paradoxes of democratic legitimacy look like little but residuals of earlier attempts to nationalize the concept of political community by making this concept semantically equivalent to that of the nation, and taking the hypothetical consent of its members to be the source of its legitimacy. This is to say that claims to particularity cannot be justified in universalistic terms, only in terms themselves particularistic. Accordingly, once we let go of the idea that political communities have to be bounded and based on consent in order to qualify as democratic ones, the paradox of democratic legitimacy will look like a category mistake rather than an inescapable obstacle to democratic governance. Pursuing this argument, I will proceed in three steps. First, I will take a critical look at some contemporary attempts to widen the scope of democratic governance beyond the boundaries of individual states. Second, I will briefly describe how democratic legitimacy became a problem of modern political theory, and why the conventional solutions to this problem issue in paradoxes that have resisted resolution. Third, I will suggest a way of redefining political community that makes it possible to dissolve the paradoxes of democratic legitimacy by suggesting that the only prima facie legitimate demos must be coextensive with mankind as a whole. I The idea that globalization constitutes a threat to modern democracy can be formulated in at least two ways. First, if we take globalization to bring a virtually unrestricted flow of capital and the reign of market forces at a transnational level, it becomes tempting to focus on the corrosive effects of this on state autonomy. As Held states, '[m]odern democratic theory and practice was constructed upon Westphalian foundations. National communities, and theories of national communities, were based on the presupposition that political communities could, in principle, control their destinies.'
6 When domestic politicians seek to regain control over national economies, they do so by ceding at least some autonomy to supranational institutions lest they lose out completely to the corporate world. Yet such ceding of autonomy comes at a price, since they then effectively move formal authority as well as control over outcomes outside the scope of domestic democratic institutions. What once was within the purview of due democratic deliberation is now more a matter of multilateral agreements between government officials at different levels. 7 Deprived of any real power, domestic democratic institutions become increasingly hollow. From this follow two strategic options for the democratically minded: either to increase their independence from both global forces and supranational institutions, or to opt for democratization of those supranational institutions in order to tame these forces and restore some consensual legitimacy to their decisions. Otherwise nobody is in charge and no one is accountable, and we will have no way left to influence our destiny as citizens. 8 Second, if we take globalization to bring a virtually unrestricted flow of information and people at the transnational level, it becomes tempting to focus on the corrosive effects on the identity of national political communities. Transnational flows might compromise the cultural homogeneity of a people, and since it takes a people to constitute the demos necessary for democratic institutions to be legitimate, those transnational flows might subvert the foundations of democracy by pushing cultural plurality to an intolerable limit. In order for a people to govern itself, its members need to know who they are: a people and not just a multitude of strangers. 9 The democratically minded again have two options at their disposal: they can either move in a nationalist direction by trying to preserve the uniqueness of their own community against the onslaught of global mobility, or they can move in a cosmopolitan direction by trying to extend the scope of democracy beyond the boundaries of particular political communities while trying to become as tolerant as possible within each of them. 10 Let us disregard the nationalistic option for a moment, and focus on current attempts to globalize democratic governance. Theories of cosmopolitan democracy usually buy into some version of the above diagnosis, and then proceed by rethinking political community in light of these challenges. 11 They frequently begin their argument by pointing out that democracy, in the shape we know it, has been closely associated with the nation state. They then argue that if the nation state indeed is about to lose its status as the predominant locus of political authority and community, then the only way to save democracy is by redefining political community in such a way that it can include people irrespective of their citizenship in particular communities. Instead of several mutually exclusive demoi, we need to create one inclusive demos to cater to the demands of popular sovereignty in an increasingly globalized world. 12 But as Buchanan and Keohane have argued, 'the most obvious difficulty with this view is that the social and political conditions for democracy are not met at the global level.' 13 This is so because there is 'no worldwide political community constituted by a broad consensus recognizing a common domain as the proper subject of global collective decision-making'.
But nevertheless, in the absence of any kind of community at the global level, the very aspiration to democratic legitimacy would be rather pointless. But how, then, can a global demos be constructed and justified? Two main ways of constructing a global demos compete in the literature. First, we find the idea that a global demos ought to include all human beings. Each human being should have an equal voice since each serious political concern is likely to be of global scope. 14 Second, we find the idea that those who are affected by a particular decision should be included in the demos, so what constitutes the scope of the demos in question will vary with the nature of the decision. Each issue should therefore be settled by those affected by the outcome in each particular case, not by mankind as a whole. 15 But as both Näsström and Wendt have shown, justifying these solutions is very difficult, since the transition from our present situation in which political communities are bounded to an unbounded global community of all mankind has to take the present situation into consideration: in order for this new community to enjoy democratic legitimacy, it has to be considered legitimate by its prospective citizens. 16 That is, it must be democratically constituted, rather than forced upon them by some global political authority. But this merely begs the question of who these citizens are, a decision that itself cannot be settled by any democratic process, since that process then would presuppose exactly what it is supposed to yield. The second solution is equally problematic, since we then have to face the question of how to determine who is affected and who is not by a particular decision, and this might of course lead to divergent interpretations in each individual case. But if democratic legitimacy is wanted, who is affected and who is not should be settled in ways themselves democratic, that is, by those affected. Ergo: who is affected should be decided by those affected. Thus, both ways of justifying a global democratic community in terms themselves democratic presuppose the prior existence of that community, trapping these attempts to construct a global demos in a vicious circularity. In response to these difficulties, some authors have tried to find routes to global democracy that rely on other sources of democratic legitimacy, such as deliberation and contestation. One of these goes through transnational institutions. The relocation of political authority to the transnational level might yield decentralized forms of authority that eventually will chime well with a world of territorially unbounded communities. Hopefully, the collective allegiance to the procedures of deliberative democracy would then generate overlapping and constantly fluctuating demoi, each being relative to the issue area at hand. 17 From this perspective, we would not need any demos to keep democracy alive at the global level, only a proper differentiation into different spheres of social and political action, and the maintenance of democratic conduct within each of them from the bottom up. 18 Another solution would be to accept the existence of a multiplicity of different demoi, and opt for the gradual democratization of the relations between them by strengthening the transnational public sphere and its institutions.
19 Yet in both cases, it is difficult to see how and why the allegiance to democratic values and procedures could be safeguarded through the transnational dispersion of political authority, since the warrant of democratic deliberation seems to be some normative authority prior to the structure of authority legitimized by means of the same set of procedures. So rather than finding ourselves lost when it comes to justifying a global demos, we are lost when it comes to justifying the authority necessary to uphold standards of democratic deliberation within as well as across different demoi in democratic terms. The second route goes through negotiating the paradox of democratic legitimacy. Even granted that not all communities are the outcome of popular consent, the democratic paradox nevertheless becomes inescapable whenever we want to justify these communities and their boundaries in democratic terms. To Benhabib, the way to negotiate the resulting paradoxes is by means of iterations of democratic practice which could allow a given demos to redefine itself in the face of ongoing 'political contestation in which the meaning of rights and other fundamental principles are reposited, resignified, and reappropriated by new and excluded groups.' 20 To Honig, the paradoxes of democratic legitimacy are integral to the possibility of democratic governance and productive of its widening scope beyond the boundaries of individual communities. To her, 'democracy is always about living with strangers under a law that is therefore alien [and] about being mobilized into action periodically with and on behalf of people who are surely opaque to us and often unknown to us'. 21 Thus, the paradox of democratic legitimacy is 'the condition in which we find ourselves when we think and act politically'. 22 But if peoples and political communities owe their existence to the contingencies of history, why should we bother justifying them at all? Worse still, why should democratic practices then be confined to bounded communities thus constituted? Answering these questions will force us to take a closer look at the problem of democratic legitimacy and the paradoxes its solutions give rise to. II How and why did we end up with the problem of democratic legitimacy? Before answering this question, I think it is important to note that this problem presupposes that democratic communities have to be bounded and based on consent. In a world without boundaries, the boundary problem would not be a problem. In a community without consent, legitimacy would have to be derived from other sources. Thus, if we want to understand why democratic legitimacy is a problem, we should start by asking how the concepts of boundaries and consent once were married in political theory. As I would like to suggest, this particular union was the outcome of a broader trend to nationalize socio-political concepts that went hand in hand with efforts to justify the modern sovereign state. This nationalization implied that the range of reference of socio-political concepts was brought to coincide with the boundaries of the sovereign state, and that their meaningful usage was equally restricted by the imagined necessity of such boundaries: all I can offer is a brief sketch of how this happened in political thought. To the ancients, democratic legitimacy had been less of a problem.
They could largely take the existence of the polis for granted, and if ever in doubt, they could point to the founding authority of a Solon or a Lycurgus, or appeal to the conventions embodied in ancient customs and institutions. 23 When democratic forms of government later fell into disrepute, this was largely because of the intrinsic difficulty in determining the scope of the relevant demos without thereby inviting its corruption in a world in which boundless and universal forms of community constituted the given starting point for most attempts to justify political authority. But when democratic ideals started to resurface during the Enlightenment, these ancient roads to legitimacy had been blocked by the secular and revolutionary aspirations of that age, and the pitfalls of democratic government well forgotten. 24 Without the city-state as the given point of reference and with a universalistic framework still in operation, it was also hard to come up with reasons why democratic governance should be restricted to particular communities, rather than applied to mankind as a whole, irrespective of its division into distinct communities. Hence, to writers like Diderot, Raynal, Paine and Condorcet, the global spread of popular sovereignty was seen as a way of overcoming what they saw as a tragic division of mankind into distinct communities of unequal standing, and the immoral impact this had on their intercourse. As Diderot argued, the general will is universal and 'forms the rule binding the conduct of an individual towards another in the same society, together with the conduct of an individual towards the whole society to which he belongs, and of that society itself towards other societies ... submission to the general will is the bond which holds all societies together'. 25 But as Robert Wokler has shown in admirable detail, the paradox of democratic legitimacy arises the moment Rousseau tries to restrict the scope of this general will to a particular community by demanding that the community in question ought to be based on the consent of its members. 26 Taking such consent as his starting point, Rousseau discovered that the boundaries of a democratic community cannot be justified in terms themselves democratic, since the people cannot constitute itself ex nihilo. If sovereignty has to derive from the people, and if the people by definition cannot be defined by itself, that is, democratically, then how is it possible for a people to be both ruler and ruled all at once? As he stated the resulting paradox: For a young people to be able to relish sound principles of political theory and follow the fundamental rules of statecraft, the effect would have to become the cause; the social spirit, which should be created by those institutions, would have to preside over their very foundation; and men would have to be before the law what they should become by means of law. The legislator therefore, being unable to appeal to either force or reason, must have recourse to an authority of a different order, capable of constraining without violence and persuading without convincing. 27 While the city republic continued to evoke nostalgic pangs in his imagination, Rousseau had to make his case for democracy from scratch. 28 In order for democracy to be possible, there has to be a people united by means of common laws, yet these laws would have to derive their legitimacy from the same people.
But how could the people ever be constituted in the absence of a founding authority, and how could the proper boundaries of the political community be drawn without thereby presupposing the existence of that people? Since the above problem could not be solved by logical means, it quickly became a matter of finding a pragmatic solution that catered to the political agenda of the Revolution while concealing its paradoxical character. What Emmanuel de Sieyès did in this respect may seem self-evident to us who have been accustomed to take parts of his solution for granted: he introduced the concept of the nation in order to define the proper boundaries of the political community, thereby also justifying the exercise of popular sovereignty within it. As he asked rhetorically: [h]ow can one believe that a constituted body may itself decide on its own constitution (...) Power belongs only to the whole ... From this it follows that the constitution of a country would cease to exist at the slightest difficulty arising between its component parts, if it were not that the nation existed independently of any rule or any constitutional form. 29 To Sieyès, the nation is logically prior both to sovereign authority and to the corresponding demos. As he explains, '[t]he nation is prior to everything. It is the source of everything. Its will is always legal; indeed, it is the law itself'. 30 By conceptualizing the nation as the original source of political authority, Sieyès could brush the paradox of democratic legitimacy under the carpet. As Näsström has summarized this move, it was a matter of placing the nation rather than the individual in an imaginary state of nature, and spelling out the consequences for the inner workings of the political community. 31 And as Wokler remarked on the end result, 'in addition to superimposing undivided rule upon its subjects, the genuinely modern state further requires that those who fall under its authority be united themselves, that they form one people, one nation, morally bound together by a common identity ... the modern state generally requires that the represented be a moral person as well, national unity going hand in hand with the political unity of the state.' 32 In the French context, it was then left to the next generation to bring the nation into existence through an array of clever propagandistic measures. 33 But in the guise it first emerged during the Revolution, the concept of the nation did not presuppose cultural homogeneity or a common identity. To Sieyès, as to many liberals after him, what makes it possible for the people to appear as a unity is not the sameness of the citizens, but rather the fact that the nation is something more than the sum of its parts, irrespective of the individual characteristics of the citizens. 34 Not only did this way of defining the political community circumvent the problem of democratic legitimacy as it had been posed by Rousseau, but it had obvious practical advantages compared to competing definitions, since it made it possible to articulate a view of popular sovereignty based on representation rather than on the participation of all citizens. 35 Later, in those times and places where the legitimacy of a political community was in doubt, the link between state and nation could be reinforced by appealing to a common culture or the common historical memory of a people.
36 Consequently, in many cases, ethnos and demos have become inseparable expressions of the same quest for popular sovereignty and democratic legitimacy. 37 At this point, it is common to point out that this transition was greatly facilitated by the fact that writers like Bodin and Hobbes already had justified the principle of indivisible sovereignty and that the territorial framework of its exercise already had been established a long time ago. All that Rousseau and Sieyès had to do was to replace the King with the people as the ultimate source and locus of that indivisible sovereign authority within an already territorially demarcated community. But how was this transition from kings to people carried out within political thought? I think important clues can be found in the ways the concept of a general will was defined and used before Rousseau made it the touchstone of popular sovereignty. For when he distinguishes between a general will and the will of all, he does so by identifying the former with the formal sovereignty of the people as a whole, and the latter with an aggregate expression of individual wills. The general will is never wrong, since 'the people is never corrupted, but it is often deceived', while particular wills often easily become misguided. 38 Now these different wills can only be brought to coincide if individual wills are considered in their individuality, that is, in strict isolation from any other association than the state itself, since such 'partial societies' are potent sources of corruption. As Rousseau rephrases Machiavelli's warning against factionalism, 'if groups, sectional associations are formed at the expense of the larger association, the will of each of these groups will become general in relation to its own members and private in relation to the state'. 39 Thus, a viable political community requires that the people consist of nothing but individuals, each standing in an equal relationship to the indivisible authority of their totality. Only then can the differences between individual wills be cancelled out and ultimately be reconciled with the general will through representation or deliberation. Thus the very concept of a general will presupposes that the people is categorically distinct from the individuals that compose it, and hence also that the people thus constituted can act wholly independently of their individual wills, however combined. Now this assumption is hard to reconcile with the idea that the people itself is constituted by a prior contract between its individual members to enter as free and equal members into the political community before they can consent to any sovereign authority, even granted that this authority emanates from their wills at the very same moment they enter into the political community. The tension between the general will and the will of all therefore remains unresolved within the contractual framework as long as the latter is supposed to be a precondition of the former. But what if the general will actually is a precondition of the will of all? As Foucault has reminded us, before the triumph of modern democracy, there was an art of government that took its object of governance to be a population, and which regarded the happiness of its members as a means to the survival and smooth functioning of the state.
40 If we step outside the contractual framework for a moment and unpack some of its underlying assumptions, I think that some clues to how this transition was carried out can be found in the theory of political will which Rousseau borrows from his absolutist predecessor d'Argenson. In fact, d'Argenson furnishes the missing link between the concept of population as an object of governance and the idea of a people able to govern itself. By breaking down the people into individual wills, d'Argenson is able to argue that there is no basic difference between the will of the sovereign and the will of the people, only a numerical distinction between different wills that only can be handled through the use of political arithmetic by the sovereign. Through this investigation, writes the Marquis solemnly, 'I hope to show that popular administration can be exercised under the authority of the Sovereign, without diminishing the public power which it instead increases, and that this is the source of happiness of the people'. 41 In order to bring about this outcome, the sovereign must learn how to control the manifestations of particular wills at different levels of society; this royal control sometimes includes giving people latitude to deliberate and act independently whenever it is suitable from the perspective of the sovereign. As a consequence, the sovereign will strengthen his power, benefit the community, as well as get an edge in international affairs. 42 So perhaps we are forced to admit that modern democracy is a manifestation of a prior will to govern, a will that first constitutes the people as an object of government and then turns it into a subject of government in order to legitimize itself. Now this foray into the prehistory of modern democratic theory does nothing to solve the problem of democratic legitimacy, but it does help us understand a few things better. First, it makes us aware that the problems faced by democratic communities today cannot be blamed on globalization, but rather are to be found at the very origin of modern democratic theory. The paradox of democratic legitimacy has been around since democracy was nationalized, and the paradigmatic way of handling this problem since then has been to use the concept of the nation, however defined, in order to square the circle and brush the question of what makes nations legitimate under the carpet. The revolutionary concept of the nation was created precisely in order to furnish democratic governance with legitimacy, to the same extent as popular rule itself was necessary in order to justify the existence of indivisible sovereign authority within bounded political communities. Second, the above account helps us understand why emancipation from this state of affairs today is perceived as so urgent by so many. If the revolutionary solution consisted in substituting the nation for the individual in an imaginary state of nature, this move had the inevitable side effect of actually realizing that nasty state of affairs among political communities. In order to enjoy the benefits of democracy domestically, we have had to accept that sovereign authority ultimately is justified with reference to the state of exception prevailing in the international realm precisely as a consequence of democratic legitimacy.
Therefore, it seems as if the revolutionary solution to the problem of democratic legitimacy has backfired, since the cash value of the above arrangement is that mankind now oppresses itself, in a perversely democratic way, by consenting to remain confined into particular communities whose bounded character also is the very condition of possible warfare between them. Hence, democratic governance at the domestic level is an obstacle to be overcome if we want to emancipate ourselves from the costly illusion of being human by virtue of being members of a 'people', as well as from the corollary and even costlier reality of being stuck in an international state of war. III But how can we escape this predicament? Ironically, our situation with regard to the problem of democratic legitimacy is not unlike that of Rousseau, insofar as his solution is as irrelevant to us as those of the ancients were to him. We no longer live in a world in which bounded communities remain the predominant loci of political authority and the ultimate sources of human belonging, and the way in which these once were justified today only makes sense as a source of nationalist nostalgia. The way in which boundaries are drawn and peoples defined cannot be justified with reference to theories of democracy, since they presuppose exactly that which stands in need of justification. This insight has led to a widespread cynicism with regard to the possibility of justifying democratic governance at any level, since it is tempting to argue that all communities ultimately owe their existence to more or less violent relocations of political power rather than to the consent of their members. If this is the case, political authority is nothing more than power having been around long enough to become taken for granted by those subjected to it, and peoples and boundaries are but outcomes of successful efforts to homogenize an arbitrary multitude of human beings into citizens. By implication, our theories of democratic legitimacy are but ideologies designed to conceal the violence of such founding acts and their consequences. 43 Many people believe that this is what is happening today at the global level as well. If this pattern were to repeat itself at the global level, this would entail that questions of legitimacy only could be meaningfully posed if and when a global structure of authority has been firmly established. 44 This implies that the creation of a global demos would require a prior concentration of power at the global level in order to be possible. Since there is no global culture or common historical memory which could provide the symbolic foundations of a global political identity or citizenship, the creation of a single demos of a planetary scope would require allegiances to particular political communities to be gradually undone through a global process of homogenization. Only when this process has been completed can global political institutions start to enjoy legitimacy by commanding consent among their members. But such domestic analogies merely make us forget what made them possible, namely conceptual nationalization. Through these processes, the meaningful usage of the concept of democracy was restricted to bounded political communities, and democratic legitimacy within such communities was supposed to derive from the consent of their members. But would it be possible to make coherent sense of democracy in the absence of both boundaries and consent?
I think an affirmative answer to this question becomes possible when we realize what makes both boundaries and consent possible. Such claims to particularity are only possible against the backdrop of characteristics that are universally shared by human beings, yet these characteristics themselves do not confer any automatic legitimacy upon such claims. That claims to particularity have to be justified in universalistic terms also makes them reversible, since these claims equally well could be contested on the same universalistic grounds. The same set of reasons used to legitimize a particular people or community in terms of consent could equally well be used to dispute its legitimacy on grounds of its boundless contestability. In fact, before the process of nationalization gained momentum, the predominant way of understanding political community in Western political thought was by regarding mankind as one immanent and universal community, by virtue of the sociability of its members. The idea that consent ought to constitute the only legitimate foundation of authority was closely connected to the ambition to nationalize the concept of political community, insofar as such consent also was essential to the identity of the political community. But the idea of consent derived from the very same sources as human sociability: the universal human capacity to form social bonds by means of the use of language and reason. But as long as human sociability provided the foundation of most attempts to legitimize political authority, it was hard to come up with reasons why political communities should be bounded or based on consent, given the fact that the capacity to form social bonds by means of language and reason are innate characteristics of all members of the species. All the way from the Stoics to Kant, such assumptions of a universal community of all mankind provided the starting point of much Western political and legal thought, as well as for subsequent critiques of despotism, imperial expansion, and colonial exploitation. 45 While many of those universalistic conceptions of human community are difficult to defend in scientific and secular terms, they provided the conceptual foundations for subsequent attempts to legitimize particular peoples and communities in terms of consent, as well as for most attempts to contest the legitimacy of the same peoples and communities in universalistic terms. Such universalistic conceptions of political community might contain some of the things we need in order to make coherent sense of democracy in the absence of a unifying global authority or a common global culture, without reducing humanity to a mere multitude of unencumbered selves. This is so because universalistic conceptions of community understand mankind to be a unity categorically distinct from the mere sum total of its individual members, constituted not by their sameness but rather by their radical diversity. However different each people or community might be from each other, they nevertheless share the attribute of being different in common, which is the condition of their basic unity. From this point of view, communities of lesser scope are but instantiations of a shared capacity among human beings to form social bonds by means of the use of language and reason, rather than manifestations of particular principles of reason or expressions of particular linguistic communities.
This entails that the basic modern requirement of democratic legitimacy, the existence of bounded communities based on the consent of their members, must be seen as the outcome of a prior differentiation of mankind that is essentially contestable since it is not based on the universal consent of all mankind but on historical contingencies alone. Thus, what has to be justified in democratic terms is not the existence of this or that particular people and the boundaries they have drawn around themselves, but the very division of mankind that has made such claims to particularity possible in the first place. So instead of asking under what conditions globalization might bring about a transition to global democracy, and how the desired end result of such a transition might be justified, I think we should reverse the thrust of the entire argument. Such a reversal might help us understand why the existence of particular communities and their boundaries has been so hard to justify in democratic terms, once we realize that the members of different demoi share some characteristics in common that are essential for the formation of any political community of whatever scope and size. If we are willing to admit that mankind as a whole ought to be considered the ultimate source of legitimacy by virtue of these shared capacities for social life, the burden of proof would no longer rest with those who argue in favor of considering mankind as a single demos. Rather it would rest with those who argue that any people or community could enjoy legitimacy independently of the rest of mankind, not only since each member of the former necessarily also is a member of the latter, but also since these capacities themselves are universal. Thus it also becomes plain why democratic governance must be global in scope in order for democratic legitimacy to be possible, and why the paradox of democratic legitimacy is a category mistake rather than a genuine logical paradox. For democratic governance to be legitimate in terms themselves democratic, all claims to particularity must be open to contestation at the global level before democratic communities of lesser scope and size can be considered democratically legitimate. Those issues that must be settled either by or with reference to mankind as a whole if the outcome is to be legitimate in democratic terms thus concern whether this or that particular people or community are legitimate sources of political authority and hence also entitled to sovereignty. All such claims would ultimately therefore have to be settled with reference to the contestability of the community in question. For a community and its boundaries to be contestable in practice means that barriers, legal as well as cultural, to entry and exit are low or non-existent. Thus, the easier it is for members to exit and non-members to enter and remain within a given community, the more democratically legitimate it is, and conversely. This implies that the existence of a global demos at least has to be assumed before claims to legitimacy by any people or community can be conclusively settled in terms themselves democratic. And this leads to the conclusion that all particular claims to democratic legitimacy must be considered invalid in principle until they have been opened to such contestation.
Until then the legitimacy of each particular people or community will remain wholly contingent on the historical accidents that brought them into being, and they will therefore remain wholly provisional sources of democratic legitimacy. Now most of those who have wrestled with the paradox of democratic legitimacy have resisted this obvious conclusion. The logical difficulties we encounter when we try to justify particular claims to democratic legitimacy indicate that these claims simply are invalid on their own terms, and are based on lingering nationalist intuitions rather than on logical analysis. This is not to say that all particular peoples or communities necessarily lack democratic legitimacy, only to say that such claims will have to be evaluated against a framework that takes mankind as a whole into consideration, since a global demos is the only demos that could enjoy prima facie democratic legitimacy. Nor is this to say that all boundaries between communities necessarily are illegitimate, only to say that the question of their legitimacy can only be settled democratically with reference to the wider community of all mankind. Hence, to put it simply, democracy has first to become global before it can be a democratically legitimate form of governance at any other level. ACKNOWLEDGEMENT I would like to thank Nicholas Greenwood Onuf, Sofia Näsström, Anne-Kathrine Nyborg, Eva Erman, Hans Agné, Ulrika Mörth and Anders Uhlin for their valuable comments on earlier drafts of this article.
2019-01-20T18:07:10.668Z
2008-01-01T00:00:00.000
{ "year": 2008, "sha1": "04c482e8d516642879ed1c42aa39ab1ada0a3851", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3402/egp.v1i4.1858", "oa_status": "GOLD", "pdf_src": "TaylorAndFrancis", "pdf_hash": "a164ab08b9a651c37584a77c7212be1bb2c796ea", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [ "Sociology" ] }
208316519
pes2o/s2orc
v3-fos-license
Epidemiology, clinical features, and management of severe hypercalcemia in critically ill patients Background Severe hypercalcemia (HCM) is a common reason for admission to the intensive care unit (ICU). This case series aims to describe the clinical and biological features, etiologies, treatments, and outcomes associated with severe HCM. This study included all patients with a total calcemia above 12 mg/dL (3 mmol/L) admitted to two ICUs from January 2007 to February 2017. Results 131 patients with HCM were included. HCM was related to hematologic malignancy in 58 (44.3%), solid tumors in 29 (22.1%), endocrinopathies in 16 (12.2%), and other causes in 28 (21.3%) patients. 108 (82.4%) patients fulfilled acute kidney injury (AKI) criteria. Among them, 25 (19%) patients required renal replacement therapy (RRT). 51 (38.9%) patients presented with neurological symptoms, 73 (55.7%) patients had cardiovascular manifestations, and 50 (38.1%) patients had digestive manifestations. The use of bisphosphonates (HR, 0.42; 95% CI, 0.27–0.67; P < 0.001) was the only treatment significantly associated with a decrease of total calcemia below 12 mg/dL (3 mmol/L) at day 5. ICU and hospital mortality rates were, respectively, 9.9% and 21.3%. Simplified Acute Physiology Score (SAPS II) (OR, 1.05; 95% CI 1.01–1.1; P = 0.03) and an underlying solid tumor (OR, 13.83; 95% CI 2.24–141.25; P = 0.01) were two independent factors associated with hospital mortality in multivariate analysis. Conclusions HCM is associated with high mortality rates, mainly due to underlying malignancies. The course of HCM may be complicated by organ failures which are most of the time reversible with early ICU management. Early ICU admission and prompt HCM management are crucial, especially in patients with an underlying solid tumor presenting with neurological symptoms. Background Hypercalcemia (HCM) is commonly defined by a serum calcium level above 2.6 mmol/L or 10.5 mg/dL, 40% of which is ionized calcium [1]. Despite the lack of a clear definition of severe HCM, a calcemia above 12 mg/dL (3 mmol/L) is frequently retained as a threshold. Because HCM may lead to life-threatening complications, patients often require intensive care unit (ICU) admission. Clinical symptoms are non-specific, depending on calcium levels and rapidity of onset. They include renal complications, ranging from polyuria to acute kidney injury (AKI) [2]; cardiovascular complications, including sinus tachycardia, hypertension, and infarct-like ST segment elevation [3]; digestive events, from abdominal pain to acute pancreatitis [4]; and neurological impairments, including seizures [5], delirium, and coma. Epidemiological data focusing on HCM in the ICU are scarce. In a multicentric retrospective study, severe hypercalcemia (defined by an ionized calcemia > 2.9 mEq/L or 1.45 mmol/L) occurred in 2% of critically ill patients [6]. Few studies found an independent association between HCM and hospital or ICU mortality [7,8]. Among all causes of HCM, primary hyperparathyroidism and malignancies (including solid tumors and hematologic malignancies) are predominant. Recent epidemiological studies estimate that HCM affects between 0.65% and 3% of all oncology patients [9,10]. Within tumor-related etiologies, multiple myeloma, breast, lung, and kidney cancers are the most frequent. In this context, HCM often occurs at a metastatic stage [9], which is no longer per se a contraindication for ICU admission [11,12].
However, there are no data available on the etiologies responsible for HCM in ICU patients. Due to the scarcity of epidemiological and clinical data on patients suffering from HCM in the ICU, we conducted a retrospective study of 131 patients in two different French ICUs. Our objectives were to characterize the clinical and biological features of HCM and to identify risk factors for complications of HCM and for mortality. Methods The present work is a retrospective study performed in two distinct ICUs. The medical ICU of the Saint-Louis University Hospital, Paris, France, is a 12-bed medical unit that admits 850 patients per year, of whom about one-third have hematologic malignancies. The nephrology ICU of the Tenon University Hospital, Paris, France, is a 17-bed medical unit that admits 900 patients per year. The study was approved by the ethical committee of the "Société de reanimation de langue française" (n°18-34). Patients We included all adult patients admitted to the ICU with severe HCM from January 2007 through February 2017. We defined severe HCM by a total calcemia above 12 mg/dL (3 mmol/L) [13,14]. When needed, calcemia was corrected by serum albumin level or, failing that, by total protein levels. Definitions AKI was defined and staged according to the 2012 KDIGO (Kidney Disease: Improving Global Outcomes) guideline [15]. Decisions regarding the initiation, discontinuation, and modalities of renal replacement therapy (RRT) were made by senior nephrologists based on the guidelines from Bellomo and Ronco [16]. We used the Glasgow Coma Scale score [18] for evaluation of mental status at admission. Bisphosphonate safety was assessed based on creatinine variation 3 months after ICU admission. Patients' characteristics Demographic parameters, medical history, presenting symptoms, and treatments were collected. All laboratory data were recorded at admission. Serum creatinine level was recorded 3 months before ICU admission when possible, at ICU admission, ICU discharge, hospital discharge, 3 and 6 months after ICU stay, and at last follow-up. Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score (SAPS II) parameters were collected on day 1 [19,20]. ECOG/WHO performance status [21] and Charlson comorbidity index [22] were evaluated based on previous medical records. Serum PTH levels were measured using a radioimmunologic assay (CIS-Bio-Radio-ImmunoAnalysis®) with a normal range between 8 and 76 ng/L. Serum 25-hydroxyvitamin D was measured using a radioimmunologic method (Diasorin®) that recognized both 25-hydroxyvitamin D2 and 25-hydroxyvitamin D3. An excess of 25-hydroxyvitamin D was arbitrarily defined by a level above 100 ng/mL. Serum 1,25-dihydroxyvitamin D was measured using a radio-immunologic method (Diasorin®) with reference values ranging from 20 to 60 pg/mL. Plasma phosphate was measured using colorimetry (phosphomolybdate assay) and serum ionized calcium using a specific electrode. Vital status at ICU discharge, hospital discharge, and last follow-up was determined from medical records and the outpatient clinic electronic database. Treatment All patients received the standard of care. The period and duration of renal replacement therapy were collected. Bisphosphonate treatment dose and duration were recorded. Type and daily volume of hydration, corticosteroid treatment, and use of salmon calcitonin, furosemide for hypocalcemic purposes, and calcimimetics were collected.
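As a side note on the severity threshold and the albumin correction just described, the arithmetic can be made explicit. The sketch below is purely illustrative: the study does not state which correction formula was applied, so the widely used Payne formula is assumed here, and the unit conversion simply uses the molar mass of calcium (about 40.08 g/mol).

```python
# Illustrative sketch only: the study corrects calcemia by serum albumin (or,
# failing that, total protein) without specifying the formula; the commonly
# used Payne correction is assumed here.

CA_MG_PER_MMOL = 40.08  # atomic weight of calcium, mg per mmol

def mgdl_to_mmoll(ca_mg_dl: float) -> float:
    """Convert total calcium from mg/dL to mmol/L (10 dL per L)."""
    return ca_mg_dl * 10.0 / CA_MG_PER_MMOL

def corrected_calcium_mg_dl(measured_ca_mg_dl: float, albumin_g_dl: float) -> float:
    """Payne formula (assumed): add 0.8 mg/dL per 1 g/dL of albumin below 4 g/dL."""
    return measured_ca_mg_dl + 0.8 * (4.0 - albumin_g_dl)

def is_severe_hcm(total_ca_mg_dl: float) -> bool:
    """Study threshold: total calcemia above 12 mg/dL (about 3 mmol/L)."""
    return total_ca_mg_dl > 12.0

print(mgdl_to_mmoll(12.0))  # ~2.99, matching the 3 mmol/L threshold
```

Running it confirms that the 12 mg/dL cut-off used throughout the study corresponds to roughly 3 mmol/L.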
Statistical analyses Patients' characteristics at ICU admission are described as median and interquartile range (IQR) for quantitative variables and as frequencies and percentages for qualitative variables. Distribution of baseline variables, life-supporting treatments, and specific treatments of HCM were compared between patients alive or not at ICU discharge using the Wilcoxon test for quantitative variables and Fisher's exact test for qualitative variables. Cumulative incidence of mortality in the ICU and hospital was estimated, taking ICU or hospital discharge as a competing risk. Univariate analysis for complications was performed with logistic regression to identify factors associated with complications. For mortality risk in the ICU, univariate analysis with logistic regression was adjusted on the SAPS II score and the Charlson comorbidity index. Multivariate models were adjusted on age and SAPS II for mortality, pre-existing cardiopathy for cardiovascular complications, CKD for AKI stage 3, and variables that were significant at the 0.15 level. The endpoint of the treatment analysis was the decrease of total calcemia below 12 mg/dL (3 mmol/L) between day 0 and day 5. Univariate analyses were performed with Cox proportional hazards regression models. The final multivariate model was adjusted on etiologies and on treatments significant at the 0.15 level. Patients with stage V CKD were excluded from this analysis. All tests were two sided with a 5% type I error. All calculations were performed with the R software (version 3.4.4).
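To make the treatment-effect analysis described above concrete, a minimal sketch of the Cox proportional hazards fit is given below. The paper's calculations were done in R; this Python version with the lifelines package is an illustration only, and the input file and column names are hypothetical, not the study's data.

```python
# Minimal sketch of the day-5 treatment analysis described above; the authors
# used R (version 3.4.4), so this Python/lifelines version is illustrative only.
import pandas as pd
from lifelines import CoxPHFitter

# Expected columns (one row per patient), all hypothetical names:
#   time_days - days from day 0 until total calcemia fell below 12 mg/dL,
#               censored at day 5
#   event     - 1 if calcemia dropped below 12 mg/dL (3 mmol/L) by day 5, else 0
#   bisphosphonates, corticosteroids, furosemide, calcitonin - 0/1 indicators
df = pd.read_csv("hcm_cohort.csv")  # hypothetical input file

cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="event")
cph.print_summary()  # hazard ratios with 95% CIs, e.g., HR 0.42 for bisphosphonates
```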
Clinical features Patients' clinical features are reported in Fig. 1 and renal manifestations are reported in Table 2. Treatment characteristics and outcomes Treatment characteristics and outcomes are detailed in Table 4. Twenty-five (19% of the total population and 23.1% of the population with AKI) patients needed RRT during their ICU stay. The median duration was 5 days (IQR, 2.5; 7). Tumor lysis syndrome was the main indication for dialysis in 11 (8.4%) patients, followed by hypercalcemia in 6 (4.6%) patients and pulmonary edema in 4 (3%) patients. Vasopressors were used in 6 (4.5%) patients and mechanical ventilation was needed in 10 (7.6%) patients. ICU and hospital mortality rates were 9.9% and 21.3%, respectively. Risk factors for hypercalcemia-induced complications Univariate analysis of risk factors associated with hypercalcemia-induced complications is shown in Additional file 1: Table S3. By multivariate analysis (…). Figure 2 shows the cumulative incidence curve of hospital mortality based on solid tumor etiology. In univariate analysis (Additional file 1: Table S3) adjusted for the SAPS II score and the Charlson comorbidity index, solid tumors and neurological complications were associated with higher hospital mortality. We did not find an association between hospital mortality and bisphosphonate administration. Two factors, the SAPS II score and an underlying solid tumor, were independently associated with higher mortality in multivariate analysis. Impact of treatment We next analyzed the effectiveness of therapies (i.e., bisphosphonates, corticosteroids, furosemide, and calcitonin) to decrease hypercalcemia below 12 mg/dL (3 mmol/L) at day 5 (Additional file 1: Table S4). Discussion The present study is the first to describe the etiological investigations and clinical course of ICU patients hospitalized for severe HCM. Indeed, previous studies have focused on the association between ionized calcemia and ICU mortality. Egi et al. [6] found an association between severe HCM and ICU mortality. Conversely, Zhang et al. [23], in a large multicentric cohort, did not confirm this association. Beyond these epidemiological studies, there are no data focusing on the clinical and biological characterization of these patients. During HCM, one of the main reasons for ICU admission is the risk of cardiac rhythm disorders and a Brugada-like electrocardiographic pattern. A recent study has shown an association between HCM and a shorter QT interval, a longer PR interval, and J point elevation (mimicking a Brugada syndrome) regardless of the HCM etiology [24]. A Brugada-type EKG is associated with an increased risk of fatal ventricular arrhythmias and sudden death [25]. Only a few previous case reports described ventricular tachycardia and fibrillation in HCM patients [26,27]. In our study, one patient presented with an ST segment elevation, and another patient (with concomitant hypokalemia) presented with a ventricular tachycardia. The scarcity of severe cardiac rhythm disorders found in our study may be explained by an early ICU admission policy in the two centers and early treatment of HCM (with a median delay to bisphosphonate therapy of zero days, i.e., the day of admission). Besides cardiovascular events, AKI was frequent and often severe (19% of AKI patients required renal replacement therapy). Multiple mechanisms may be involved in HCM-induced AKI, including a decrease of the glomerular ultrafiltration coefficient [28], the induction of nephrogenic diabetes insipidus via down-regulation of aquaporin-2, disruption of the countercurrent multiplier system [29,30], and a loop diuretic-like effect [31], which contribute to polyuria and volume depletion. Accordingly, almost two-thirds of HCM-induced AKI patients had a pre-renal AKI phenotype and 11% were polyuric prior to ICU admission. Other factors, such as nephrotoxic drugs, tumor lysis syndrome, and an underlying hematological malignancy with renal involvement, contributed to AKI episodes in our study. Our study was underpowered to show an impact of AKI on mortality. However, as small changes in serum creatinine have been shown to be associated with increased mortality [32,33], prolonged hospital stay, and decreased complete remission of the underlying malignancy in hematologic patients [34], we believe that prompt treatment of HCM to prevent AKI is of utmost importance in HCM patients. In multivariate analysis, an underlying solid tumor was independently associated with hospital mortality. One explanation is that HCM is often a late complication in the course of solid tumors, appearing in our study at a metastatic stage in 24% of cases. Second, HCM patients with underlying solid tumors experienced more neurological complications. Delirium is known to be associated with longer hospital stay and higher ICU and hospital mortality [35,36]. We therefore believe that HCM patients with neurological symptoms require aggressive treatment of HCM. ICU mortality in our cohort was 9.9%, which is consistent with previous studies [37]. However, in the onco-hematology subgroup, 17.3% of ICU survivors died during hospitalization, after the correction of HCM. Causes of death are often multifactorial in these patients and HCM is mainly a marker of an advanced disease. These rates are far lower than usually reported in patients with onco-hematological malignancies hospitalized in the ICU. Indeed, a prospective multicentric study of 1011 patients with hematological malignancies found ICU and hospital mortality rates of 27.6% and 38.3%, respectively [38].
Even worse results were found in a Korean monocentric study of onco-hematological patients, in which ICU and hospital mortality rates were 32.2% and 56%, respectively [39]. This discrepancy is explained by two reasons: first, the nature of ICU admissions in our study. While the most common reasons for ICU admission are usually sepsis and acute respiratory failure, the main reason for admission in our study was the HCM itself, without a significant rate of multivisceral failure, as suggested by the low median SOFA scores at admission. Second, early ICU admission of these patients allowed prompt, aggressive HCM therapy. Indeed, almost all patients received adequate hyperhydration, with 80% of them receiving a bisphosphonate infusion on day 0. Only the use of bisphosphonates was linked with a significant decrease of calcemia on day 5. We believe that this medication should be the cornerstone of the treatment of severe HCM, regardless of renal function. Our study has several limitations. First, because of the retrospective design of the study, unidentified confounding factors may have been overlooked in the multivariable analysis. Second, due to the small number of patients in our cohort, our study may have been underpowered to show any relationship between HCM-induced AKI and mortality. Moreover, the results of our multivariate analysis show that our confidence intervals are wide, suggesting statistical instability. Third, the high prevalence of onco-hematological malignancies in our cohort may have introduced a selection bias in our results. Indeed, this high prevalence may be explained by the specialized recruitment of onco-hematological patients in one center. However, this is consistent with previous studies that have found malignancies to be the main causes of HCM in the emergency department [40]. Conclusion Finally, patients with severe HCM are at high risk of organ failures that are most of the time reversible with early ICU management. Early aggressive therapy of HCM may prevent these complications, mainly AKI. Special attention should be paid to patients with onco-hematological malignancies to detect neurological complications associated with HCM. Prospective studies are needed to evaluate the existence of a threshold of HCM beyond which hospitalization in intensive care is necessary and to identify the most effective treatments. Additional file 1: Figure S1. Calcemia (mg/dL) course between day 0 and day 9. Table S1. Characteristics of patients with suspected vitamin D intoxication. Table S2. Extra-renal manifestations. Table S3. Univariate analysis of determinants of complications of HCM. Table S4. Impact of therapies on total calcemia at day 5.
2019-11-28T12:23:48.342Z
2019-11-27T00:00:00.000
{ "year": 2019, "sha1": "bf62b7b607b42dd2a756a6ca1ebfefe0ef1b3673", "oa_license": "CCBY", "oa_url": "https://annalsofintensivecare.springeropen.com/track/pdf/10.1186/s13613-019-0606-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2c8b547d83468bf194eb6f8c32b64b8dfc1ea55d", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
236547305
pes2o/s2orc
v3-fos-license
A Rare Case of Hemoglobin Bart's Hydrops Fetalis due to Uniparental Disomy of Chromosome 16 Hemoglobin (Hb) Bart's hydrops fetalis is the most severe form of α-thalassemia and is usually inherited in an autosomal recessive manner. We report a case of Hb Bart's hydrops fetalis due to uniparental disomy of chromosome 16. Antenatal screening showed a low maternal mean corpuscular volume (MCV), while paternal MCV was normal. The fetus was found to have a thickened nuchal translucency during first trimester screening for Down's syndrome. Mid-trimester fetal anomaly ultrasound scan showed fetal cardiomegaly with pericardial effusion, scalp edema, ascites and an elevated middle cerebral arterial peak systolic velocity (MCA PSV). Multiplex polymerase chain reaction (PCR) on DNA from amniocentesis showed that the fetus was homozygous for the South East Asian (SEA) type 2 α-globin gene deletion. Chromosome microarray (CMA) showed two regions of absence of heterozygosity (AOH) on the terminal p and q arms of chromosome 16. The rare occurrence of Hb Bart's hydrops fetalis caused by maternal uniparental disomy should be considered in cases of fetal hydrops, even where paternal MCV is normal. Introduction Thalassemia is a single gene disorder involving inherited defects in globin chain biosynthesis [1]. It can be classified into alpha (α)- and beta (β)-thalassemia. Alpha-thalassemia occurs when there is a deletion of one or more α-globin genes (located on chromosome 16), leading to reduced or absent synthesis of α-globin chains [2]. Hemoglobin (Hb) Bart's disease involves the complete loss of all four α-globin genes (--/--), resulting in no α-globin chain production [1]. It is the most severe form of α-thalassemia, usually leading to death in utero or shortly after birth [3,4]. Thalassemia is inherited in a simple autosomal recessive Mendelian pattern. Hb Bart's disease is usually due to inheritance of the α-thalassemia cis deletion (--/αα) from each parent [3], and couples who are both carriers of this deletion have a 25% chance of having a child affected by Hb Bart's hydrops fetalis syndrome [4]. We present a case of Hb Bart's hydrops fetalis syndrome due to uniparental disomy. Investigations The patient was a 32-year-old gravida 2 para 0 Chinese woman with a previous history of a right tubal ectopic pregnancy and a family history of thalassemia. She had an early dating scan at 8 weeks of gestation. First trimester Down's syndrome screening was performed at 12 + 2 weeks of gestation, with ultrasound scan showing that the fetal nasal bone was present, but fetal nuchal translucency was thickened at 5 mm (Fig. 1). Adjusted risks of trisomy 21 (Down's syndrome), trisomy 18 (Edwards' syndrome) and trisomy 13 (Patau's syndrome) were 1 in 30, 1 in 25, and 1 in 352, respectively. Non-invasive prenatal testing revealed a low probability of trisomy 21, 18 and 13, a low probability of sex chromosome aneuploidy, and no evidence of DiGeorge syndrome. Antenatal blood tests showed that the patient had an Hb level of 9.6 g/dL with a low MCV of 72.9 fL. Hb electrophoresis did not reveal any abnormal Hb band, and blood group and antibody screen showed that her blood group was O+ with no significant red cell antibodies. Her husband had a normal Hb (15.6 g/dL) and MCV (86.0 fL). His Hb electrophoresis did not reveal any abnormal Hb band. The patient underwent amniocentesis at 16 + 1 weeks of gestation. Amniotic fluid sent for karyotyping showed a female karyotype (46,XX) with no apparent chromosomal abnormalities.
Chromosome microarray (CMA) was also sent. Diagnosis Mid-trimester fetal anomaly ultrasound scan at 19 weeks showed signs of hydrops fetalis. An enlarged fetal heart with pericardial effusion (Fig. 2) and suspicion of a possible atrial septal defect (ASD) was noted. Scalp edema (Fig. 3) and ascites were also seen, together with an elevated middle cerebral arterial peak systolic velocity. Multiplex PCR on DNA from the amniocentesis showed that the fetus was homozygous for the South East Asian (SEA) type 2 α-globin gene deletion, and CMA showed two regions of absence of heterozygosity (AOH) on the terminal p and q arms of chromosome 16. Treatment and follow-up The patient subsequently underwent termination of pregnancy at 22 weeks of gestation. Discussion Alpha-thalassemia is the most prevalent single gene disorder [4], with a high prevalence of 22.6% in Southeast Asia [5]. In Singapore, a multiracial Southeast Asian country, 6.4% of Chinese, 4.8% of Malays and 5.2% of Indians were found to be carriers of α-globin gene mutations [6]. Routine antenatal screening for thalassemia is conducted for all pregnant women in Singapore by measurement of maternal MCV and Hb electrophoresis. A low maternal MCV will warrant measurement of paternal MCV, and genetic testing for α-thalassemia will also be offered. A low maternal MCV with a normal paternal MCV usually does not trigger further investigation, since the child should not be affected by a thalassemia major syndrome if the typical Mendelian autosomal recessive inheritance pattern is followed. We present a rare case of Hb Bart's hydrops fetalis syndrome due to maternal uniparental disomy of chromosome 16. Uniparental disomy refers to the inheritance of a pair of chromosomes from only one parent [7]. Although maternal uniparental disomy of chromosome 16 is commonly associated with intrauterine growth restriction [8], it has also been associated with unmasking of recessive conditions, such as α-thalassemia major and Fanconi anemia [9]. Several cases of maternal uniparental disomy leading to Bart's hydrops fetalis syndrome have been reported by Kou et al, Au et al and Wattanasirichaigoon et al [10][11][12]. This case highlights the limitations of thalassemia screening with parental MCV. Firstly, using the threshold of MCV < 80 fL may not detect all patients who are thalassemia carriers, and there are cases of α-globin gene deletions found in patients with MCV > 80 fL [13,14]. Moreover, similar to our case, in the cases reported by both Kou et al and Au et al, only one parent was a carrier of α-thalassemia. This suggests that a normal MCV and negative thalassemia screen in a single parent may not rule out fetal Hb Bart's disease, owing to the rare occurrence of maternal uniparental disomy. In our case, signs of fetal hydrops were noted on mid-trimester ultrasound screening. Further investigation of the cause of fetal hydrops led to the discovery that the fetus was affected by Hb Bart's hydrops fetalis syndrome, which is the commonest cause of hydrops fetalis in Southeast Asia [15,16]. The fetus was found to be homozygous for the SEA type 2 α-globin gene deletion, which is the main cause of Hb Bart's hydrops fetalis in Asia [17]. This highlights the importance of a mid-trimester screening ultrasound even if only one parent is a thalassemia carrier. Although not available in all Asian countries, a mid-trimester ultrasound screening for fetal anomalies is performed for all antenatal patients in Singapore from 18 to 20+6 weeks, as recommended by the NICE guidelines [18]. In conclusion, uniparental disomy of chromosome 16 is a rare but possible cause of Hb Bart's hydrops fetalis and should be considered in cases with sonographic features of fetal hydrops, even when only one parent is an α-thalassemia carrier.
2021-08-02T00:05:40.732Z
2021-05-13T00:00:00.000
{ "year": 2021, "sha1": "45870abcd21f00c2006939203648899b8920ad08", "oa_license": "CCBYNC", "oa_url": "https://www.journalmc.org/index.php/JMC/article/download/3693/3068", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2b75cfa92ded65b2508e8fbd4e604604b12ca1e0", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
83176894
pes2o/s2orc
v3-fos-license
Characterization and Technological Properties of Bifidobacterium Strains Isolated from Breast-fed Infants : Bifidobacteria represent the largest group of human intestinal bacteria. They have an important place in human health and represent the dominant group microflora of the newborn breast-fed. Following the behavior of strains of Bifidobacteria isolated from the breast-fed infants and from saline rehydration solution was considered in order to develop therapeutic fermented milk. Samples from newborn infants aged 10 months, or from a saline rehydration solution (Celia/Develop ORS) containing Bifidobacteria sold was used and isolated strains belonged to breve and longum species. Those strains showed preferences to neutral pH. They are mesophilic and tolerate high temperatures (42 °C). Glucose was commonly carbohydrate used in selective media for Bifidobacteria . Production of titratable acidity and therefore lowering the pH varies from one strain to another. Introduction Bifidobacteria form the largest group of human intestinal bacteria, especially in children [1]. They occupy an important place in human health. The first species of this genus isolated in 1899 from a healthy infant breast-fed by Henry Tissierwhich was classified later as Bifidobacterium bifidum. They settle in a short time after birth and become the dominant group of bacteria, 92% of the microflora of the newborn breast-fed consists of Bifidobacteria. However, the rate of these bacteria is reduced in favor of Lactobacilli, Enterobacteriaceae, Streptococci and Clostridia throughout life [2]. Some properties of Bifidobacteria have promoted their use in food products called probiotics [3] such as fermented milks, cheese and milk powder [4]. The effect of probiotic Bifidobacteria depends on their survival not only in food but also in the gastrointestinal tract [5]. For this reason, it becomes necessary to identify and assess the population of Bifidobacteria in fermented products to ensure a sufficient intake of probiotics for the expected benefits. Origin of Samples Samples were obtained from newborn infants aged 10 months, or from a saline rehydration solution (Celia/Develop ORS) containing Bifidobacteria sold, citrate, lactodextrane, sucrose, minerals, and traces of milk and banana aroma. Culture Media The isolation of Bifidobacteria was performed on MRS supplemented with 0.05% cysteine-chloride and 2 mg/L nalidixic acid at pH 6.8. Isolation requires strict anaerobic conditions (anaerobic jars with gas-packs). The purification is performed by successive transplanting from Petri plates containing selective medium (MRS medium supplemented with 0.05% cysteine-chloride and 2 mg/L nalidixic acid). For storage, the strains were frozen at -20 °C in skim milk containing 30% glycerol, 10% of yeast extract and 0.2% cysteine-HCl [6]. Biochemical and Physiological Testing The pre-identification begins with the observation of colonies on MRS medium supplemented with 0.05% cysteine-chloride and 2 mg/L nalidixic acid. A catalase test is carried out. Research the type fermentation, production of indole, citrate on middle Kempler and Mc Kay, the demonstration of urease are performed. The test of proteolysis of gelatin and by a growth test on bile which is an important criterion for the selection of probiotics was followed. 
The influence of pH was tested on the selected strains at different pH values (4, 5, 6.5, 8 and 8.5), and the influence of incubation temperature (25 °C, 30 °C and 45 °C) was also assessed, followed by a growth test in hypersaline media containing 4% and 6.5% NaCl. Fermentation of Carbohydrates The fermentation of sugars was examined on MRS medium containing bromocresol purple as pH indicator, to which 2% of various sugars (arabinose, glucose, fructose, galactose, lactose, maltose, mannitol, rhamnose, sucrose, xylose, esculin and sorbitol) was added. The preparations were covered with 1 mL of sterile paraffin oil for anaerobiosis. Incubation was carried out at 37 °C for 24 to 48 h. The results of the various morphological, physiological and biochemical tests were compared to those described by different authors [7]. Kinetics of Growth In pure culture, the growth kinetics of the Bifidobacteria strains were followed by plate counting on solid MRS medium supplemented with 0.05% cysteine-chloride and 2 mg/L nalidixic acid, at different time points (0 h, 2 h, 4 h, 6 h, ... up to 72 h). Determination of Titratable Acidity The acidity produced during growth in milk was determined as described by Accolas et al. [8], using NaOH (N/9) in the presence of phenolphthalein indicator (1% in alcohol); a numeric sketch of this titration is given at the end of this passage. pH was also monitored during growth: the acidity produced in the milk was followed by measuring the pH with a pH meter. Results and Discussion The colonies of Bifidobacteria that developed on MRS medium supplemented with 0.05% cysteine-chloride and 2 mg/L nalidixic acid are Gram positive and their appearance varies. They are whitish to cream, with a regular contour and of varying diameter. This macroscopic appearance is often found in Bifidobacteria. The purified strains were all catalase negative; they do not possess urease, do not produce indole and do not liquefy gelatin, but they are resistant to 2% bile. These characteristics are typical of the genus Bifidobacterium. The results showed that all strains ferment some sugars (glucose, fructose, maltose and lactose). Strains BL1 and BL2 ferment arabinose; this property seems to distinguish these longum strains from the other species. Moreover, strains BR1 and BR2 do not ferment arabinose or xylose but, on the contrary, ferment sorbitol; all these characteristics are shown in Table 1. With reference to the literature, the strains isolated from the feces of newborn infants belong to two species of Bifidobacterium: the two strains BL1 and BL2 belonged to the species longum, and BR1 and BR2 (isolated from the saline rehydration solution) to breve. The results for the sensitivity or resistance of the strains isolated from feces or from the saline rehydration solution to the different antibiotics are grouped in Table 2. The growth kinetics of the strains BL1, BL2, BR1 and BR2 in pure culture are illustrated in Fig. 1A, the evolution of pH in Fig. 1B and the titratable acidity in Fig. 1C. Bifidobacteria are commensal bacteria of humans; they are also found in animals [9]. The isolation of Bifidobacteria requires quite specific conditions, involving anaerobic systems such as Anaerocult or gas packs and rich culture media such as TPY, Beerens medium, MRS-cysteine and modified Columbia medium [7,10,11]. The pre-identification of the bifidobacteria strains on MRS medium supplemented with 0.05% cysteine chloride and 2 mg/L nalidixic acid showed small colonies with a regular outline and variable aspect. The cells forming these colonies are Gram positive and show varying, but often bifid, forms that are typical for Bifidobacteria.
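The titratable-acidity determination described above (titration with N/9 NaOH to a phenolphthalein end point) lends itself to a short worked conversion. The sketch below is only illustrative: the paper does not state the titrated milk volume or the reporting unit, so a 10 mL sample and the common Dornic convention (1 °D = 0.1 g lactic acid per litre) are assumed here.

```python
# Titratable acidity from an N/9 NaOH titration, as in the methods above.
# Sample volume and Dornic reporting convention are assumptions of this sketch.

LACTIC_ACID_MOLAR_MASS = 90.08   # g/mol
NAOH_NORMALITY = 1.0 / 9.0       # N/9 soda, ~0.111 mol/L

def titratable_acidity(v_naoh_ml: float, v_sample_ml: float = 10.0):
    """Return (lactic acid in g/L, Dornic degrees) for a phenolphthalein
    end-point titration of milk with N/9 NaOH."""
    mol_naoh = v_naoh_ml / 1000.0 * NAOH_NORMALITY        # mol of base consumed
    lactic_g = mol_naoh * LACTIC_ACID_MOLAR_MASS          # g lactic acid neutralized
    lactic_g_per_l = lactic_g / (v_sample_ml / 1000.0)    # expressed per litre of milk
    degrees_dornic = lactic_g_per_l / 0.1                 # 1 degree Dornic = 0.1 g lactic acid / L
    return lactic_g_per_l, degrees_dornic

# Example: 1.8 mL of N/9 NaOH used for 10 mL of fermented milk
print(titratable_acidity(1.8))   # -> (~1.8 g/L, ~18 degrees Dornic)
```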
Pleomorphism observed in Bifidobacteria is often associated with the composition of the culture medium. Studies have shown that the cellular morphology of Bifidobacteria is influenced by the nature of the carbon source present in the culture medium [2,13]. All strains belonging to the genus Bifidobacterium are catalase negative, do not form indole, have no urease activity and do not liquefy gelatin. These biochemical characteristics are consistent with the features specific to the genus reported by Mitsuoka [14]. All Bifidobacteria strains showed good growth on medium supplemented with 2% bile salts. It has also been found that Bifidobacteria degrade bile salts; this probiotic criterion is due to an enzyme that hydrolyzes bile salts (BHS) [15], which has been isolated from the strain Bifidobacterium longum BB536 [16] and from the strain Bifidobacterium longum SBT2928 [17]. Adaptive mechanisms of tolerance to bile salts could lead to better adaptation to the environment and to the available colonic carbon sources, and to a persistent increase in the viability of Bifidobacterium in the intestinal environment [18,19]. The strains isolated from the feces of newborn infants are resistant to nalidixic acid and trimethoprim-sulfamethoxazole (cotrimoxazole). These antibiotics are used as selective agents in synthetic media for the isolation and enumeration of Bifidobacteria [20]. Differentiation of Bifidobacterium species is based on the fermentation of carbohydrates. Indeed, Bifidobacterium longum NCC 2705 has genes coding for fumarase, oxoglutarate dehydrogenase, and malate dehydrogenase. These enzymes allow the degradation of several sugars (arabinose, xylose, ribose, cellobiose, melibiose, maltose, raffinose and mannose) [21]. Comparison with the fermentation profiles described by Scardovi [7] and Tamime et al. [22] led to the identification of two species of Bifidobacterium: breve, which includes strains BR1 and BR2, and longum, which includes BL1 and BL2. The degradation of carbon substrates by Bifidobacteria leads to the formation of two acids (lactic and acetic), which lowers the pH of the medium. This shift in pH has no influence on the growth of these bacteria during 24 hours of incubation [23]. pH affects microbial growth through enzyme activity and the permeability of the cell to certain nutrients, which depends on the ionic balance. The results showed that at pH 4, 4.5 and 5 no growth was observed. Bifidobacteria are generally sensitive to pH values below 4.6 [24]. Matsumoto et al. [25] indicated that the tolerance of Bifidobacterium longum, Bifidobacterium adolescentis and Bifidobacterium pseudocatenulatum to acidic pH values is limited, with a significant decrease in the viability of the strains in a medium at pH 3 after only one hour of incubation. Moderate growth with a significant slowdown is obtained for the strains isolated from feces and from the saline rehydration solution in the medium at pH 8, and total inhibition in the medium at pH 8.5. Optimum growth for the different strains is obtained in the medium at pH 6.5. These results confirm that Bifidobacteria prefer neutral or slightly acidic pH values, between 5 and 8, as reported by several authors [26][27][28][29]. The incubation temperature acts on transport systems across the membrane and can therefore disrupt cellular metabolism. The results of the growth of the Bifidobacterium strains in MRS-cysteine medium, incubated at various temperatures, show variability in behavior among species and strains.
Indeed, at incubation temperatures of 4 °C and 25 °C there is complete inhibition of growth, whereas the growth of all strains is better at incubation temperatures of 30 °C and 37 °C. At a temperature of 45 °C, there is a total absence of growth of the three strains of Bifidobacterium isolated from the feces of newborn infants. This behavior can be explained by the fact that the strains are of human origin and cannot withstand high temperatures. The same results were reported by several authors [7,9,30,31], while moderate growth is observed for strain BR2, isolated from the saline rehydration solution. These observations show, first, that the incubation temperature is an important parameter that can affect the growth of Bifidobacteria and, second, confirm that Bifidobacteria of human origin are mesophilic bacteria with optimal growth temperatures of 30-37 °C that can nevertheless adapt to higher temperatures. Although Bifidobacteria were introduced into the dairy industry more than 20 years ago in technologically advanced countries, this is, by contrast, not yet possible in some other countries such as Algeria. This situation is related to the constraints posed by the genus Bifidobacterium, which is very sensitive to the acidity that develops in milk and to the aerobic conditions that prevail there. Production of acidity depends on several parameters: the incubation temperature, the physiological state of the bacteria, the inoculum concentration, and the milk used. Evaluation of the titratable acidity, the pH and the bacterial counts during growth of the strains in pure culture also reveals a significant difference between the Bifidobacterium strains used (BL1, BL2, BR1, and BR2); likewise, Martinez-Villaluenga and Gomes [32] observed that the growth rate and generation time of bifidobacteria in UHT milk vary among strains. After 6 h of incubation it is observed that the strains belonging to Bifidobacterium longum (BL1, BL2) are more acidifying than the strains belonging to Bifidobacterium breve (BR1, BR2). Differences in acid production by strains of bifidobacteria have been reported by several authors [32][33][34]. These differences in the behavior of Bifidobacterium strains in different milks may be due partly to the composition of the milk and also to proteolytic activity that varies from strain to strain. Survival of Bifidobacteria remains low. However, this survival may be significantly improved by the addition of indigestible substances commonly known as "prebiotics". Among the prebiotics mainly used in the food industry are inulin, oligofructose and bee honey [35]. On the other hand, another factor that appears to inhibit the growth of Bifidobacteria in milk medium is oxygen; however, the degree of tolerance to this factor depends on the species and the culture medium, and to remedy this problem the addition of a reducing agent such as cysteine hydrochloride seems to be effective. Conclusions According to the results of the phenotypic characterization, the strains (BL1, BL2, BR1 and BR2) isolated from breast-fed infants and from the saline rehydration solution belonged to two species of Bifidobacterium (longum and breve). The strains BL1 and BL2 have shown some technological properties, such as tolerance to high temperature (42 °C), acid production, resistance to antibiotics and their growth kinetics in skim milk, which suggests their possible use in the food industry. However, more studies are needed to assess these strains with respect to human health.
2019-03-19T13:04:39.162Z
2012-10-28T00:00:00.000
{ "year": 2012, "sha1": "831cc183832165add0e0971a7f17dc9fa52e8218", "oa_license": "CCBYNC", "oa_url": "http://www.davidpublisher.org/Public/uploads/Contribute/568103e46c4d1.pdf", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "7f46808b0284907b33ed93ae3bc1c4e522c95083", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
236882712
pes2o/s2orc
v3-fos-license
Nonuniform-temperature effects on the phase transition in an Ising-like model In this study, we investigate the spatially nonuniform-temperature effects on the QCD chiral phase transition in the heavy-ion collisions. Since the QCD effective theory and the Ising model belong to the same universality class, we start our discussion by mimicking the QCD effective potential with an Ising-like effective potential. In contrast to the dynamical slowing down effects which delays the phase transition from quark-gluon-plasma to hadron gas, the spatially nonuniform-temperature effects show a possibility to lift the phase transition temperature. Besides, both the fluctuations and the correlation length are enhanced in the phase transition region. Furthermore, the critical phenomena is strongly suppressed like as the critical slowing down effects. The underlying mechanism is the nonzero-momentum mode fluctuations of the order parameter induced by the nonuniform temperature. Our study provides a method to evaluate the nonuniform-temperature effects, and illustrate its potential influence on analyzing the QCD phase transition signals at RHIC. I. INTRODUCTION Exploring the QCD phase boundary and the critical point (CP) is one of the main goals at the Relativistic Heavy Ion Collider (RHIC) [1][2][3][4][5]. In the collider, a fireball forms quickly and then cools down. The QCD matter inside undergoes a phase transition from quark-gluon-plasma (QGP) to the hadronic phase. These two phases are separated by a dynamical phase transition surface in the fireball [6][7][8][9][10]. Outside the surface, the hadrons and resonances scatter with each other and part of them decay. The inelastic collision between the hadronic matter finally ceases at a hypersurface named chemical freeze-out surface which is nested outside the dynamical phase transition surface [11]. The main experimental measurement related to the phase transition signals are event-byevent fluctuations of chemical freeze-out particle multiplicities [5]. Searching the phase boundary and the CP from the dynamical process at RHIC, we have to face two basic questions. Does the dynamical phase transition boundary coincide with the equilibrium phase transition boundary in the QCD phase diagram? Are the critical behaviors kept to identify the CP? Recent studies show that the chemical freeze-out line fitted from experimental data overlaps with the equilibrium phase transition boundary depicted by lattice calculation [12][13][14][15][16][17][18][19][20]. It strongly hints that the dynamical phase transition inside the fireball may happen at a temperature above the equilibrium phase transition temperature so that the hadrons have enough time to freeze out (see the sketch of an instantaneous fireball in Fig. 1). This cannot be predicted by the dynamical delay effects, where the dynamical phase transition follows and memorizes the behaviors of equilibrium phase transition [21][22][23][24][25]. On the other side, the fluctuations and the correlation length of the QCD order parameter (i.e., the σ field) have been broadly applied in calculating the fluctuation behaviors * lijia.jiang24@gmail.com FIG. 1. A sketch of an instantaneous fireball. The temperature decreases from inner to outer (red to blue). The black dashed line is the chemical freeze-out surface. The green brush line refers to the isothemal surface of the equilibrium phase transition (PT) temperature in temperature-uniform systems. 
These two lines overlap with each other according to the lattice results and experimental data. The red solid circle represents the dynamical phase transition surface at higher temperature. of observables such as net charge, baryon number and particle ratios [4,5,26,27]. The correlation length has been estimated to be about 3 fm near the CP by including the finite size effect and the critical slowing down effect [21,28]. Yet how the spatially nonuniform temperature affects the QCD phase transition at RHIC, such as the phase transition point, the fluctuations, and the correlation length, remains unclear. In this paper, we investigate the spatially nonuniform-temperature effects on the QCD phase transition in a fireball context. Note that both the position and the shape of the dynamical phase transition surface vary with time during the fireball evolution. As shown in Fig. 1, we take an instantaneous slender brick cell in the fireball with the phase boundary located in the middle. In the brick cell, the temperature is spatially nonuniform. The phase transition region, where the dynamical slowing effect is magnified, is just a narrow part of the brick cell; therefore, we simplify our discussion by further supposing that the relaxation time of the σ field configurations in the whole brick cell approaches zero. As a result, the σ field reaches its stationary distribution instantly. With this Markov assumption, the instantaneous dynamical phase transition surface turns into the stationary phase transition surface in a steady temperature-nonuniform system, and no dynamical slowing effects are taken into account. For the brick cell, we calculate the stationary solution of the σ field, and deduce and discuss the corresponding fluctuation strength and correlation length by mapping the QCD effective potential to the Ising model. Remarkably, we find the phase transition temperature in such a temperature-nonuniform system is above the equilibrium phase transition temperature T_c of a temperature-uniform system. This means that if the nonuniform-temperature effects are dominant at RHIC, hadrons may form at a temperature higher than the phase transition temperature determined by lattice calculations. Further, the fluctuations and the correlation length in the phase transition region are significantly increased compared with those in the periphery of the cell, providing a signal of the QCD phase transition. However, the CP cannot be identified from the phase transition scenarios, owing to the nonzero-momentum mode fluctuations of the σ field induced by the nonuniform temperature. The rest of the paper is organized as follows. In Sec. II, we introduce a tanh-type nonuniform temperature profile to the brick cell, with a finite temperature gradient in the phase transition region. In Sec. III, the probability distribution function of the order parameter field in the temperature-nonuniform system is developed. The Ising-like QCD effective potential is employed in the probability distribution function. In Sec. IV, the stablest order parameter profile with maximum probability is evaluated. In Sec. V and Sec. VI, the fluctuations around the stablest profile and the correlation length of the order parameter are calculated and analyzed, respectively. In Sec. VII, we show results with a more realistic temperature profile. In Sec. VIII, we summarize our main results and give further discussions and outlook. II.
TEMPERATURE PROFILE First, we start the discussion by formalizing the temperature profile in the brick cell as shown in Fig. 1. For simplicity, we suppose the y-z plane (the cross-section of the brick cell) is isothermal, and the temperature along the x-axis (the longitudinal direction of the brick cell) follows the tanh-type profile T(x) = T_c + (δT/2) tanh(x/w) (1), where δT is the temperature bias between the two ends of the cell. The width w refers to the range of the region near the equilibrium phase transition surface (x = 0) where a finite temperature gradient (∼ δT/2w) is present. Note that the real temperature profile is determined by the background matter fields such as quarks and gluons [29]. An example with a more realistic temperature profile is presented in Sec. VII; the results qualitatively agree with those from the tanh-type temperature profile. Nevertheless, the dynamical phase transition at RHIC is complicated; for example, the baryon chemical potential profile is also spatially nonuniform. In this study, we adopt the simplified temperature profile to focus our attention on the nonuniform-temperature effects in the phase transition region. In addition, we assume the baryon chemical potential is homogeneous in the cell. In our numerical simulation, the temperature bias for the temperature profile is set as δT = 40 MeV and the width is set to w = 1 fm (or w = 0.5 fm). The corresponding temperature gradient in the phase transition region is about 20 MeV/fm (40 MeV/fm), which is comparable to the gradient in a real fireball. For example, in a fireball of radius 10 fm with a central temperature of 200 MeV, the mean temperature gradient along the radial direction is 20 MeV/fm. III. PARTITION FUNCTION As the local equilibrium assumption has proved to perform well in relativistic hydrodynamics [29][30][31][32][33][34], we carry this assumption over into our calculation. Thus, the probability distribution function of the σ field in the temperature-nonuniform system is a product of the local probability distribution functions [26] at different positions r. In the continuous limit, the probability distribution function is P[σ] ∝ exp{−∫ d³r (1/T(r)) [ (∇σ(r))²/2 + V(σ(r)) ]} (2), with r = (x, y, z). The effective potential of the σ field can be obtained from different QCD-inspired models [17,[35][36][37][38][39][40][41][42][43][44]. For instance, in the linear sigma model coupled to constituent quarks, the effective potential (the grand canonical potential) of the σ field is obtained by integrating out the quarks [43]. Generally, the QCD effective potential in the CP regime can be Taylor expanded, V[σ] = Σ_n z_n (σ − σ_0)^n, where σ_0 is the minimum point of the σ field at the CP (µ_c, T_c). Since the QCD effective theory and the Ising model belong to the same universality class, we assume that the effective potential can be parameterized as V[σ] = h (σ − σ_0) + r (σ − σ_0)² + c (σ − σ_0)⁴ (3), where r is the reduced temperature and h is the magnetic field in the Ising model [45,46]. In the simplest linear mapping between (T, µ) and the Ising variables (h, r) [45][46][47][48], we have h = a∆T and r = b∆µ, where ∆T = T − T_c and ∆µ = µ − µ_c. Consequently, we obtain V[σ] = a∆T (σ − σ_0) + b∆µ (σ − σ_0)² + c (σ − σ_0)⁴, where a > 0, b < 0 and c > 0 are free parameters which can be constrained by QCD effective theories, lattice calculations or experimental data, etc. Note that in a general linear mapping, the linear transformation between (h, r) and (∆T, ∆µ) contains two mixing angles [46]. We omit these angles in this article for simplicity. Within the simplest mapping, the phase transition temperature is µ independent.
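To make the profile and the parameterized potential above concrete, here is a minimal numerical sketch. It assumes the tanh form written above, natural units with ħc ≈ 197.3 MeV·fm for converting MeV to 1/fm, and the parameter values (a, b, c, T_c) quoted in the next paragraph; it is an illustration, not the authors' code.

```python
import numpy as np

HBARC = 197.327   # MeV*fm; MeV <-> 1/fm conversion (an assumption of this sketch)

# Parameter values quoted in the next paragraph of the text
a, b, c = 0.5, -0.25, 3.6             # a [fm^-2], b [fm^-1], c dimensionless; sigma_0 = 0
T_c, delta_T, w = 160.0, 40.0, 1.0    # MeV, MeV, fm

def temperature(x_fm):
    """tanh-type profile in MeV; its gradient near x = 0 is ~ delta_T/(2w)."""
    return T_c + 0.5 * delta_T * np.tanh(x_fm / w)

def potential(sigma, dT, dmu):
    """Ising-like potential V = a*dT*s + b*dmu*s^2 + c*s^4 with s = sigma (all in 1/fm)."""
    return a * dT * sigma + b * dmu * sigma**2 + c * sigma**4

def mean_field(dT_MeV, dmu_MeV):
    """Global minimum of V (mean-field expectation value) and the local correlation length."""
    dT, dmu = dT_MeV / HBARC, dmu_MeV / HBARC
    s = np.linspace(-1.0, 1.0, 20001)
    s_min = s[np.argmin(potential(s, dT, dmu))]
    m2 = 2.0 * b * dmu + 12.0 * c * s_min**2          # curvature of V at the minimum
    return s_min * HBARC, 1.0 / np.sqrt(m2)           # sigma in MeV, xi in fm

# Reproduces the numbers quoted in the text:
print(mean_field(dT_MeV=20.0, dmu_MeV=0.0))    # sigma ~ -30 MeV, xi ~ 1 fm
print(mean_field(dT_MeV=0.0, dmu_MeV=200.0))   # sigma ~ +/-37 MeV, xi ~ 1 fm
```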
For ∆µ ≡ µ − µ_c > 0 and ∆µ < 0, the effective potential describes a first-order phase transition and a crossover, respectively, as the temperature changes. Throughout the article, we set a = 0.5 fm⁻², b = −0.25 fm⁻¹, and c = 3.6. These values are chosen by constraining the correlation length and the expectation values of the σ field to reasonable ranges, as explained below. The phase transition temperature is set to T_c = 160 MeV, which is close to the lattice simulation result [17,18,25]. In this parameter setting, for ∆µ = 0 and ∆T = ±20 MeV, the minimum point of the σ field (i.e. the expectation value in the mean-field approximation) is σ = σ_0 ∓ (a∆T/4c)^(1/3) = σ_0 ∓ 30 MeV and the correlation length of the σ field is about 1 fm (which is a natural value of the correlation length away from the CP [21,49]). For ∆T = 0 and ∆µ = 200 MeV, the expectation value is σ = σ_0 ± √(−b∆µ/2c) = σ_0 ± 37 MeV and the correlation length of the σ field is 1 fm. Different choices of the values of (a, b) are equivalent through rescaling the magnitudes of ∆T and ∆µ. The empirical value of σ_0 at the CP is around 45 MeV [43,44]. Since the value of σ_0 will not influence our discussion of the fluctuations and correlation length, we simply set σ_0 = 0 in the following. Then, in thermal equilibrium, σ < 0 and σ > 0 correspond to the QGP phase and the hadron phase, respectively. IV. THE STABLEST ORDER PARAMETER PROFILE In this section, we figure out the stablest order parameter profile, which maximizes the probability. Since the temperature is spatially nonuniform, the local order parameter which maximizes the probability distribution function is no longer determined by minimizing the effective potential, ∂V[σ]/∂σ = 0, but satisfies the extreme value condition δP[σ]/δσ = 0. Explicitly, we have δP[σ]/δσ = P[σ] (1/T) [ ∇²σ − (1/T)∇T · ∇σ − ∂V[σ]/∂σ ]. (4) The formula in the bracket vanishes in the extreme value condition δP[σ]/δσ = 0. Therefore, we have ∇²σ − (1/T)∇T · ∇σ = ∂V[σ]/∂σ. (5) As we have supposed that the temperature distribution in the y-z plane is isothermal, the σ(r) that maximizes the weight function must be flat in this plane. Thus σ(r) depends only on x, and Eq. (5) reduces to a one-dimensional problem. The boundary condition is given by the local order parameters at the ends, i.e., σ(x = −L/2) = σ_L and σ(x = L/2) = σ_R, where σ_L and σ_R are the global minimum points of the potential V[σ] at x = ∓L/2 and L is the cell's length. Note that when L is sufficiently large, i.e., L ≫ w, the magnitude of L will not influence the following results. The solution σ_c(x) to Eq. (5) is presented in Fig. 2, with w = 1 fm and different ∆µ. The main information from these order parameter profiles is that σ_c(x) changes its sign at x > 0, no matter the sign and magnitude of ∆µ. It is easy to check that, without the temperature gradient term (1/T)∇T · ∇σ, the solution σ_c(x) is an odd function of x and vanishes at x = 0. As the (1/T)∇T · ∇σ term is always negative (∂_x σ < 0 and ∂_x T > 0), it will always contribute similar corrections to the solution σ_c(x), and the sign change of σ_c(x) will universally happen at x > 0. This result can be understood directly from the probability distribution function, Eq. (2). In the brick cell, the hot part with high temperature fluctuates more easily than the cold part. Therefore, σ_c(x) will tend toward the order parameter value of the cold part, and σ_c(0) becomes positive. As in the equilibrium phase transition of the Ising model, we identify the point of sign change of σ_c as the phase transition point at different ∆µ.
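A minimal numerical sketch of how the stablest profile σ_c(x) solving Eq. (5) can be obtained by finite differences and damped relaxation is given below. The discretization, the relaxation scheme, the box size (L = 10 fm) and the example value ∆µ = 100 MeV are choices made here for illustration; they are not taken from the paper.

```python
import numpy as np

HBARC = 197.327                        # MeV*fm; MeV <-> 1/fm conversion (assumption of this sketch)
a, b, c = 0.5, -0.25, 3.6              # potential parameters from the text (sigma_0 = 0)
T_c, delta_T, w = 160.0, 40.0, 1.0     # MeV, MeV, fm
dmu = 100.0 / HBARC                    # example: first-order side, Delta mu = 100 MeV

def T_profile(x):
    """tanh-type temperature profile, returned in 1/fm (the exact form is an assumption)."""
    return (T_c + 0.5 * delta_T * np.tanh(x / w)) / HBARC

def dV_dsigma(sigma, dT):
    """Derivative of V = a*dT*s + b*dmu*s^2 + c*s^4 with respect to sigma (s = sigma)."""
    return a * dT + 2.0 * b * dmu * sigma + 4.0 * c * sigma**3

def end_point(dT):
    """Global minimum of V at fixed Delta T, used as a boundary value."""
    s = np.linspace(-1.0, 1.0, 4001)
    return s[np.argmin(a * dT * s + b * dmu * s**2 + c * s**4)]

# Grid for a cell of length L = 10 fm
L, N = 10.0, 401
x = np.linspace(-L / 2, L / 2, N)
dx = x[1] - x[0]
T = T_profile(x)
dT = T - T_c / HBARC                   # local Delta T in 1/fm
dT_dx = np.gradient(T, dx)

# Initial guess: straight line between the two boundary minima
sigma = np.linspace(end_point(dT[0]), end_point(dT[-1]), N)

# Damped relaxation of Eq. (5):  sigma'' - (T'/T) sigma' = dV/dsigma
for _ in range(100000):
    lap = (np.roll(sigma, -1) - 2.0 * sigma + np.roll(sigma, 1)) / dx**2
    grad = (np.roll(sigma, -1) - np.roll(sigma, 1)) / (2.0 * dx)
    residual = lap - (dT_dx / T) * grad - dV_dsigma(sigma, dT)
    sigma[1:-1] += 1.0e-4 * residual[1:-1]      # boundary values stay fixed

# The sign change of sigma_c marks the phase transition point; per the text it sits at x > 0
x_pt = x[np.argmin(np.abs(sigma))]
print("x_PT ~ %.2f fm, T(x_PT) ~ %.1f MeV" % (x_pt, T_profile(x_pt) * HBARC))
```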
The phase transition point is always located at some position x_PT > 0 (see Fig. 2), and the corresponding phase transition temperature T(x_PT) is generally higher than the equilibrium phase transition temperature T_c = T(x = 0). Note that the phase transition temperature at the phase transition position x_PT can be evaluated from the temperature profile (1). In Fig. 3, we show the phase transition temperature for the two widths w = 1 fm and w = 0.5 fm. The phase transition temperature is lifted about 3 MeV and 8 MeV above T_c, respectively. A steeper temperature gradient leads to a higher phase transition temperature. Note that the lifted values of the temperature are not universal and depend on the temperature profile. At RHIC, the spatial temperature profile is usually not of tanh type; thus, in Sec. VII, we consider a more realistic temperature profile fitted from the hydrodynamics output. The phase transition temperature is also lifted there, which qualitatively agrees with the result from the tanh-type temperature profile. We conclude that the nonuniform-temperature effects will change the phase transition temperature, and provide a possibility that the QCD phase transition happens at a temperature higher than the lattice T_c. In the following, we keep our discussion on the tanh-type profile and reveal how the temperature profile influences the fluctuations and correlation length. V. THERMAL FLUCTUATIONS In this section, we study the fluctuation behaviors of the σ field and show how they are influenced by the temperature profile. We express the σ field as the combination of the variational extremum solution and a small fluctuation, σ(r) = σ_c(x) + δσ(r). Then the probability distribution function P[σ], up to fourth order in the fluctuation, factorizes into a zeroth-order piece, a linear piece, and quadratic, cubic and quartic pieces in δσ. The first term is a finite number which depends on the profile σ_c, the second term equals 1 because δP/δσ vanishes for σ = σ_c [see Eqs. (4) and (5)], and the last three terms are contributions from the fluctuations. For the Ising-like potential (3), the quadratic (Gaussian) piece reads exp{−∫ d³r (1/T(x)) [ (∇δσ)²/2 + m²(x) δσ²/2 ]} (7), where the mass term m²(x) = ∂²V/∂σ²|_(σ=σ_c(x)) = 2b∆µ + 12c σ_c(x)² is spatially dependent. In this article, we mainly focus on the variance of the fluctuations, so we omit the cubic and quartic terms, which are of higher order in δσ and can be neglected in perturbation theory [26]. The cubic and quartic terms will be taken into account for the higher-order cumulants of the fluctuations [26,49,50]. Conventionally, we start the discussion from the mass term of the δσ field. Note that in a uniform system with temperature T, the correlation length is related to the mass of the σ field: ξ̄ = 1/m̄, where m̄² = ∂²V/∂σ²|_(σ=σ̄) ≥ 0 and the expectation value σ̄ is determined by the condition ∂V/∂σ|_(σ=σ̄) = 0. Similarly, for the nonuniform case, we define a local correlation length: ξ_local(x) = 1/√(m²(x)). We present the results for m²(x) in the brick cell in Fig. 4a). In the periphery, we have m² ≈ 1 fm⁻² and thus ξ_local ≈ 1 fm, which coincides with ξ̄ at temperature T = T(±L/2). This is due to the fact that the temperature becomes flat when the position is far from the center (|x|/w ≫ 1). In the central part, m²(x) presents exotic behaviors for the different phase transition scenarios. In the crossover regime (∆µ < 0), m²(x) > 0 everywhere. For the critical value (∆µ = 0), m²(x) vanishes at σ_c = 0, and the local correlation length ξ_local diverges. However, in the first-order phase transition regime (∆µ > 0), m²(x) is negative in the phase transition region, which is in contrast to the positive m̄² in a temperature-uniform system.
Therefore, the current definition of the local correlation length is not appropriate in the phase transition region with a finite temperature gradient. As we will show below, the variance of the local fluctuation δσ(x) is always positive and is better suited to the description of the temperature-nonuniform system. In the following, we calculate the variance of the fluctuation. We presume the size along the y and z directions is much smaller than the unknown correlation length. Therefore, we can adopt the zero-momentum mode approximation for the y and z directions, and thus δσ(r) depends only on x. The cross-section of the brick cell is denoted as S. Discretizing the x-axis with spacing length ∆x, the probability distribution function becomes a Gaussian in the discretized fluctuations, P[δσ] ∝ exp(−½ Σ_{i,j} δσ_i M_{ij} δσ_j), where the nonzero elements of the matrix M are the diagonal and nearest-neighbor entries obtained by discretizing the gradient and mass terms of Eq. (7). Here, 'i' refers to the position x = i∆x. The matrix M must be positive-definite so that the solution σ_c is guaranteed to maximize the probability distribution function. We would like to emphasize the necessity and importance of the kinetic energy in P[σ] (see Eq. (7)), which is nonzero and solves the negative m²(x) problem in the first-order phase transition scenario. This is because, in the brick cell, m²(x) constructs a potential well as shown in Fig. 4a), and the kinetic term has to be finite due to the uncertainty principle. For the same reason, at ∆µ = 0 the fluctuations at the CP are not divergent, owing to a positive ground energy of M. The nonzero kinetic energy represents the contribution from the nonzero-momentum mode fluctuations of the σ field, which play a crucial role in the temperature-nonuniform system. The variance of the local fluctuations is ⟨δσ_i²⟩ = [M⁻¹]_{ii}. In Fig. 4b), we plot the results for the variance for different w and ∆µ. [Fig. 4 caption: in both panels, the red, green and blue lines represent results in the crossover (∆µ < 0), CP (∆µ = 0) and first-order phase transition (∆µ > 0) scenarios, respectively; solid lines are results with w = 1 fm, and dotted lines are results with w = 0.5 fm.] Note that the maximum point of the variance is located a little to the right of the minimum point of m²(x), because the fluctuations in the right part of the cell are lifted by the higher temperature compared to the left (see Eq. (7)). Interestingly, the fluctuations at the phase transition point monotonically increase from the crossover (∆µ < 0) to the first-order phase transition (∆µ > 0). There are no exotic behaviors to characterize the CP (∆µ = 0). In addition, the fluctuations near the phase transition point are enhanced as the width w increases, for all three scenarios. This can be understood from the extreme case w → ∞, where the temperature is locally flat and the fluctuations near the CP become divergent. VI. CORRELATION LENGTH Now we calculate the correlation length near the phase transition point from the normalized nonlocal correlation G of the fluctuations about the maximum-variance point, where x_p = i_0 ∆x denotes the spatial location of the maximum point of the variance. Numerically, we have G(2j∆x) = [M⁻¹]_{i_0−j, i_0+j} / ([M⁻¹]_{i_0−j, i_0−j} [M⁻¹]_{i_0+j, i_0+j})^{1/2}. We plot the result in Fig. 5. The normalized nonlocal correlation does not decay exactly exponentially, so we determine the correlation length ξ by requiring G(ξ) = exp(−1). The ξ again smoothly increases from the crossover regime (∆µ < 0) to the first-order phase transition regime (∆µ > 0), and decreases as the temperature gradient increases. With the current parameter set, our estimate of the correlation length ξ is about 1.65 fm − 1.9 fm in the central part of the brick cell, which is significantly larger than ξ ≈ 1 fm in the periphery.
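One way to make the discretized Gaussian problem above explicit is sketched below: a tridiagonal matrix M is assembled from the kinetic and mass terms of Eq. (7), inverted, and used to read off the local variance and the normalized correlation. The placement of the 1/T(x) factors, the factors of S and ∆x, and the toy stand-in profile for σ_c(x) are all choices made here for illustration and are not taken from the paper.

```python
import numpy as np

def fluctuation_stats(sigma_c, T, dx, b_dmu, c=3.6, S=1.0):
    """Variance <delta sigma_i^2> and normalized correlation G for the Gaussian
    weight exp(-1/2 dsigma^T M dsigma); sigma_c and T are arrays in 1/fm."""
    n = len(sigma_c)
    m2 = 2.0 * b_dmu + 12.0 * c * sigma_c**2                 # local mass term m^2(x)
    link = S / dx * 0.5 * (1.0 / T[:-1] + 1.0 / T[1:])       # kinetic coupling on each link
    M = np.diag(S * dx * m2 / T)                             # mass term on the diagonal
    idx = np.arange(n - 1)
    M[idx, idx] += link
    M[idx + 1, idx + 1] += link
    M[idx, idx + 1] -= link
    M[idx + 1, idx] -= link
    Minv = np.linalg.inv(M)
    var = np.diag(Minv)                                      # local variance <delta sigma_i^2>
    i0 = int(np.argmax(var))                                 # maximum-variance position x_p
    j = np.arange(1, min(i0, n - 1 - i0))
    G = Minv[i0 - j, i0 + j] / np.sqrt(var[i0 - j] * var[i0 + j])
    return var, 2.0 * j * dx, G

# Toy stand-in for sigma_c(x) and T(x) (hypothetical numbers, in 1/fm)
x = np.linspace(-5.0, 5.0, 401)
dx = x[1] - x[0]
T = (160.0 + 20.0 * np.tanh(x)) / 197.327
sigma_c = -0.19 * np.tanh(x - 0.2)
var, sep, G = fluctuation_stats(sigma_c, T, dx, b_dmu=-0.25 * 100.0 / 197.327)

# Correlation length: first separation at which G drops below 1/e
xi = sep[int(np.argmax(G < np.exp(-1.0)))]
print("correlation length near the transition ~ %.2f fm" % xi)
```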
It's important to point out that for the critical value ∆µ = 0, the correlation length does not diverge and is strongly suppressed by the nonuniformtemperature effects. The suppression is comparable to that from the critical slowing down effects [21]. The magnitude of the correlation length will be further suppressed when the critical slowing down effects are included. VII. AN EXAMPLE WITH A MORE REALISTIC TEMPERATURE PROFILE In this section, we present the results with a more realistic temperature profile. The temperature profile is extracted from the hydrodynamic simulation on the fireball evolution at RHIC (after smoothening) [33]. We again set the temperature in the slender brick cell. The temperature profile is shown in Fig. 6, where the temperature at x = 0 is the phase transition temperature T (x = 0) = T c , the position x = 5 fm corresponds to the center of the fireball, and the position x = −5 fm represents the left boundary of the fireball. We keep all the other parameters unchanged, and further assume that the effective potential (3) is valid in the whole temperature region. For the current temperature profile, the corresponding extreme solution σ c (x) is shown in Fig.7. In this plot, we can find that the phase transition happens at x PT > 0, where T ≈ 168 MeV is about 8 MeV larger than T c . This qualitatively agrees with the result from the tanh-type temperature profile. In Fig. 8, we show the results of local mass square and the variance of the fluctuations. The local correlation lengths at x = ±5 fm are ξ local = 1/m ≈ 0.73 fm and 0.5 fm, respectively. The local correlation lengths also becomes ill-defined in the first-order phase transition region, since local mass square becomes negative when σ 2 c < −b∆µ/6c. The variance vanishes at x = −5 fm since T → 0 at the boundary of fireball. In Fig. 9, the normalized nonlocal correlation near the phase transition point is plotted. The correlation length at the phase transition point is about 1.45 fm, which is significant larger that ξ local at x = ±5 fm. The variance and correlation length with this temperature profile present similar behaviors as those in the case FIG. 8. Panel a) and b) present the local mass square and the variance of the fluctuating σ field, respectively. In both panels, the red, green and blue lines represents results in the crossover (∆µ < 0), CP (∆µ = 0) and the first-order phase transition (∆µ > 0) scenarios, respectively. of tanh-profile temperature. VIII. SUMMARY AND DISCUSSION In this article, we studied the nonuniform-temperature effects on the stablest order parameter profile, the fluctuations and the correlation length. Remarkably, we find that the phase transition temperature is generally ahead for different temperature gradients in our temperature profile settings. This hints at a possibility that if the nonuniform-temperature effects are manifest at RHIC, the hadrons may form at a tem-perature higher than the lattice T c . In addition, the phase transition region can be identified by the enhancements of both the fluctuations and the correlation length, and the enhancements decrease as the increase of temperature gradient. However, the uniqueness of the CP behaviors are wiped off. These novel phase transition behaviors inherit from the nonzeromomentum mode contribution of the order parameter induced by the nonuniform temperature distribution in space. 
Emphasize again that as the first attempt to discuss the nonuniform-temperature effects, we keep the model and parameter settings simple to manifest the main results from nonuniform-temperature effects. The real temperature profile as well as the baryon chemical potential profile at RHIC vary for different events at different time, and they are also affected by dynamical factors like the fluctuations, jet, and flow etc. Our use of the simplest Ising mapping for the QCD potential and the assumption of both uniform chemical potential profile and tanth-type temperature profile may be oversimplified for the fireball in RHIC. On the other hand, the higher order corrections of the order parameters in the QCD effective potential is also neglected (for example, the σ 5 term related to the h and r mixing as discussed in Ref. [46]). These approximations may induce uncertainty for our numerical results. Even in the Markov approximation within our assumption, statistical average over fluctuations in different temperature profiles is needed. Different parameter setting shows that both the phase transition temperature shift and the variance of the σ field qualitatively agree with each other. Therefore, the statistical average will not qualitatively change our conclusions within the current model setup. In the current treatment we simplify our model to the onedimension case by assuming the fluctuations in the cross sec-tion are frozen. For the spherical symmetry case, Eq.(5) can also be simplified to a one-dimensional differential equation by using the spherical coordinates. For a more complicated system related to the recent experimental data of net-proton fluctuations in Au+Au collisions [4], a full treatment of the three-dimensional differential Eq.(5) can be developed numerically. As for the phenomenological applications, we call attention to the nonuniform-temperature effects on modeling the dynamical phase transition during the fireball expansion at RHIC. The temperature gradients in the fireball is large, so the nonuniform-temperature effects should be significant but have been overlooked by now. Indeed, the nonuniform-temperature effects and the dynamical memory effects [22,45,[51][52][53][54][55][56][57] are two extreme cases corresponding to spatial correlation dominant and temporal correlation dominant, respectively. In our calculations, the enhancements of the fluctuations and correlations in the phase transition region show again the importance of the dynamical effects. The two effects are highly possible to interrelate with each other in the realistic fireball expansion. The combination of the nonuniform-temperature effects and the dynamical effects will provide a better description to the phase transition at RHIC. The inclusion of the nonuniformtemperature effects on the study of phase transition in the compact stars is also promising.
2021-02-23T02:15:35.809Z
2021-02-22T00:00:00.000
{ "year": 2021, "sha1": "d6ccec0310d5ceee30ba8577f2d4c27eb5793bcc", "oa_license": null, "oa_url": "http://arxiv.org/pdf/2102.11154", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d6ccec0310d5ceee30ba8577f2d4c27eb5793bcc", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
250928397
pes2o/s2orc
v3-fos-license
Effects of dietary protein levels on production performance, meat quality and flavor of fattening pigs This study aimed to evaluate the effects of dietary protein level on the production performance, slaughter performance, meat quality, and flavor of finishing pigs. Twenty-seven Duroc♂ × Bamei♀ binary cross-bred pigs (60.86 ± 2.52 kg body weight) were randomly assigned to three groups, each group has three replicates, and each replicate has three pigs. Three groups of finishing pigs were fed 16.0, 14.0, and 12.0% crude protein levels diets, and these low-protein diets were supplemented with four limiting amino acids (lysine, methionine, threonine and tryptophan). The results showed that the pigs fed low-protein diets increased (P < 0.05) loin eye muscle area, and reduced (P < 0.05) heart weight, lung weight. The feed-weight ratio of the 14.0% protein group was reduced (P > 0.05); Dietary protein levels significantly affected the luminance (L24h), yellowness (b45min and b24h) (P < 0.05), reduced shear stress, muscle water loss, drip loss, the levels of crude fat (P < 0.05), and increased marbling score (P < 0.05) in the muscle of finishing pigs; The low-protein diets improved PUFA/TFA, PUFA/SFA (P > 0.05), and increased hexanal, E-2-heptenal, 1-octen-3-ol, EAA/TAA in the muscle of finishing pigs (P < 0.05); The results indicated that reduced the crude protein levels of dietary by 2.0–4.0%, and supplementation with four balanced limiting amino acids had no significant effects on the production performance and slaughter performance of finishing pigs, and could effectively improve meat quality and flavor. Introduction Low protein amino acid balanced diet can meet the needs of livestock and poultry by reducing the protein level in the diet and adding free amino acids. It can improve the utilization rate of livestock and poultry feed protein, reduce production costs and reduce environmental pressure. Its application in the breeding industry is of great significance. Adding industrial synthetic amino acids to feed to appropriately reduce dietary protein . /fnut. . levels can effectively reduce feed costs, regulate gut microbiota structure, improve gut morphology, increase nitrogen utilization, reduce harmful gas emissions (ammonia), and can Improve gut health without compromising pig performance (1). Traditional corn-soybean meal diets typically have increased levels of soybean meal to meet the lysine requirements of pigs. This leads to excessively high dietary protein levels and low protein utilization. At the same time, animal undigested protein is excreted in large quantities through feces and urine, causing serious environmental pollution (2). In pig production, when the requirements of essential amino acids (EAA) and total nitrogen are met, the levels of crude protein in the diet could be reduced as protein requirements in pigs are essentially those of amino acids pig educed normal protein levels by 4% and balanced the dietary levels of EAA (3). Wang et al. found that a low-protein (13.5% CP) diet had no significant effect on growth performance, but significantly improved apparent nitrogen digestibility and nitrogen deposition rate, and significantly reduced nitrogen emissions in manure and urine in finishing pigs (4). Xu et al. found that neither a low-protein diet nor a high-protein diet had any significant effects on the backfat thickness and loin eye muscle area in finishing pigs, the low protein group has an increasing trend (5). Li et al. 
found that feeding a low-protein diet significantly increased the redness of muscle in finishing pigs, whereas feeding low-protein and very low-protein diets significantly reduced muscle shear force (6). Furthermore, lowprotein diets significantly improved the levels of intramuscular fat and monounsaturated fatty acids (MUFA) in the muscle of finishing pigs. Addition of alpha ketoglutarate to low-protein diets significantly increased the levels of intramuscular fat, oleic acid, and MUFA in the muscles of finishing pigs (7). Current literature on low-protein diets of finishing pigs has been focused mostly on growth performance, carcass traits, and meat quality indicators. However, only a few studies have evaluated the effects of low-protein, amino acid balanced diets on volatile compounds in the muscle of finishing pigs. Early research by our group found that reducing the dietary protein level of Du×Min crossbred finishing pigs can significantly affect the growth performance and slaughter performance, and have positive effects on improving meat quality and muscle nutrient composition (8). On the basis of previous study, this study evaluated the effects of various low-protein, amino acid balanced diets on production performance, slaughter performance, meat quality, and flavor in finishing pigs. The results provide a theoretical basis for the application of a low-protein, amino acid balanced diet in pig production. Ethics statement Experiments involving animals were carried out in accordance with regulations for the Administration of Affairs Experimental design and feeding management The experimental pigs were all selected from the parentgeneration pig farm of Gansu Agriculture and Animal Husbandry Fine Breeding Farm (Jingtai County, Gansu Province). The experimental animals were Duroc ♂ × Bamei ♀ binary cross-bred pigs that were close in parity, age, health status, and initial weight. A total of 27 Duroc heads were selected for the test ♂ × Bamei ♀ Binary cross bred pigs, which is 120 days old and weighs 60.86 ± 2.52 kg, and similar health status. They were randomly divided into the control group, group I, and group II. There were three replicates in each group and three pigs in each replicate. Conduct 5-day pre-test, and the formal test period is 60 days. The details of each experimental group are presented in Table 1. Dietary composition and nutrition level The reference index for finishing pigs with a body weight of 60.86 ± 2.52 kg was selected, based on the low-protein dietary recommendations in the "Piglet, Growing and Finishing Pig Compound Feed" standard, proposed by the China Feed Industry Association Group Standards Technical Committee in 2018. These recommendations were combined with the methods of Bai (9). Based on previously published data on low-protein diets with standardized ileal digestibility and supplemented amino acids; we designed three diets with crude protein levels . /fnut. . of 16.0, 14.0, and 12.0%, respectively. The composition and nutritional level of the basic diets for the experimental pigs are shown in Table 2. Feeding management The pigs were raised on the parent-generation pig farm of an agricultural and animal husbandry breeding farm in Gansu Province. The pig house was cleaned and disinfected before the study. During the study, pigs were dewormed and immunized, according to standard pig farm management protocols. The pigs had free access to feed and water. The pig house cleaned twice daily and disinfected once a week. 
During the test period, the amount of feed given and the amount remaining in each group were recorded every day, and the daily feed intake was calculated. After the experimental period, each pig was weighed on an empty stomach. Growth performance During the trial period, pigs were weighed before morning feeding at the beginning and end of the trial. Body weights at the beginning and end of the trial were recorded, and the average daily weight gain was calculated. Furthermore, the daily feed volume and the remaining volume in the feed trough were recorded, and the average daily feed intake was calculated. The feed-to-weight ratio was calculated from the average daily gain and average daily feed intake, according to the following formulas (a worked numeric example is given after this passage): ADG = (final weight − initial weight)/number of days fed; ADFI = TFC/(feeding days × number of feedings). [Table footnote: flavor AA/total AA (%) was 31.59 ± 0.11 a, 31.54 ± 0.10 a and 31.24 ± 0.14 b (0.020); #, umami amino acid; *, essential amino acid; umami amino acids included glutamate, arginine, aspartic acid, and glycine; sweet amino acids included glycine, serine, threonine, lysine, proline, and alanine; acid amino acids included glutamate, aspartic acid, and histidine; bitter amino acids included histidine, methionine, valine, arginine, leucine, phenylalanine, tryptophan, isoleucine, and tyrosine. Different letter superscripts in the same row indicate significant differences (P < 0.05); values in the same row without letter superscripts, or with the same letter superscripts, indicate no significant difference (P > 0.05). Crude protein levels of 16.0, 14.0, and 12.0% were fed to the control group, group I, and group II, respectively.] Slaughter performance Before slaughter, pigs were fasted for 24 h and allowed to drink water freely. One pig was randomly selected from each replicate in each group to be weighed and slaughtered. Three pigs were slaughtered in each group; thus, a total of nine pigs were slaughtered. Slaughtering and sampling were carried out in strict accordance with the "Good Practice for the Slaughtering of Livestock and Poultry - Pigs" (Chinese Standard GB/T 19479-2019). The warm carcass was weighed, and the ratio of carcass weight to live body weight was taken as the slaughter rate. The average back-fat thickness was measured according to the Chinese "Feeding standard of swine" (Chinese Standard NY/T 65-2004). The loin eye muscle area was measured as follows: the contour of the loin eye muscle was drawn on sulfite paper with a pencil, and the height and width of the muscle were measured with a vernier caliper. The area of the loin eye muscle was then calculated according to the following formula: height × width × 0.7 (10). The oblique length of the carcass was measured as follows: a hook was placed into the left hock of the carcass, which was then hung upside down, and the length from the leading edge of the pubic symphysis to the leading edge of the first rib and sternum was measured with a meter rule. Meat quality analysis was also performed, and the levels of inosinic acid, amino acids, fatty acids, and volatile flavor substances were determined as described below. Meat quality measurements The shear force tenderness was determined according to the methods outlined in "The Determination of Shear Force for Meat Tenderness" (Chinese Standard NY/T 1180-2006). Water loss rate (hydraulic) was determined according to the methods outlined in the "Determination of Meat Quality of Livestock and Poultry" (Chinese Standard NY/T 1333-2007).
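As flagged above, here is a worked numeric version of the growth-performance and loin-eye-area formulas. All input values are hypothetical, and the reading of the ADFI divisor ("feeding days × number of feedings") as days × pigs per replicate is an assumption.

```python
# Growth-performance and loin-eye-area calculations from the formulas above.
# All input numbers are hypothetical, not taken from the paper.

def average_daily_gain(final_wt_kg, initial_wt_kg, days_fed):
    """ADG = (final weight - initial weight) / number of days fed."""
    return (final_wt_kg - initial_wt_kg) / days_fed

def average_daily_feed_intake(total_feed_kg, days_fed, pigs_per_pen):
    """ADFI = TFC / (feeding days x number of pigs fed); the grouping of the
    divisor is an assumption made for this sketch."""
    return total_feed_kg / (days_fed * pigs_per_pen)

def feed_to_gain_ratio(adfi, adg):
    """Feed-to-weight (F:G) ratio, computed from ADFI and ADG as stated in the text."""
    return adfi / adg

def loin_eye_area(height_cm, width_cm):
    """Loin eye muscle area = height x width x 0.7 (formula given in the text)."""
    return height_cm * width_cm * 0.7

# Hypothetical pen of 3 pigs over the 60-day trial
adg = average_daily_gain(final_wt_kg=110.0, initial_wt_kg=61.0, days_fed=60)
adfi = average_daily_feed_intake(total_feed_kg=430.0, days_fed=60, pigs_per_pen=3)
print("ADG = %.2f kg/d, ADFI = %.2f kg/d, F:G = %.2f" % (adg, adfi, feed_to_gain_ratio(adfi, adg)))
print("loin eye area = %.1f cm^2" % loin_eye_area(height_cm=7.5, width_cm=9.0))
```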
Cooking loss, marbling, and pH were determined by standard procedures, previously published (12). Meat color was determined by a CR-10 meat color tester. Muscle nutrient content The moisture content was determined by the electric oven drying method at 105 • C (Chinese Standard GB 5009.3-2016). The crude ash content was determined using a muffle furnace at 550 • C (Chinese Standard GB 5009.4-2016). The crude protein content was determined by the semi-trace Kjeldahl method (FOSS Kjeldahl apparatus, Chinese Standard GB 5009.5-2016). The crude fat content was determined by the Soxhlet extraction method (Chinese Standard GB 5009.6-2016). Muscle inosinic acid content High performance liquid chromatography was used to evaluate inosinic acid levels (13), under the following conditions: a C18 column (4.6 × 250 mm, 5 µm) was used with a 260 nm UV detector. The column temperature was 25 • C; the injection volume was 10 µL; the flow rate was 1 mL/min, and the retention time was 35 min. Amino acid contents in muscles The levels of 17 amino acids were determined according to the methods outlined in the "Determination of Amino Acids in Food" (Chinese Standard GB/T 5009.124-2003). Content of fatty acids in muscles The levels of each fatty acid were determined according to the methods outlined in the "Determination of Fatty Acids in Food" (Chinese Standard GB 5009.168-2016). Volatile compounds in muscles The volatile substances in fresh meat were separated and identified by gas chromatography-massspectrometry (GC-MS) on an Agilent GC analyzer. Samples were pretreated as follows: 3 g of each specimen was placed into 20 mL bottles (no more than 1/4 of the capacity of the bottle); the bottle was capped, and heated for 40 min at 100 • C. A solid-phase microextraction needle was then used for a total of 30 min for extraction and manual sampling (This was followed by agitation at 250 • C for 10 min; cooling to 20 • C; and successive washing with methanol, ethanol, ether, n-hexane, deionized water, and again with methanol). During extraction, the needle remained in the injection port for 5 min (14). The gas phase conditions were as follows: an hP-5 ms GC column was used (30 m × 0.25 mm × 0.25 µm). Helium was used as a carrier gas at a flow rate of 1.0 mL/min without diversion. The inlet temperature was 250 • C, and the column temperature was 40 • C. The initial temperature was set at 35 • C for 2 min, after which the temperature was increased to 230 • C, at a rate of 5 • C/min, and maintained for 5 min (15). The conditions for mass spectrometry were as follows: electron ionization source energy, 70 eV; and doubling voltage 1,400 V. The temperature of the ion source and interface was 250 • C, and the scanning mass range (M/Z) was = 20-500, with an interval of 0.3 s (16). Data processing and analysis The Excel 2016 software was used for preliminary statistics and sorting of experimental data, followed by the SPSS 22.0 software for further analysis. One-way anova analysis of variance was evaluated, and the Tukey's multiple range test was used to assess significant differences between various pairs of means. P < 0.05 was considered significant. The results were all expressed as the mean ± standard deviation (SD). E ects of dietary protein level on growth performance of finishing pigs Compared with the control group (fed 16% crude protein), the ADFI of the 12% protein group was reduced by 4.85% (P > 0.05) ( Table 3). 
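The data-analysis subsection above specifies one-way ANOVA followed by Tukey's multiple-range test (P < 0.05) in SPSS 22.0. The snippet below reruns that comparison scheme in Python, with SciPy and statsmodels standing in for SPSS; the three replicate vectors are invented purely for illustration.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

# Hypothetical replicate means for one trait (e.g. ADG, kg/d) in the three diet groups
ctrl_16 = np.array([0.80, 0.83, 0.81])   # 16.0% CP (control)
grp1_14 = np.array([0.84, 0.82, 0.85])   # 14.0% CP (group I)
grp2_12 = np.array([0.76, 0.78, 0.75])   # 12.0% CP (group II)

# One-way ANOVA across the three groups (alpha = 0.05, as in the paper)
f_stat, p_value = stats.f_oneway(ctrl_16, grp1_14, grp2_12)
print("one-way ANOVA: F = %.2f, P = %.4f" % (f_stat, p_value))

# Tukey's test for pairwise differences between group means
values = np.concatenate([ctrl_16, grp1_14, grp2_12])
groups = ["16.0% CP"] * 3 + ["14.0% CP"] * 3 + ["12.0% CP"] * 3
print(pairwise_tukeyhsd(values, groups, alpha=0.05))
```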
The ADG in the 14% protein group was increased by 2.22% (P > 0.05), and that in the 12% protein group was reduced by 6.67% (P > 0.05). The ratio of feed to weight in group I was 2.34% lower (P > 0.05). Thus, a low-protein diet with a crude protein level of 14.0% could increase the ADG, reduce the feed-to-weight ratio, and improve the production performance of finishing pigs.

Effects of dietary protein levels on slaughter performance, organ weight, and intestinal pH in finishing pigs
Compared with the control group, the average back-fat thickness of the 14 and 12% protein groups was reduced by 1.49 and 13.82%, respectively (P > 0.05) (Table 4). Furthermore, skin thickness was reduced by 9.70 and 9.03% in the 14 and 12% protein groups, respectively (P > 0.05). The loin eye muscle area was increased by 20.03% in the 14% protein group (P < 0.05) and by 14.88% in the 12% protein group (P > 0.05). The heart weights in the 14 and 12% protein groups were reduced by 18.87 and 24.53%, respectively (P < 0.05), and the lung weights were reduced by 25.88% (P < 0.05) and 3.53% (P > 0.05), respectively. The pH values of the cecum in the 14 and 12% protein groups were reduced by 5.88 and 4.33%, respectively (P > 0.05). A low-protein diet supplemented with amino acids had little effect on the slaughter performance of finishing pigs but significantly increased the loin eye muscle area (P < 0.05). Furthermore, a low-protein diet supplemented with amino acids significantly reduced the heart weight of finishing pigs (P < 0.05). When the crude protein level was 14.0%, the lung weight of finishing pigs was significantly lower than that at a protein level of 16.0% (P < 0.05). No significant effects were observed on the weights of any other organs (P > 0.05).

Effects of dietary protein levels on the muscle quality of finishing pigs
Table 5 shows that, compared with the control group, the shear force in the 14% protein group was increased by 10.66% (P > 0.05), whereas the shear force in the 12% protein group was reduced by 15.85% (P < 0.05). In the 14% protein group, the water loss rate was significantly reduced (P < 0.05), and the cooking loss was reduced by 2.54% (P < 0.05). In the 12% protein group, the marbling score was significantly higher than that in the control group (P < 0.05). The drip loss in the 14% protein group was significantly reduced (P < 0.05). Compared with the control (16.0% protein) group, the shear force of the muscle in finishing pigs was significantly reduced when the crude protein level was 12.0%, and the water loss rate and cooking loss were significantly reduced when the crude protein level was 14.0%. Therefore, a low-protein diet supplemented with amino acids could improve the marbling and quality of meat. Compared with the control group, the meat color L 45min values in the 14% protein group were reduced by 9.30% (P > 0.05). The meat color L 24h values in the 14 and 12% protein groups were reduced by 15.33% (P < 0.05) and 6.62% (P < 0.05), respectively. The meat color b 45min values were reduced by 37.79 and 22.04% in the 14 and 12% protein groups, respectively (P < 0.05). Furthermore, both the 14 and 12% protein groups showed reductions in the meat color b 24h values, by 15.05 and 22.69%, respectively (P < 0.05). Based on the pH curve, at 24, 48, and 72 h after slaughter, both the 14 and 12% protein groups showed a slower decline in pH than the control group (fed 16% protein).
These findings show that less crude protein in the diet causes muscle pH to decline more slowly. The low-protein diet supplemented with amino acids was able to reduce the lightness (L* value) and yellowness (b* value) of the meat, slow the decline in pH, and improve meat quality.

Effects of dietary protein level on muscle nutrients and inosinic acid content in finishing pigs
Compared with the control group, crude fat in the 14% protein group was 21.05% lower (P < 0.05), and crude fat in the 12% protein group was 8.29% lower (P > 0.05) (Table 6). However, muscle moisture, crude protein, crude ash, and inosinic acid content showed no significant differences among the treatment groups (P > 0.05). Thus, a low-protein diet supplemented with amino acids can reduce the crude fat content of muscle but has no significant effect on other muscle nutrients.

Effects of dietary protein level on muscle amino acids of finishing pigs
Compared with the control group, the EAA levels in muscle were increased by 3.32 and 1.95% in the 14 and 12% protein groups, respectively (P > 0.05) (Table 7). Furthermore, EAA/TAA values were significantly increased, by 3.40 and 1.62% in the 14 and 12% protein groups, respectively (P < 0.05). In the 14% protein group, cystine levels were significantly reduced (P < 0.05), whereas the low-protein diet with supplemented amino acids showed no significant effects on any of the other 16 amino acids (P > 0.05). Glutamate levels and Glu/TAA values in the experimental groups showed an increasing trend, but the difference was not significant (P > 0.05). These results show that a low-protein diet supplemented with amino acids could increase EAA and glutamate levels and thereby improve the quality and flavor of the meat. Table 8 shows that, in comparison with the control group, the 14% protein group showed significantly reduced methionine + cystine values in muscle (P < 0.05). The methionine + cystine values in the 12% protein group were increased by 6.68% (P > 0.05). Isoleucine values in the 14 and 12% protein groups were increased by 9.74 and 8.22%, respectively (P > 0.05). Essential amino acid values in the 14 and 12% protein groups were increased by 3.32 and 1.93%, respectively (P > 0.05). Therefore, a low-protein diet supplemented with amino acids can improve the EAA values based on the Food and Agriculture Organization (FAO) amino acid scoring pattern.

Effects of dietary protein levels on muscle fatty acids in finishing pigs
Compared with the control group, the 14 and 12% protein groups showed significantly reduced levels of myristic acid and pentadecanoic acid (C15:0) (P < 0.05) (Table 9). In the 14% protein group, the palmitic acid content was significantly reduced (P < 0.05), whereas no significant difference was observed in the 12% protein group (P > 0.05). Fatty acids in both the 14 and 12% protein groups were significantly reduced (P < 0.05). Oleic acid content was significantly reduced in the 14% protein group (P < 0.05), whereas the 12% protein group showed no significant difference (P > 0.05). In the 14% protein group, linolenic acid content was significantly reduced (P < 0.05), whereas an increasing trend was noted in the 12% protein group (P > 0.05). In the 14% protein group, the levels of eicosatrienoic acid (C20:3) were significantly increased (P < 0.05). In the 12% protein group, levels of docosahexaenoic acid were significantly increased (P < 0.05).
In addition, TFA levels in the 14% protein group were significantly reduced (P < 0.05) but showed no significant difference in the 12% protein group (P > 0.05). In the 14% protein group, SFA/TFA values were reduced by 1.56% (P > 0.05). Furthermore, unsaturated fatty acid (UFA)/TFA values in the 14% protein group were increased by 1.09% (P > 0.05). The PUFA/TFA values in the 14 and 12% protein groups were increased by 15.53 and 4.45%, respectively (P > 0.05). Furthermore, the PUFA/SFA values in the 14 and 12% protein groups were 16.86 and 4.54% higher than in the control group, respectively (P > 0.05). These results show that a low-protein diet supplemented with amino acids could reduce the SFA content in the muscles of finishing pigs, increase UFA/TFA and PUFA/TFA values, reduce SFA/TFA values, and improve the quality and flavor of pork.

The effect of dietary protein level on the types and relative levels of volatile compounds in the muscle of finishing pigs
Tables 10 and 11 show that the baseline numbers of compounds in the muscle of finishing pigs are reflected by the values in the control group. A total of 197 volatile compounds were detected in the control group, 206 in the 14% protein group, and 197 in the 12% protein group. In the control, 14% protein, and 12% protein groups, 18, 24, and 27 aldehyde compounds, respectively, were detected. A reduction in dietary crude protein levels increased the types of aldehyde compounds in the muscle. The relative levels of aldehyde compounds in the 12% protein group were increased by 31.60% compared with the control group (P > 0.05). A total of 25, 33, and 26 alcohol compounds were detected in the control, 14% protein, and 12% protein groups, respectively. A reduction in protein levels increased the types of alcohol compounds in the muscle. The relative contents in the 14 and 12% protein groups were increased by 11.06 and 20.09%, respectively (P > 0.05). A total of 46, 38, and 15 ester compounds were detected in the control, 14% protein, and 12% protein groups, respectively. A reduction in protein levels reduced the types of ester compounds in the muscle. The relative content of the 14% protein group was increased by 38.38% compared with the control group (P > 0.05), whereas the relative content of the 12% protein group was significantly reduced (P < 0.05). A total of 11, 10, and 16 ketone compounds were detected in the control, 14% protein, and 12% protein groups, respectively. As the protein level was reduced, the types and relative levels of ketone compounds in the 14% protein group were reduced (P > 0.05) but were significantly increased in the 12% protein group (P < 0.05). A total of 54, 45, and 66 types of hydrocarbons were detected in the control, 14% protein, and 12% protein groups, respectively. As the dietary protein level was reduced, the relative levels of hydrocarbons in the muscles were first reduced and subsequently increased (P > 0.05). A total of 43, 56, and 47 other compounds were detected in the control, 14% protein, and 12% protein groups, respectively. A reduction in protein levels increased the number of other types of compounds, whose relative contents were first reduced and then increased (P > 0.05).

The effect of dietary protein levels on the relative levels of major volatile compounds in the muscle of finishing pigs
Table 12 shows that a low-protein diet supplemented with amino acids can significantly increase the hexanal content in the muscle of finishing pigs (P < 0.05).
Neither E-2-heptenal nor E-2-octenal was detected in the control group; however, these compounds were detected in both the 14 and 12% protein groups. The levels of E-2-heptenal and E-2-octenal in the 12% protein group were higher than those in the 14% protein group. These findings indicate that reducing protein levels could increase the levels of E-2-heptenal and E-2-octenal. In both the 14 and 12% protein groups, levels of benzaldehyde and phenylacetaldehyde were lower than those in the control group (P > 0.05). Compared with the control group, the levels of 1-octen-3-ol in the 14 and 12% protein groups were significantly increased (P < 0.05). Furthermore, in both the 14 and 12% protein groups, levels of 2-heptanone and 2,3-octanedione were increased compared with the control group. Levels of 2,3-octanedione in the 12% protein group were significantly increased compared with both the control group and the 14% protein group (P < 0.05). Compared with the control group, 2-pentylfuran levels were reduced in the 14% protein group but significantly increased in the 12% protein group (P < 0.05). Compared with the control group, levels of 7-methyl-7H-dibenzo[b,g]carbazole in both the 14 and 12% protein groups were increased.

Discussion

Effects of dietary protein level on the growth performance of finishing pigs
Studies have shown that the growth performance of finishing pigs is not affected by reductions in dietary protein levels with appropriate supplementation of amino acids (17). Adding 0.49% alanine and 1% tyrosine to a diet with a 12.52% protein level was beneficial to the growth performance of Duchangda three-way hybrid finishing pigs but did not significantly affect the ADFI, ADG, or feed-to-weight ratio (18). When the crude protein level of growing pigs was reduced, the addition of lysine had no significant effect on their performance (19). When the dietary protein level was reduced by 1-2% from a base of 14.8%, there was no significant effect on the growth performance or apparent nutrient digestibility of finishing pigs (20). The results of the current study are consistent with those of the aforementioned studies. The current study found that 14.0% crude protein could increase the average daily gain of finishing pigs and reduce the feed-to-weight ratio compared with the control (16.0% crude protein); however, the differences were not significant. The low-protein diets supplemented with amino acids had no significant effects on growth performance in finishing pigs.

Effects of dietary protein level on slaughter performance, organ weight, and intestinal pH of finishing pigs
Whether dietary protein levels can alter carcass and meat quality is controversial, but most scholars believe that a low-protein diet has no significant effect on the slaughter performance of finishing pigs once the diet is balanced by supplementing the essential amino acids; in that work, back fat and skin thickness were reduced by 15.6 and 11.5%, respectively, while the proportion of lean meat increased (21). Chen et al. fed three different protein levels (12, 14, and 16%) to DuYueba ternary hybrid pigs and showed that the different protein levels did not significantly affect the slaughter performance of finishing pigs (22). However, the carcass weight and lean meat percentage of the finishing pigs in their 14 and 16% protein groups were higher than those in the 12% protein group.
Furthermore, they reported that the back fat thickness and fat percentage were reduced with increasing protein level. In the present study, feeding a low-protein diet with supplemented amino acids reduced the back fat and skin thickness of finishing pigs. Differences in the results of the various studies could be attributed to differences in the pig breeds selected. Yang et al. found that appropriate reductions in dietary protein levels combined with amino acid supplementation had no significant effect on dressing percentage, loin eye muscle area, lean meat percentage, or fat percentage (23). Sobotka et al. found that the lean meat percentage of finishing pigs tended to increase after being fed a low-protein diet and reported an increase in the loin eye muscle area of 6.80% (24). Zhang et al. reported that the dressing percentage of their low-protein group was increased by 1.33% and the loin eye muscle area was increased by 5.72% (25). The current study showed that reducing the level of dietary crude protein can increase the loin eye muscle area of finishing pigs. The results of Chen et al. (7) showed that low-protein diets have no significant effects on the weights of the internal organs of finishing pigs. The current study found that a low-protein diet supplemented with amino acids can significantly reduce the weights of the heart and lungs but has no significant effects on the weights of other organs of finishing pigs.

Effects of dietary protein level on the quality of meat in finishing pigs
The results of studies on the effects of dietary crude protein levels on pork quality vary widely. Zhu et al. found that low-protein diets supplemented with amino acids can significantly reduce the shear force of the longissimus dorsi of finishing pigs, without any significant effects on muscle pH, meat color (L*, a*, or b* values), cooking loss, or drip loss (26). Zhang et al. reported no significant differences in meat color (L*, a*, and b*) or muscle pH between the normal protein group and the low-protein group (27). Goerl et al. reported that low-protein diets can increase the marbling score and improve muscle tenderness (28). Bidner et al. found that dietary protein levels had no significant effect on muscle color, pH, or drip loss (29). Ruusunen et al. found that feeding low-protein diets can reduce the pH of muscle in finishing pigs within 45 min and increase drip loss but showed no significant effects on muscle pH after 24 h (30). Li et al. fed finishing pigs diets with crude protein levels of 14 and 10% and found that low-protein diets had no significant effects on the color of the longissimus dorsi muscle, pH 45min, pH 24h, or drip loss (6). Zhang et al. found that the drip loss of finishing pigs fed a low-protein diet was significantly higher than that of pigs fed a diet with normal protein levels (27); however, neither pH 45min nor pH 24h differed significantly from the normal protein group. The current study showed that when the dietary crude protein level was 14.0%, the muscle water loss rate and drip loss were significantly reduced compared with the 16.0% protein group. When the protein level was 12.0%, the reduction in muscle shear force and the improvement in muscle marbling score were both significant compared with the 16.0% protein group. The 12.0 and 14.0% protein groups showed significantly reduced meat color L 24h, b 45min, and b 24h values, which improved the muscle quality of finishing pigs.
However, neither the 12.0% nor the 14.0% protein group showed any significant effects on muscle pH.

Effects of dietary protein level on the nutritional composition and inosinic acid levels in the muscle of finishing pigs
The nutrients contained in the meat of pigs determine the nutritional value of pork. Hu et al. found that the intermuscular fat content of pigs fed different levels of crude protein decreased as the protein level increased, while the crude protein content of muscle increased with increasing dietary protein level (31). These findings indicated that low-protein diets can reduce the fat content and crude protein content of muscle. Huo et al. established three protein levels (11.96, 13.04, and 14.16%) in three groups and reported no significant differences in dry matter, crude ash, crude protein, or inosinic acid levels in any group; however, the crude fat and cholesterol levels in the low-protein group were significantly reduced (32). Another study (33) found that with increasing dietary protein levels, the crude protein content in muscle also increases, whereas the content of crude fat and inosinic acid declines. Li et al. reported that dietary protein levels had no significant effects on inosinic acid levels in the muscle of different breeds of chickens (34); however, inosinic acid levels increased with reduced dietary protein levels. The current study showed that when the crude protein level in the diet was 14.0%, the crude fat content in the muscle was significantly reduced compared with the 16.0% protein (control) group. The crude protein level in the diet had no significant effects on the water content, crude ash, crude protein, or inosinic acid content of muscle. Amino acids are the basic units of protein molecules. Amino acid levels and composition have an important influence on meat quality and flavor. Among the various amino acids, the EAAs (lysine, phenylalanine, methionine, threonine, tryptophan, isoleucine, leucine, and valine) and umami amino acids (glutamic acid, arginine, aspartic acid, and glycine) are often used as indicators in the evaluation of muscle quality and flavor (35). Chen reported that dietary protein levels had no significant effects on the amino acid content of Duyueba three-way crossbred pigs, and differences in the levels of EAAs and umami amino acids were not significant (22). The current study found that low-protein diets supplemented with amino acids can significantly increase EAA/TAA values and increase the total amount of EAAs. Compared with the 16.0% protein group, no significant effects on flavor amino acid (FAA)/TAA values were observed when the crude protein level was 14.0%; however, FAA/TAA values were significantly decreased when the crude protein level was 12.0%. Glutamate plays an important role in improving the flavor of pork because of its buffering effect on unpleasant odors (36). The current study found that reduced dietary crude protein levels can also increase the levels of glutamate in the muscle, which can improve the flavor of pork. The amino acid score can be used as an index of the nutritional value of food: the higher the amino acid score, the better the nutritional value (37). The current study showed that low-protein diets supplemented with amino acids can improve the levels of EAAs in the muscle of finishing pigs. These findings indicate that appropriate reductions in dietary crude protein levels can increase the nutritional value of pork and improve meat quality.
Effects of dietary protein levels on fatty acids in the muscle of finishing pigs
Fatty acids include saturated fatty acids and unsaturated fatty acids. Excessive intake of saturated fatty acids in humans can cause diseases such as arteriosclerosis. Unsaturated fatty acids can have positive effects on human health, such as anti-cancer and lipid-lowering effects and the prevention of cardiovascular disease (38-40). Huo et al. found that low-protein diets can significantly reduce linolenic acid levels in muscle and increase arachidic acid levels but show no significant effects on other fatty acids (32). The current study showed that low-protein diets supplemented with amino acids can reduce linolenic acid levels in muscle. However, much lower dietary protein levels can increase linolenic acid levels in muscle, while low-protein diets can increase arachidonic acid levels. These results are consistent with those of Zhou et al. (41), who reported that low-protein diets can reduce stearic acid, palmitic acid, and linoleic acid levels and increase oleic acid levels in the longissimus dorsi muscle. The current study showed that low-protein diets can also reduce stearic acid, palmitic acid, and linoleic acid levels, as well as oleic acid levels, in muscle. The differences in results between studies could be attributed to differences in the selected pig breeds. Teye et al. found that low-protein diets tend to reduce palmitic acid and stearic acid levels in muscle, which is consistent with the results of the current study (42). Furthermore, low-protein diets had no significant effects on the levels of SFA, MUFA, or PUFA. Martinez-Aispuro et al. reported that SFA and MUFA levels in a low-protein group were higher than those in a comparatively high-protein group, whereas PUFA levels were reduced; however, the results of the current study differ from those findings (43). The current study showed that low-protein diets supplemented with amino acids reduced SFA/TFA and MUFA/TFA values but increased PUFA/TFA values. The differences in results could be attributed to differences in the pig breeds selected, diet composition, and feeding environment. PUFA/SFA values are also commonly used as meat quality evaluation indicators. Regarding human nutrition, meat PUFA/SFA values should be 0.45 or slightly higher (44), and the WHO recommends PUFA/SFA values >0.4 (45). The current study showed that a low-protein diet supplemented with amino acids could improve the PUFA/SFA values in the muscle of finishing pigs and improve meat quality.

Effects of dietary protein level on the types and relative content of volatile compounds in the muscle of finishing pigs
Aldehydes are the most important volatile components in pork and have the greatest effects on pork flavor. The odor threshold of aldehyde compounds is low, and they have strong, recognizable odors (46). The current study showed that, compared with the 16.0% protein group, dietary crude protein levels of 14.0 and 12.0% increased the types of aldehyde compounds in the muscles of finishing pigs. In addition, the crude protein level of 12.0% increased the relative levels of aldehyde compounds. The alcohol content in pork is also high, which contributes to its flavor. The current study showed that when the dietary crude protein level was 14.0%, more types of alcohol compounds were present than with the control diet (crude protein level of 16.0%).
Furthermore, the relative levels of alcohol compounds in the 14.0 and 12.0% protein groups were higher than those in the 16.0% protein group. Ester compounds have little effect on the flavor of pork, and no such compound is known to reflect the characteristic flavor of pork (47). The current study showed that the levels of ester compounds in the 16.0% protein group were higher than those in the 14.0 and 12.0% protein groups. Furthermore, the relative contents of ester compounds in the 16.0 and 14.0% protein groups were significantly higher than those in the 12.0% protein group. Most ketone compounds have floral or fruity aromas, and the longer the carbon chain, the stronger the floral characteristics (48). The current study showed that the types and relative levels of ketones in the 12.0% protein group were higher than those in the 16.0% protein group. Hydrocarbons have a high odor threshold and therefore have little direct effect on pork flavor. However, hydrocarbons are intermediates of heterocyclic compounds and still have basic effects on pork flavor (49). The current study showed that as the dietary crude protein level was reduced, the types and relative content of hydrocarbons were first reduced and then subsequently increased.

Effects of dietary protein level on the relative content of main volatile compounds in the muscle of finishing pigs
Hexanal has a grassy odor and is one of the most abundant aldehydes in meat (47). Other aldehyde compounds may have different flavors. The current study showed that a reduction in dietary crude protein levels increased the hexanal content in the muscles of finishing pigs and increased the E-2-heptenal and E-2-octenal levels. Alcohol compounds also play an important role in pork flavor. Among them, 1-octen-3-ol, which has the flavor of cooked mushrooms, is present at relatively high levels in meat and plays an important role in pork flavor (50). The levels of 2-heptanone and 2,3-octanedione among the ketones are relatively high. Furthermore, 2-heptanone has a fruity aroma, and 2,3-octanedione is a unique component of pork that imparts a characteristic aroma (51, 52). The current study showed that the levels of 2-heptanone and 2,3-octanedione increased as the protein level was reduced. Furans and nitrogen-containing compounds also play an important role in pork flavor. In addition, 2-pentylfuran has a bean and vegetable flavor (11, 53). The current study showed that as the level of dietary crude protein was reduced, the 2-pentylfuran content was first reduced and then increased. The low-protein diets supplemented with amino acids increased the levels of the nitrogenous compound 7-methyl-7H-dibenzo[b,g]carbazole. In addition, a low-protein diet supplemented with amino acids can increase the levels of characteristic flavor compounds.

Conclusion
In conclusion, compared with existing commercial feed nutrition standards, reducing the dietary protein level of finishing pigs by 2.0-4.0% had no significant impact on their growth and slaughter performance, yet it improved slaughter performance and meat quality to varying degrees and increased the number and content of volatile flavor substances in muscle, which had a positive impact on meat quality. Of the levels tested, a dietary protein level of 14.0% performed best.
Data availability statement
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
Isobaric 4-Plex Tagging for Absolute Quantitation of Biological Acids in Diabetic Urine Using Capillary LC–MS/MS Isobaric labeling in mass spectrometry enables multiplexed absolute quantitation and high throughput, while minimizing full scan spectral complexity. Here, we use 4-plex isobaric labeling with a fixed positive charge tag to improve quantitation and throughput for polar carboxylic acid metabolites. The isobaric tag uses an isotope-encoded neutral loss to create mass-dependent reporters spaced 2 Da apart and was validated for both single- and double-tagged analytes. Tags were synthesized in-house using deuterated formaldehyde and methyl iodide in a total of four steps, producing cost-effective multiplexing. No chromatographic deuterium shifts were observed for single- or double-tagged analytes, producing consistent reporter ratios across each peak. Perfluoropentanoic acid was added to the sample to drastically increase retention of double-tagged analytes on a C18 column. Excess tag was scavenged and extracted using hexadecyl chloroformate after reaction completion. This allowed for removal of excess tag that typically causes ion suppression and column overloading. A total of 54 organic acids were investigated, producing an average linearity of 0.993, retention time relative standard deviation (RSD) of 0.58%, and intensity RSD of 12.1%. This method was used for absolute quantitation of acid metabolites comparing control and type 1 diabetic urine. Absolute quantitation of organic acids was achieved by using one isobaric lane for standards, thereby allowing for analysis of six urine samples in two injections. Quantified acids showed good agreement with previous work, and six significant changes were found. Overall, this method demonstrated 4-plex absolute quantitation of acids in a complex biological sample. ■ INTRODUCTION Absolute quantitation of metabolites across samples is hampered by ion suppression, instrument drift, and matrix effects. 1 Chemical derivatization attaches a tag with properties beneficial for mass spectrometry (MS) analysis to a targeted functional group, often increasing ionization efficiency and throughput. 2 While tagging improves limits of detection and quantitation, increased matrix effects and excess reagent can suppress ionization. 3 Post-reaction cleanup steps (e.g., extractions) can mitigate these effects if there is an advantageous difference between the properties of the excess tag and the tagged analytes. For proteomics, a C18 or ion exchange column separates the tag from the tagged analytes. 4 The hydrophobicity of small derivatized metabolites is often dominated by the tag, making these sample cleanup methods challenging in metabolomics. Isotope tagging alleviates the problems of quantitation, throughput, and instrument drift. Isotope tagging methods are designed to create mass shifts between the same analyte across different samples. This mass shift can appear in the full scan (MS 1 ) in the case of mass shift tagging or upon fragmentation (MS 2 ) in the case of isobaric tagging. Isobaric labeling reduces spectral complexity by collapsing all tag variants to the same nominal mass in the MS 1 scan. 5 Each tag contains a balancer group, a reactive group, and a reporter group. The total number of isotopes across the balancer and reporter groups are equal for an isobaric set of tags. This causes co-isolation of all multiplexed samples and fragmentation of labeled analytes to produce unique, isotope-encoded reporters. 
6 The intensity of each reporter serves as an indicator of the relative amount of analyte in each sample. While the m/z of reporters produced by fragmentation are often independent of the precursor ion (TMT, 7 DiLeu, 8 iTRAQ, 9 and DiART 10 ), other tags produce characteristic neutral losses. 11,12 Pioneering work by the Reid group has proven the effectiveness of a stable cyclization to produce isotope-encoded reporters which remain attached to the analyte. 13−15 A trialkylsulfonium or quaternary alkylammonium neutral loss provides a site for simple isotope manipulation of the balancer group, 16,17 while the alkylamine chain allows for reporter isotopes to be synthetically incorporated ( Figure 1). 18,19 The energy requirements for fragmentation of the sulfonium group are lower than the quaternary amine in addition to being less dependent on analyte proton mobility. 16 This low energy barrier makes the sulfonium better suited for proteomics than metabolomics because the cyclization must compete with many amide bond fragmentations. Small polar metabolites rarely contain multiple amide bonds, which would reduce the penalty for using a quaternary amine over of a sulfonium moiety. Quaternary amines are synthetically desirable due to the simple attachment of isotopes using selective methylations of a primary amine with formaldehyde and/or methyl iodide. 20 The deuterated forms of these two reagents are widely available and provide a cost-effective alternative to 13 C and 15 N labeling. 21 These tags can be attached to metabolites using coupling reactions between a free amine on the tag and the acid group on metabolites. 22,23 Most isobaric labeling workflows have been developed for proteomics experiments, with limited examples of metabolomic analysis. 10,24−28 Differences in the hydrophobicity between peptides and polar metabolites require novel sample preparation workflows for tagging reactions. 29 Here, a synthetic method and labeling workflow are presented for a set of four isobaric tags. The MS 2 reporters are created through a neutral loss cyclization and are dependent on the tagged metabolite mass. Excess tag is scavenged and extracted to reduce ion suppression and column overloading, while a perfluoropentanoic acid sample modifier improves retention of double-tagged metabolites on a C18 column. This method is used for the multiplexed absolute quantitation of 37 acid metabolites in control and type 1 diabetic urine. Standard Stock Creation Metabolite standards were acquired from the MS metabolite library of standards (IROA Technologies, Sea Girt, NJ). Lactate, malate, succinate, fumarate, dimethyl glycine, benzoic acid, and 4formylbenzoic acid (4-FBA) were purchased from Sigma Aldrich (St. Louis, MO). Each metabolite was reconstituted to 100 μM in 5% MeOH, mixed, dried, and reconstituted to create a 1 μM mixed acid stock in dimethylformamide (DMF). All other reagents were purchased from Sigma Aldrich unless otherwise noted. Tag Synthesis Tert-butyl (3-aminopropyl)carbamate Isotopologues. 1,3-Diaminopropane (D 0 , D 2 , D 4 , or D 6 , CDN Isotopes, Point-Claire, Quebec, Canada) and tert-butyl phenyl carbonate were dissolved in ethanol, and the reaction mixture was refluxed overnight (Scheme 1). The reaction mixture was concentrated and reconstituted in water (25 mL), pH-adjusted to 3 with 2 M HCl, and washed with dichloromethane (DCM, 3× 40 mL). The aqueous phase pH was adjusted to 11 with 2 M NaOH and extracted with DCM in a separate collection flask. 
This organic layer was dried with Na 2 SO 4 and concentrated to afford the product as a yellow oil (30% yield). Tert-butyl (3-(dimethylamino)propyl)carbamate Isotopologues. tert-butyl (3-aminopropyl)carbamate (D 2 or D 4 ) was dissolved in acetonitrile, and 5 equivalents of formaldehyde (D 0 or D 2 ) (20% in H 2 O or D 2 O respectively) were added. 1.6 equivalents of sodium cyanoborohydride was added, and the reaction mixture was allowed to stir for 15 min before adjusting the pH to 7. The reaction mixture was stirred for 45 min, keeping the pH at 7 throughout. The mixture was then concentrated, and 4 mL of 2 M KOH was added and extracted with ether (3× 10 mL). The organic layer was washed once more with 4 mL of 0.5 M KOH, dried with Na 2 SO 4 , and concentrated to afford the product as a colorless oil (80% yield). 3-Amino-N,N,N-trimethylpropan-1-aminium Isotopomers. The D 6 3-((tert-butoxycarbonyl)amino)-N,N,N-trimethylpropan-1aminium isotopomers were dissolved in a 4:1 DCM:trifluoroacetic acid mixture and set to stir for 1 h, giving the desired products (100% yield). Each tag was observed in high isotopic purity using HRMS for exact mass analysis. All tags had an expected exact mass of 123.1763 Da and an observed m/z of 123.1764 ( Figure S1). The arrangement of deuterium atoms on each tag was verified by proton NMR ( Figure S2) and by tagging sarcosine and fragmenting to observe the corresponding reporter ions ( Figure S3). Expected exact masses for the D0, D2, D4, and D6 reporters of sarcosine are 129.10, 131.11, 133.13, and 135.14 Da, respectively. Observed m/z values for the respective reporters were 129.10, 131.11, 133.13, and 135.14. Liquid Chromatography LCMS grade water and acetonitrile were purchased from Fisher Scientific (Pittsburgh, PA). Capillary columns with photopolymerized integrated frits and emitter tips were fabricated in-house from fused silica capillary (Polymicro Technologies, Phoenix, AZ) with dimensions 17.5 cm × 50 μm and packed with 3 μm Atlantis T3 C18 particles (Milford, MA) as previously described. 30 Flow through the column was delivered by a Thermo Vanquish (Thermo Fisher Scientific, Waltham, MA) LC pump and autosampler connected to a stainless steel tee which acted as a flow splitter. The split was a 50 μm × 100 cm open capillary. The column was operated at a flow rate of 125 nL/min with an injection of 4 nL split from the bulk flow of 175 μL/min and injection of 6 μL. The flow rate was determined based on the experimental dead time and used to determine a split ratio of 1:1400. Mobile phase A was 0.1% formic acid in water. Mobile phase B was 0.1% formic acid in acetonitrile. The gradient was as follows: 0−1 min, 0% B; 12 min, 98% B; 14 min, 98% B; 14.1−25 min, 0% B. Creatinine Normalization Urine was obtained from Lee Biosolutions (Maryland Heights, MO) as single donor samples. All raw urine samples were centrifuged at 21,000 ×g for 10 min, and the supernatant was used for further analysis. Creatinine concentration was determined before performing the chemical tagging reaction for each urine sample using D 3 -labeled creatinine based on previous methods. 31 Urine samples were diluted 1000× and spiked to 9 μg/mL with D 3 creatinine. Separations of creatinine were performed on the same C18 capillary column as described for isobaric analysis. A parallel reaction monitoring (PRM) method was used to monitor the transitions of biological creatinine from m/z 114 to 86 and D 3 -labeled creatinine standard from m/z 117 to 89 ( Figure S4). 
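The creatinine normalization above amounts to a single-point isotope-dilution calculation: the endogenous creatinine PRM transition (m/z 114 to 86) is ratioed against the co-eluting D3-creatinine transition (m/z 117 to 89), scaled by the known 9 μg/mL spike, and corrected for the 1000× dilution. The following minimal sketch shows that arithmetic; the peak areas are hypothetical, and the assumption of an equal (1:1) response between labeled and unlabeled creatinine is ours for illustration, not a statement from the paper.

```python
# Single-point isotope dilution for urinary creatinine (illustrative values only).
# Assumes a 1:1 response between endogenous creatinine (m/z 114 -> 86) and the
# D3-creatinine internal standard (m/z 117 -> 89), i.e. equal peak areas imply
# equal concentrations in the diluted sample.

D3_CREATININE_SPIKE_UG_PER_ML = 9.0   # spike level stated in the Methods
URINE_DILUTION_FACTOR = 1000          # urine diluted 1000x before analysis

def creatinine_ug_per_ml(area_endogenous, area_d3_standard):
    """Estimate creatinine in the neat urine from the two PRM peak areas."""
    diluted_conc = (area_endogenous / area_d3_standard) * D3_CREATININE_SPIKE_UG_PER_ML
    return diluted_conc * URINE_DILUTION_FACTOR

# Hypothetical peak areas for one urine sample:
print(f"{creatinine_ug_per_ml(8.0e6, 3.6e7):.0f} ug/mL creatinine in neat urine")
```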
Derivatization
Urine was diluted with LC-MS water to match all creatinine concentrations as determined by the isotope analysis (Table S1). All samples were diluted to 277 μg/mL creatinine and spiked with 1 μM of 4-FBA. Ten microliters of each sample was dried by vacuum centrifugation and reconstituted in 100 μL of DMF. Four microliters of one of the isobaric tag stocks (250 mM) was added to each vial, followed by 1 μL of 500 mM HATU and HOBt with 2 μL of triethylamine. Separately, mixed standards were reacted with isobaric tag D2 (Scheme 1 and Figure 2). The reaction was shaken at room temperature for 70 min as previously described. 32 Two microliters of hexadecyl chloroformate was added to the vials to scavenge excess tag and biological primary amines. The samples were then mixed and spiked with 1 μM D2-tagged standards, as shown in Figure 2. The tagged analytes were Folch extracted by adding 2 eq of 2:1 CHCl3:MeOH, then 0.3 eq water to induce phase separation. The aqueous phase was removed, dried in a vacuum centrifuge, and then reconstituted to 10 μL with water containing 0.1% formic acid and 30 mM perfluoropentanoic acid (PFPeA).

MS Analysis
All samples were analyzed on a Q-Exactive Orbitrap (Thermo Fisher Scientific, Waltham, MA) coupled to a nanospray flex ion source operating in positive ionization mode. Spray voltage was set to 1.75 kV and capillary temperature to 200°C. MS fragmentation runs were performed with an inclusion list at MS 1 resolution of 35 K and automatic gain control (AGC) of 1e6 with a maximum injection time of 50 ms and a scan range of 140-800 m/z. The top 5 peaks triggered MS 2 events at a resolution of 17.5 K, AGC of 1e5, maximum injection time of 50 ms, isolation of 0.7 m/z, dynamic exclusion of 5 s, and normalized collision induced dissociation energy (nCID) of 35. Reporter consistency across each peak was determined by a PRM method with the same resolution, AGC, fragmentation energy, and isolation width as the data-dependent method.

Data Analysis
All data analyses were performed in R (version 4.0.5). Thermo .raw files were converted to .mzXML using MSConvert. 33 The findNL function from CluMSID was used to search for the expected isotope-encoded neutral losses using a 10 ppm mass window. 34 The intensity ratios between each biological sample and the internal standard reporter were taken at the peak of each precursor for quantitation. All concentrations in urine are reported as μM/mmol creatinine, and significant differences were determined using a two-tailed t test with p < 0.05 indicating significance.
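The quantitation arithmetic described above is a simple two-step ratio: each biological reporter intensity is divided by the co-isolated reporter of the spiked 1 μM D2-tagged standard, and the resulting concentration is normalized to urinary creatinine. A minimal sketch follows, written in Python for illustration even though the published workflow used R; the intensities and the creatinine value are hypothetical, and dilution factors between neat urine and the reaction mixture are deliberately omitted, so this is an assumption about the form of the calculation rather than the authors' code.

```python
# Reporter-ratio quantitation against the spiked 1 uM D2-tagged standard lane
# (illustrative sketch; intensities and the creatinine value are hypothetical,
# and dilution factors between neat urine and the reaction mixture are omitted).

CREATININE_MW_G_PER_MOL = 113.12

def absolute_conc_uM(reporter_sample, reporter_standard, standard_uM=1.0):
    """Analyte concentration from the sample/standard reporter intensity ratio."""
    return (reporter_sample / reporter_standard) * standard_uM

def per_mmol_creatinine(conc_uM, creatinine_ug_per_mL):
    """Normalize: uM divided by mmol/L of creatinine gives umol per mmol creatinine."""
    creatinine_mmol_per_L = creatinine_ug_per_mL / CREATININE_MW_G_PER_MOL  # (mg/L)/(g/mol)
    return conc_uM / creatinine_mmol_per_L

conc = absolute_conc_uM(reporter_sample=2.4e5, reporter_standard=6.0e5)  # -> 0.4 uM
print(f"{per_mmol_creatinine(conc, 277.0):.2f} uM per mmol creatinine")
```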
■ RESULTS AND DISCUSSION
The tag structure employs simple synthetic routes and cost-efficient reagents to produce four isobaric variants (Scheme 1). The quaternary amine serves as a site for isotope manipulation and provides a fixed positive charge to enhance signal response in positive mode analysis. High reporter isotope purity was maintained by minimizing the number of synthetic steps (Figure S3). The addition of many synthetic precursors with isotopic purity <100% would have produced degenerate signals, reducing the signal attributed to the expected tagged analyte and complicating data analysis. While coupling reactions have been optimized for fatty acid derivatization in nonpolar solvents with similar tags, 17,23 the primary amine reactive group must outcompete biological amines when working in polar extracts. The reaction was investigated in pooled urine to determine the concentration of tag required to produce reaction completion. A data-dependent analysis method (DDA) was used to monitor both analyte intensity and the number of precursor ions which produced the expected neutral loss. Both factors reached a maximum at 100 mM of tag in solution (Figure 3A,B). Reporters from the other three tags were searched to assess isotope overlap in a complex sample. The number of neutral losses attributed to tags that were not added was minimal, indicating high reporter purity (Figure 3C). These data confirm that the reaction is complete, with minimal isotope reporter overlap. Neutral loss of the trimethylamine group produced efficient reporters for most analytes (Figure 4A). Analyte-dependent fragmentation was observed for some metabolites with multiple amide bonds or tags attached (Figure 4B). Double-tagged analytes produce a mix of single and double ring formation. This produces two sets of reporters: one from neutral losses of 59-65 (double ring formation) and one from 29.5 to 32.5 (single ring formation). All neutral losses show similar ratios (Figure S5), which is consistent with reports of multiple reporter quantitation in proteomics. 35 For pantothenate, we observe multiple isotope-encoded reporters (Figure S6) due to competing fragmentation from the native amide bond and a nearby mobile proton. Despite this mixed fragmentation, the reporter is observed at an acceptable intensity for quantitation (relative standard deviation, RSD = 4.8%, R 2 = 0.997). Collision energy optimization found that an nCID value of 35 produced acceptable reporter intensities across a range of analytes (Figure 4C,D). Incorporation of deuterium is often avoided due to the potential for retention time shifts across tags. This is caused by the increase in polarity of deuterium compared to hydrogen, which causes earlier elution on reverse phase columns. 36 Previous work has shown that incorporation of the deuterium around a quaternary amine drastically reduces retention time shifts, allowing for the synthesis of cost-efficient tags. 37 Similarly, retention time shifts from these tags are not observed (Figure 4E,F). This provides consistent reporter ratios across each peak for tagged analytes mixed 10:5:2:1 (Figure 4G,H). The average ratio variance of the D0, D2, and D4 reporters across the top 50% of each peak was 1.6% for lactate and 3.9% for adipate. This indicates the strong quantitative potential of this system. Separation of tagged analytes on a reverse phase column was hampered by the extreme polarity of the quaternary amine tag. We have previously shown the ability of PFPeA to aid in the retention of quaternary amine tagged compounds. 37 PFPeA was added to the sample vial at a higher concentration (30 mM), in contrast to its previous use as a mobile phase additive. This resulted in dramatically increased retention of double-tagged acids (Table 1 and Figure 5), while using minimal PFPeA to avoid introduction of excess perfluorinated acids into the environment. Of note, separation and isolation of monomethylglutarate from its isomer 2-methylglutarate is aided by our tagging scheme. Monomethylglutarate is singly tagged, resulting in better retention and a larger m/z (10.8 min, 251 m/z) compared to double-tagged 2-methylglutarate (7.5 min, 178 m/z). Reaction and extraction of the excess tag using hexadecyl chloroformate improved analyte intensity by 23% on average (data not shown), and minimal hexadecyl-tag is observed in the extracted samples (Figure S7).
While a small amount of unreacted tag (m/z 123.2) is observed, it is typically excluded by the quadrupole to minimize nonproductive charges entering the C-trap in both MS 1 and MS 2 scans. Tagged acids were mixed 1:2:5:10 (500 nM to 5 μM) to assess the analytical performance of the developed method. Mixing ratios were repeated four times with varying concentrations attributed to each tag to ensure that analytical performance was not dependent on the isotopic variant of the tag. A linear response was observed across an order of magnitude, with an average linearity of 0.993 (Table 1). Samples were mixed 1:1:1:1 to determine the reproducibility across each isotope lane and produced an average signal intensity RSD of 12%. Additional sample handling and reactions with differentially synthesized tags can reduce reproducibility, but this is often recovered by improvements to quantitation and reduced batch effects. Intersample retention time repeatability was determined by triplicate injections of pooled urine tagged and spiked with a 1 μM analyte mix. Retention times across injections were extremely reproducible with an average RSD of 0.58%. These consistent retention times, in addition to the 2 Da reporter spacing, could enable robust MRM analysis on triple quadrupole systems with scheduled retention time windows for improved sensitivity. Taken as a whole, this method allows for the absolute quantitation of organic acids. This workflow was used to quantitate changes of acid metabolites in control and type 1 diabetic urine. All urine was normalized to creatinine prior to tagging, and quantitation was performed using the isotope ratio of a spiked standard reporter. Acquiring many untagged, isotopic metabolites is often unfeasible due to price and availability constraints. Here, all 54 nonisotopic standards were reacted with an isobaric tag, providing an isotope-encoded standard peak for every analyte. This large standard set minimizes sample to sample variation by accounting for matrix effects and instrument drift across injections. Quantitative data for 6 biological samples were obtained in two injections in 50 min of instrument time. All acid concentrations were compared to values from the human urine metabolome database, 38 with all but dimethyl glycine showing good agreement. A complete view of the quantified acids is presented in Table S2. Significant changes in six acids were observed from diabetic urine ( Figure 6). Multiple metabolites related to glomerular filtration rate are significantly altered, including dimethylglycine 39 and N-acetylphenylalanine. 40 In addition, decreases in medium chain fatty acids are associated with altered lipid metabolism. 41 P-hydroxyphenylacetic acid produced a large fold change outside our validated linearity and is considered to be largely dependent on gut microbiota and diet, 42 requiring further investigation. To this end, further study on the microbiome in diabetes is warranted. Monomethylglutaric acid was detected at similar concentrations as previous studies, 38 but has not been widely linked to diabetes. ■ CONCLUSIONS Here, we presented a synthetic route and LC−MS method for the 4-plex absolute quantitation of polar metabolites in urine. Each of the four tags was synthesized in minimal steps, producing isotopically pure reporters. Mindful tag design neutralized retention time shifts from deuterium incorporation, producing excellent quantitation. 
While some analyte-dependent fragmentation was observed for metabolites with a native amide bond, reporter generation was sufficient for quantitation. Retention of the polar, double-tagged analytes was drastically improved by adding PFPeA to the sample. Thirty-seven metabolites were quantified across six samples in under 50 min of instrument time. Six significant changes were observed, and quantified metabolites agreed well with previously published work. While isobaric labeling provides many benefits to throughput and quantitation, its adoption has been limited for metabolomics workflows. This may be due to the stochastic nature of data-dependent methods, limited MS 2 structural information, or simply a lack of cost-efficient methods. The presented tags provide a template for extremely cost-efficient multiplexing, but minimal metabolite identification in the MS 2 spectra due to their efficient reporter formation. These qualities made it ideal for targeted, high throughput methods. While the presented tags allow for simultaneous analysis of up to four samples, further 13 C or 15 N incorporation could produce additional reporters for higher levels of multiplexing on both low-and high-resolution instruments. Isotope modification around the quaternary amine is trivial, as commercially available variants of formaldehyde and methyl iodide are widespread. Additional synthesis or coupling reactions could also expand the number of organic acids and targeted functional groups to cover a wide range of metabolites and account for the structural heterogeneity of the metabolome. Characterization of the synthesized tags by LC-HRMS and NMR; HRMS 2 of tagged sarcosine, creatinine, tagged adipic acid, and tagged pantothenic acid; hexadecyl-tag extraction efficiency; creatinine analysis results, and all quantified acids in control and T1D urine (PDF)
Implementation of a Screening Program for Patients at Risk for Posttraumatic Stress Disorder INTRODUCTION Implantable cardioverter defibrillator (ICD) recipients who suffer from posttraumatic stress disorder (PTSD) are known to be associated with significant cardiac-specific mortality. Clinical observations suggest that PTSD is frequently undetected in ICD recipients followed up at electrophysiology (EP) outpatient clinics. Early recognition of PTSD is important to reduce the risk of serious manifestations on patient outcomes. METHODS All ICD recipients aged 19 years or older at the Washington University School of Medicine (WASHU) EP clinic, a large urban EP clinic, were invited to participate in the project. An informed consent letter with an attached primary care: posttraumatic stress disorder (PC: PTSD) survey was offered to the participants who met the inclusion criteria. Those who completed the survey were included in the project. Individuals with positive survey result were offered a referral to mental health services. Comparisons between PTSD and non-PTSD patients were done using a two-sample t-test for continuous variables. Using Fisher’s exact test, PTSD prevalence was compared to the study by Ladwig et al in which prevalence was determined as the proportion of patients with positive findings of PTSD (n = 38/147). All analyses were conducted using SAS v9.4. The proportion of patients having PTSD was determined and an exact 95% confidence interval was evaluated based on the binomial distribution. RESULTS Using a convenience sample, 50 ICD recipients (33 males and 17 females) were enrolled. The project had a 30-day outcome period. Nine (18%) of the 50 participants had positive PC: PTSD findings and all these nine participants were referred to a mental health specialist. The current project demonstrated an 18% (9/50) PTSD prevalence rate when compared to a 26% (38/147) prevalence rate in the study by Ladwig et al (P = 0.34). Although this project did not demonstrate 20% PTSD prevalence rate, as hypothesized, the 18% PTSD prevalence rate is consistent with previous research. CONCLUSION The prevalence of PTSD noted in the current project is consistent with previous research and validates underrecognition of PTSD in ICD patients. Offering a referral to all ICD recipients at EP clinic visits with a positive PC: PTSD screening to a mental health specialist is an important step in reducing the risk of serious manifestations on patient outcomes. Introduction Sudden cardiac death (SCD) in the United States is linked to more than 450,000 deaths annually 1 and contributes to more than 30% (17.1 million) of all cardiovascular mortality worldwide. 2 Malignant dysrhythmias such as ventricular tachycardia evolving into ventricular fibrillation cause two-thirds of SCDs. 3 An implantable cardioverter defibrillator (ICD) is the recommended intervention for this high-risk patient population due to its advantage over pharmacologic treatment, resulting in a significant surge of ICDs implanted. 4 Therapy has focused on the identification of high-risk individuals for SCD including those with low ejection fraction or history of malignant dysrhythmias. 5 The ICD is designed to detect and treat these malignant ventricular dysrhythmias through antitachycardia pacing or shock therapy restoring a sinus rhythm. 6 Survival rates have improved over the past 20 years with ICD therapy, 7 noting a 30%-50% decrease in mortality. 
6,8 The Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition (DSM-V), states that posttraumatic stress disorder (PTSD) is a condition that occurs in people who are unprotected against extreme stress or a traumatic life-threatening event, resulting in fear, helplessness, or horror. 9 A single ICD shock or an ICD storm (multiple consecutive ICD shocks) may lead to PTSD. 10 Patients who develop PTSD attempt to avoid reminders of the event yet have frequent thoughts of it. 11 They hold consistently adverse views and expectations about themselves or their environment and demonstrate a hyperarousal syndrome. 11 Symptoms that persist for more than 30 days and cause impairment in day-to-day functioning are classified as PTSD. 11 The DSM-V 11 reports a US lifetime PTSD prevalence rate of 8.7%. 12 The mechanism underlying the development of PTSD in ICD patients is not well documented. 9 It is not clear whether life-threatening dysrhythmias provoke PTSD or whether ICD shocks trigger and maintain PTSD. 9 With improved SCD survival rates, a higher post-recovery PTSD potential exists for ICD patients who have experienced life-threatening events. 13,14 As a result, there is a relationship between adverse cardiac events and subsequent traumatic symptoms. 15 Post-survival patients are susceptible to reexperiencing the cardiac incident or ICD shocks. PTSD sufferers commonly encounter flashbacks of medical interventions and dreams of cardiac arrest and surgical procedures. In addition, episode reminders result in avoidance of situations that cause tachycardia, such as sexual activity and exercise, and in arousal symptoms that trigger obsessions with heart rate, chest pain, or insomnia. 12 Many studies have suggested that an ICD increases the quality of life (QoL) for most recipients. 16 Although ICDs improve survival rates, 7 the severity of disease, comorbidities, underlying cardiac disease, life-threatening dysrhythmia, frequent ICD shocks or electrical storms, poor social support, younger age at implantation, gender, and/or a poor understanding of the therapy may increase anxiety, depression, and posttraumatic stress symptoms. 6,9,17 The incidence of PTSD is approximately 20% among ICD recipients with Type D personality, comorbidities, and frequent shock therapy. 9 According to Shiga et al, 9 ICD patients with Type D personality, a personality type characterized by high stress reactivity, may have an increased risk for developing anxiety. Ventricular dysrhythmias result in the provocation of anxiety among ICD recipients. 9,18 In addition, depression has been observed in approximately 30% of ICD recipients, and shock therapy may contribute to the persistence of depression. 9 While the recipient may perceive the ICD shock (particularly an ICD storm) as traumatic, it is important to note that the ICD population seems to differ from other PTSD populations in the development of PTSD symptoms. In non-ICD recipients, a subsequent trauma experience is not likely, whereas the ICD recipient lives with the realistic and tactile threat every second of every day. 10

Nursing Focus
Nursing is concerned with the diagnosis and monitoring of patients' responses to health problems, with health advancement and optimization, and with the prevention of disease. 19 Nurses serve a focal role in the development of interventions aimed at improving disease response, adaptation to the disease, and the ability to learn to live with chronic disease states.
Nurse scientists have published a great number of studies on the adaptation of patients following SCD, ICD implantation, and ICD shock therapy. 4 It is equally important to note that the vast majority of the behavioral and psychosocial interventions intended to enhance QoL post ICD implantation were developed and tested by nurse scientists. 4 Nursing interventions. Nursing interventions, aimed at decreasing the psychological stress of living with heart disease, have identified reductions in anxiety and depression. 20 Medical conditions such as severe depression and anxiety disorder can be diagnosed and treated. 6 Despite the acceptance of feelings as a significant part of the human condition, scientific knowledge of the effect of the clients' emotions on their ability to cope is limited. 21 Cardiac disease, including complications, will inevitably lead to a fundamental emotional reaction. 6 A defense mechanism is typically the primary emotional reaction linked to a healthy survival strategy. 6 Although the reaction demonstrates the patient's desire to develop healthy coping strategies, 21 it is clear that emotion affects the ways in which the patient copes with illness. Education. Despite the advantage of ICD technology in survival rates, patients with an ICD experience a major disruption in their lives. 22 Guidelines for nursing care have been published, highlighting that education is essential for ICD recipients. 23 The guidelines focus on the patient's understanding of his or her condition, functions of the ICD, implantation procedure, preoperative and postoperative care, restrictions on activities of daily living, and discharge instructions. 22 The underlying supposition is that the patient will process and understand the information received as well as adapt to daily life activities. 24 Review of literature An extensive literature search using the Center for Health Evidence (CCHNE.net), CINAHL, ClinicalTrials.gov, Cochrane, Embase, Guidelines.gov, Medline, PubMed, and OVID was undertaken to search for publications describing PTSD in ICD recipients. Each database was searched for the most current evidence-based data, randomized controlled trials (RCTs), systematic reviews including Evidence-based Practice Center and Health Technology Assessment reviews, and meta-analyses conducted between the years 2000 and 2015. Cohort or other prospective non-RCT designs were also considered. A total of 26 guidelines and systematic reviews arrived at diverse conclusions, provided different recommendations, and observed different effectiveness of therapies 25 regarding PTSD in ICD patients. However, many guidelines identified trauma-focused psychological treatments as a preferred method, viewing medications as an adjunct or a next-line treatment. 25 The range of participant inclusion criteria included the assessment of one of the following outcomes: PTSD symptoms, remission (no longer having symptoms), QoL, disability or functional impairment, or adverse events. Settings included outpatient and inpatient care, cardiovascular, electrophysiology (EP) clinic, primary care, and mental health care settings. Ladwig et al. 26 found that experiencing SCD outside of the hospital setting resulted in an even greater prevalence of PTSD (27%-38%) among ICD recipients. A total of 48.6% of the sample had clinically significant levels of PTSD at any one point in time. 26 ICD recipients with positive PTSD scores after device implantation were considerably more likely to have a shock storm. 27
These rates dropped significantly in the first six months after ICD implantation to 15% and remained stable at one year. 27 von Känel et al. 28 found a 31% prevalence of PTSD in ICD patients two years post implantation. At five and a half years post ICD implant, the PTSD prevalence had increased to 36%. 28 A total of 19% of the participants had PTSD at both assessments, 12% had PTSD at baseline, and 18% had PTSD at the follow-up visit. 28 Likewise, elevated PTSD scores were associated with a 3.2 times greater likelihood of mortality within five years compared with ICD patients with no to moderate symptom levels of PTSD, even after controlling for disease and demographic parameters. 26 Moreover, Ladwig et al. 26 reported that the relative mortality risk was 3.45 (adjusted for age, gender, diabetes, left ventricular ejection fraction, beta-blocker use, depression, and anxiety) in ICD recipients with PTSD (high Impact of Events Scale-Revised (IES-R) score) compared with those without PTSD. Ladwig et al. 26 found that prior ICD shocks had no influence on the experience of PTSD symptoms. In addition, Kapa et al. 27 found that ICD recipients with shocks and those without shocks differed only in their scores on the physical component of the Short Form-36 health survey (SF-36). Therefore, regardless of the occurrence of ICD shock, the experience of cardiac arrest, or being told of the potential threat, there is no evidence highlighting the incidence of PTSD in the first year after implantation. 27 Methods PTSD in ICD recipients is known to be associated with 55% cardiac-specific mortality. 29 It is important to recognize PTSD symptoms early in this patient group due to the high risk of mortality and morbidity and equally important to ensure that they receive appropriate care to reduce their risk of detrimental outcomes. Clinical observations and an exhaustive literature review suggest that PTSD is frequently undetected in ICD recipients followed up at EP outpatient clinics. There were no known studies at the time of this project that screened ICD recipients all-inclusively for PTSD symptoms with the PC: PTSD screen in the outpatient EP clinic, regardless of indication for the ICD implant. The focus of the project was to develop and implement a PTSD protocol using the PC: PTSD tool. In our project, similar to the Ladwig et al. 26 study, age and gender were evaluated. However, the baseline cardiovascular disease state and indication for ICD implant were excluded intentionally. The current project focused on screening all ICD recipients, regardless of ICD indication, with the PC: PTSD tool for symptoms of PTSD in the outpatient EP clinic. The focus of the Ladwig et al. 26 study was to evaluate PTSD symptoms at baseline and predict long-term mortality risk in patients with ICDs. Although the focus of the study was different, both studies were performed in a similar urban and suburban outpatient clinic and all participants were screened for PTSD symptoms. Our project evaluated PTSD prevalence, determined as the proportion of patients with positive symptoms of PTSD, and compared it accordingly. Based on the previous research by Ladwig et al., 26 we hypothesized that implementing the PC: PTSD screening tool with ICD recipients in an EP outpatient clinic would demonstrate a greater than 20% prevalence of PTSD. Following institutional review board approval from the University of Alabama in Huntsville, Alabama, and from Washington University School of Medicine (WASHU) in St.
Louis, Missouri, which approved a partial waiver of HIPAA authorization, the patients were screened for PTSD using the validated PC: PTSD tool (Appendix A). Patients older than 19 years of age, with an ICD, were eligible for inclusion in the project. The exclusion criteria included combative or confused patients without family support. The patients were asked to review the consent letter (Appendix B) and complete the attached PC: PTSD screening tool (Appendix A). ICD patients who agreed to participate were screened during their follow-up visits. All of the participants were identified through the electronic medical record (AllScripts) and recruited at WASHU in the EP clinic during their usual customary care (UCC) visit. The research was conducted in accordance with the Declaration of Helsinki. Implementation Phase I of the project began with a presentation detailing the project goals, interventions, and expected outcomes to senior leadership, nursing management, and multidisciplinary office staff. A consent letter with the attached PC: PTSD screening tool was reviewed in detail (Appendices A-B). The PC: PTSD screening tool was developed by Annabel Prins et al. 30 (2003) and was designed for use in primary care or other medical settings to screen for PTSD. It is a four-question tool that includes an introductory sentence prompting respondents regarding traumatic events. Prins et al. 30 (2003) suggested that a PC: PTSD screen should generally be considered positive when a respondent answers "yes" to three of the four items in the screen. Phase II of the project began once administration, provider, and staff training was complete. Patients were identified using Allscripts, Washington University's electronic medical record. All patients with an ICD aged 19 years and older were given the informed consent letter with the attached PC: PTSD survey. By the end of 12 days, 50 participants had completed the attached survey. Nine of the 50 participants had positive findings on the screening. Each patient with a positive PC: PTSD survey was referred to a mental health specialist for further evaluation and treatment. We expected a 20% prevalence rate of PTSD among the participating patients in the EP outpatient clinic. Phase III of the project began when enrollment of 50 participants was met. SAS v9.4 software was used to calculate the prevalence of PTSD in ICD recipients in the EP outpatient clinic, and all data were compared to the work by Ladwig et al., 26 which demonstrated a greater than 20% burden. The project coordinator tracked this information. Framework The biopsychosocial model (BPS), used as the framework for this project, assisted medical personnel in facilitating and/or promoting healthy client behaviors. 31 BPS entails the conceptualization and treatment of health problems as an interplay between biological factors, psychological factors, and social factors, culminating in the manifestation of symptoms. 32 The BPS is predicted to be the best theoretical framework capable of establishing a therapeutic process or producing an antitherapeutic effect on ICD patients suffering from PTSD. 33 Screening to identify ICD recipients who are currently suffering or are at risk of PTSD signifies the need for comprehensive, superior care, consistent with BPS. 31 The BPS allows healthcare professionals to expand their analyses, diagnoses, and treatment of illness. 34 Evidence-based interventions with proven patient outcomes are essential in clinical practice. 4
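As a concrete illustration of the scoring rule suggested by Prins et al., the cutoff can be expressed in a few lines of Python. This sketch is ours, not part of the project's workflow; the item labels in the example merely paraphrase the four screen questions:

```python
def pc_ptsd_positive(yes_answers, cutoff=3):
    """Return True if the PC-PTSD screen is positive: at least `cutoff`
    of the four items answered 'yes' (cutoff of 3 per Prins et al., 2003)."""
    if len(yes_answers) != 4:
        raise ValueError("The PC: PTSD screen has exactly four items")
    return sum(yes_answers) >= cutoff

# Example: nightmares = yes, avoidance = yes, on guard = no, numb = yes
print(pc_ptsd_positive([True, True, False, True]))  # True -> offer referral
```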
Nursing leadership requires promoting change and expanding the nurses' scope of practice. This requires the nurses to demonstrate leadership and educational reform in their practice. 4 The Institute of Medicine Report on "The Future of Nursing: Leading Change, Advancing Health" endorsed the need for nurses to coordinate care among clinicians and healthcare agencies, prevent occurrences of acute care episodes, and be involved in managing chronic illness and disease progression, resulting in prevention of rehospitalization. 35 Nurses are in an optimal position to affect important disease outcomes for patients and their families after ICD implantation. 4 Evaluation In this project, a structured PTSD screening protocol using the PC: PTSD tool was administered to all ICD patients in a large EP outpatient clinic in the Midwestern United States. A 30-day time frame was utilized to obtain consent and screen participants. The data were extracted from the electronic health record used in the facility. Project costs. The costs of materials and staffing time were eliminated, as the health administrators determined that the evaluation of PTSD post ICD implant met the current standard of care. There were no salary costs associated with the project. Results The purpose of this analysis was to determine the prevalence of PTSD among patients with ICD implants seen during outpatient EP clinic visits and to establish a referral protocol to mental health services for any positive PC: PTSD screen. A positive response to the PC: PTSD screen was defined by three "yes" responses to any of the four screen questions. The proportion of patients having PTSD and an exact 95% confidence interval based on the binomial distribution are presented in Table 1. Comparisons between PTSD and non-PTSD patients were done using a two-sample t-test for continuous variables and Fisher's exact test for categorical variables (Table 2). PTSD prevalence was compared to the Ladwig et al. 26 study prevalence using Fisher's exact test (Fig. 1). In the Ladwig et al. 26 study, prevalence was determined as the proportion of patients having a positive PTSD result (n = 38 of 147). All analyses were conducted using SAS v9.4. A total of 50 ICD recipients (33 male and 17 female) participated in the project. A total of 18% of the participants had a positive PC: PTSD screen. Each participant with a positive PC: PTSD screen, nine patients in total, was referred to mental health specialists for further evaluation and treatment. When evaluating the PTSD symptoms from the time of the ICD implant (2009–2015), an increased burden of PTSD symptoms was observed in the group of participants with ICDs implanted in 2015 (26%), compared to those with ICDs implanted in 2009 (6%; Fig. 2). The evaluation of the responses to the PC: PTSD questions demonstrated significant findings (Fig. 3). A total of 26% of the overall patient group experienced nightmares during the previous 30 days (P = 0.001). Within the same patient group, 31% reported symptoms of avoidance (P = 0.001). A total of 20% of these patients reported that they felt on guard (P = 0.001) and 24% of this group documented that they felt numb (P = 0.001). The Ladwig et al. 26 study reported a notable 26% prevalence rate compared to the 18% prevalence in the current sample (P = 0.34). This project did not demonstrate a 20% prevalence of PTSD symptoms as initially hypothesized. However, the project did demonstrate a significant 18% burden of PTSD.
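For readers who wish to reproduce the two key calculations in this section (the exact binomial confidence interval and the Fisher's exact comparison against the Ladwig et al. counts), a minimal Python/SciPy sketch follows. The original analyses were run in SAS v9.4, so this is an illustration from the published counts only:

```python
from scipy.stats import binomtest, fisher_exact

# Exact (Clopper-Pearson) 95% CI for the observed prevalence, 9 of 50.
ci = binomtest(k=9, n=50).proportion_ci(confidence_level=0.95, method="exact")
print(f"Prevalence = {9 / 50:.0%}, exact 95% CI = ({ci.low:.3f}, {ci.high:.3f})")

# Fisher's exact test: current project (9/50) vs. Ladwig et al. (38/147).
table = [[9, 50 - 9],      # positive, negative in the current project
         [38, 147 - 38]]   # positive, negative in Ladwig et al.
odds_ratio, p_value = fisher_exact(table)
print(f"Fisher's exact p = {p_value:.2f}")  # approximately 0.34, as reported
```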
Discussion Growing evidence suggests that PTSD symptomatology is highly prevalent in the EP clinic and harmful to psychosocial and physical health. The current project supports the need for routine screening for the presence of PTSD in the outpatient EP clinic based upon the evidence found in the literature review and project findings. Utilization of the PC: PTSD screen as a standard of care in an EP clinic on all ICD recipients would help identify patients at increased risk for PTSD. Early recognition and referral to a mental health specialist provides comprehensive, superior care. Given the known association of increased morbidity and mortality in patients with cardiovascular disease and PTSD, this recommended practice standard becomes imperative for improved patient outcomes, such as reducing the likelihood of future ICD shocks and increased mortality risk. Summary There is an urgent need for a multidisciplinary approach to care for ICD recipients with PTSD symptoms, as was demonstrated by the current project. The prevalence of PTSD in ICD patients measured by this project as 18% supports the need and is consistent with previous research. PTSD symptoms are a key source of emotional distress in patients with ICDs. These symptoms may persist for years; therefore, they should not be overlooked. EP clinicians should screen regularly for PTSD symptoms, and those with positive results should be referred to a mental health specialist. Improved integration of mental health services in the EP clinic will better serve these high-risk patients. Future research is needed to validate the global prevalence of PTSD in EP clinics with ICD recipients. Subsequently, the magnitude of PTSD could be fully evaluated to determine if patient education and a referral protocol to mental healthcare services decrease the prevalence of PTSD and prevent detrimental health outcomes. Limitations The findings from this project should be interpreted in light of several limitations. First, the study sample was relatively small and located in a large EP outpatient clinic in an urban community. While this is an important group to study, given their underlying all-inclusive cardiac indicators for the recipient's implantable cardiac defibrillator, these results may not generalize to other populations. Second, though the PC: PTSD tool is a validated measure of PTSD, its operating characteristics have never been studied in the EP outpatient clinic as a screen for PTSD symptoms.
Diagnosis of Latent Tuberculosis Infection in Hemodialysis Patients: TST versus T-SPOT.TB Hemodialysis (HD) patients should be screened for latent tuberculosis (TB) infection. We aimed to determine the frequency of latent TB infection in HD patients and to compare the effectiveness of the tests used. The files of 56 HD patients followed between 1 January 2021 and 1 October 2022 were retrospectively analyzed. For patients over 18 years of age who were without active tuberculosis and who had a T-SPOT.TB test or a Tuberculin Skin Test (TST), demographic data, the presence of a Bacillus Calmette-Guerin (BCG) vaccine scar, whether the patients had previously received treatment for TB, and any history of encountering a patient with active TB were obtained from the patient files. The presence of previous TB findings on a posterior–anterior (PA) chest X-ray was determined by evaluating the PA chest X-rays taken routinely. Of the patients, 60.7% (n = 34) were male and their mean age was 60.18 ± 14.85 years. The mean duration of dialysis was 6.43 ± 6.03 years, and 76.8% (n = 43) had 2 BCG scars. The T-SPOT.TB test was positive in 32.1% (n = 18). Only 20 patients (35.7%) had a TST and all had negative results. While the mean age of those with positive T-SPOT.TB results was higher (p = 0.003), the duration of HD was shorter (p = 0.029). T-SPOT.TB test positivity was higher in the group that had encountered active TB patients (p = 0.033). However, no significant difference was found between T-SPOT.TB results according to BCG vaccine status or albumin, urea and lymphocyte levels. Although T-SPOT.TB test positivity was higher in patients with a previous TB finding on a PA chest X-ray, there was no statistically significant difference (p = 0.093). The applicability of the TST in the diagnosis of latent TB infection in HD patients is difficult and it is likely to give false-negative results. The T-SPOT.TB test is not affected by the BCG vaccine or immunosuppression. Therefore, using the T-SPOT.TB test would be a more appropriate and practical approach in the diagnosis of latent TB in HD patients. Introduction Tuberculosis (TB) continues to be one of the main causes of death due to infectious diseases all over the world. The World Health Organization (WHO) has implemented the 'End tuberculosis' strategy and, in relation to this, it recommends screening and treating latent TB infection (LTBI) [1]. According to the Turkish Ministry of Health's Tuberculosis Diagnosis and Treatment Guidelines, it is recommended that patients with a high risk of latent TB reactivation, such as hemodialysis (HD) patients, should be screened. Since the risk of transmission is high in hemodialysis units, the development of tuberculosis disease in this patient group must be prevented [2]. When prior studies are examined, in the systematic reviews conducted by Alemu et al., LTBI and active tuberculosis infection were found to be more common in dialysis patients [3,4]. In the study of Xia et al., it was found that the rate of development of active tuberculosis was higher in hemodialysis patients with LTBI. In the same study, in which patients were followed up for about three years, LTBI was also shown to be associated with major adverse cardiovascular events [5]. In the study of Park et al., it was shown that active tuberculosis is more common in dialysis patients and kidney transplant recipients compared to the general population and causes higher mortality rates [6].
In the study of Romanowski et al., it was found that active tuberculosis was seen less frequently in patients who were treated for LTBI [7]. Interferon Gamma Release Assays (IGRA) and the Tuberculin Skin Test (TST) are used in LTBI screening, and it is recommended that an IGRA should be performed in immunocompromised groups such as hemodialysis patients when the TST is negative or cannot be performed. Among the IGRA tests, the T-SPOT.TB, QuantiFERON-TB Gold In Tube (QFT-GIT) or QuantiFERON-TB Gold Plus test are used [2,8–12]. When studies comparing the diagnostic tests are examined, there is no gold standard test. In the study of Akbar et al., the QuantiFERON-TB Gold Plus test was shown to be superior to the TST. However, the small sample size was noted as a limitation of the study [13]. On the other hand, in the study of Setyawati et al., the use of the TST was recommended in the diagnosis of LTBI [14]. However, although it is stated that IGRA tests are not affected by immunosuppression, studies in patients with chronic kidney disease have shown that, as the time on dialysis increases, IGRA tests are more likely to give false-negative results. In this context, it is recommended that patients with chronic kidney disease be screened for LTBI at an early stage [15–18]. Considering the systematic reviews carried out in recent years, it has been shown that IGRA tests are superior to the TST [11,19,20]. The aim of this study is to determine the frequency of latent TB infection in patients undergoing hemodialysis in our hospital and to compare the effectiveness of the tests used in the diagnosis of latent TB infection. At the same time, the aim is to investigate the reasons for any inconsistency between the tests by determining the factors affecting the tests. Study Design and Population This study was planned as a retrospective, cross-sectional study and was conducted with the approval of the Erzincan Binali Yildirim University Clinical Research Ethics Committee (Date: 10 November 2022/Decision No: 05/09). Latent TB infection screening is routinely performed in the hemodialysis unit of our institution. Patients are regularly referred to a tuberculosis dispensary for a TST to be performed. The T-SPOT.TB test is performed simultaneously with a hemogram, biochemical examinations and a posterior-anterior (PA) chest X-ray taken during routine dialysis, for patients who cannot undergo a TST or whose results are negative. Thus, the files of HD patients who were followed up in the hemodialysis unit of a tertiary research and training hospital between 1 January 2021 and 1 October 2022 were reviewed retrospectively. Demographic data of the patients (age, gender, comorbidities, duration of hemodialysis, etc.) and data on the presence of a Bacillus Calmette-Guerin (BCG) vaccine scar, whether or not they had previously received treatment for active TB, and their prior encounters with a patient with active TB were obtained from patient files. The presence of previous TB findings on a PA chest X-ray was determined by evaluating the PA chest X-rays taken routinely. The inclusion criteria were: 1. To have regular hemodialysis; 2. To be over 18 years old; 3. To have T-SPOT.TB or TST results; 4. To not have a concurrent active TB diagnosis. Accordingly, out of a total of 67 patients, 56 patients who met the inclusion criteria were included in the study. Since the number of patients who did not undergo a TST was high, a direct comparison of the two tests could not be made.
Therefore, the factors affecting the results of the T-SPOT.TB test and the TST were evaluated separately. Methodology After blood samples were taken using special tubes for the T-SPOT.TB test (Oxford Immunotec, Oxford, UK), T-Cell Xtend reagent was added to the blood samples and sent to the laboratory. Mononuclear cells were obtained by centrifugation from the blood taken for the T-SPOT.TB test. The resulting mononuclear cells were added to wells previously coated with IFN-γ antibodies. Then, the TB antigens ESAT-6 and CFP-10 and Phytohemagglutinin were added for positive control. The negative control was determined as the well without antigens. These wells were incubated overnight at 37 °C with 5% CO2. After incubation, the wells were washed and secondary conjugated antibodies were added to measure the IFN-γ response. Spots that formed in the wells in which the IFN-γ response was observed were measured by an automated ELISPOT reader (AID systems, Strassberg, Germany). The result was considered positive if the test wells contained at least five more spot-forming cells than the average of the negative control wells [21]. The TST was applied intradermally to the upper inner 2/3 of the left forearm of the patients, in a hairless area away from the veins, with an insulin syringe, using 0.1 mL of tuberculin solution containing 5 TU PPD. The transverse diameter of the resulting induration was measured in mm after 48-72 h. Results with an induration diameter of 5 mm or more were considered positive [22]. The hemogram test of the patients was performed using the Sysmex XN-1000 Hematology System (Sysmex Corporation, Kobe, Japan); biochemical tests were performed with an AU 5800 (Beckman Coulter, Brea, CA, USA). Statistical Analysis The NCSS (Number Cruncher Statistical System) 2007 (NCSS LLC, Kaysville, UT, USA) program was used for statistical analysis. Descriptive statistical methods (mean, standard deviation, median, frequency, ratio, minimum, maximum) were used while evaluating the study data. The conformity of quantitative data to the normal distribution was assessed using the Shapiro-Wilk test and graphical evaluations. Student's t-test was used for comparisons of normally distributed quantitative variables between two groups, and the Mann-Whitney U test was used for comparisons of non-normally distributed variables. Pearson's chi-squared test, the Fisher-Freeman-Halton test and Fisher's exact test were used to compare qualitative data. Logistic regression analysis was used in multivariate evaluations of the risk factors affecting T-SPOT.TB positivity. Significance was evaluated at the p < 0.05 level. Results The study was carried out at a research and training hospital between 1 January 2021 and 1 October 2022. It was carried out with 56 HD patients, of whom 39.3% (n = 22) were female and 60.7% (n = 34) were male. The ages of the patients ranged from 20 to 81, with a mean of 60.18 ± 14.85 years. The duration of HD ranged from 1 to 27 years, with a mean of 6.43 ± 6.03 years. In total, 66.1% (n = 37) of the cases had comorbidities. When the types of comorbidities were examined, it was observed that 32.4% (n = 12) had type 2 Diabetes Mellitus (DM), 78.4% (n = 29) had essential hypertension (HT) and 45.9% (n = 17) had other diseases. Of the patients, 3.6% (n = 2) had a prior history of active TB. These patients stated that they had completed their treatment. The number of patients who had encountered active tuberculosis patients was 8.9% (n = 5).
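The two positivity rules described in the Methodology translate directly into code. The following Python sketch is illustrative only; the thresholds (at least five spot-forming cells above the negative-control mean, and an induration of 5 mm or more read at 48-72 h) are taken from the text, the "any test well" reading is our interpretation, and the function names are our own:

```python
def t_spot_positive(test_well_spots, negative_control_spots, margin=5):
    """T-SPOT.TB rule: positive if any test well shows at least `margin`
    more spot-forming cells than the mean of the negative-control wells."""
    neg_mean = sum(negative_control_spots) / len(negative_control_spots)
    return any(spots >= neg_mean + margin for spots in test_well_spots)

def tst_positive(induration_mm, cutoff_mm=5.0):
    """TST rule: positive if the transverse induration diameter measured
    at 48-72 h is `cutoff_mm` (5 mm) or more."""
    return induration_mm >= cutoff_mm

print(t_spot_positive([12, 3], [1, 2]))  # True: 12 >= 1.5 + 5
print(tst_positive(4.0))                 # False: below the 5 mm cutoff
```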
Assessment of T-SPOT.TB Results A statistically significant correlation was found between age and the T-SPOT.TB test result (p = 0.003; p < 0.01). The mean age of the group with positive results was found to be higher than that of the group with negative results. A statistically significant correlation was found between the time on HD and the T-SPOT.TB test result (p = 0.029; p < 0.05). The HD time in the group with positive results was found to be shorter than in the group with negative results. While the T-SPOT.TB test results did not show a statistically significant difference by gender (p = 0.072; p > 0.05), it is noteworthy that the rate of positive results in men was higher than that in women. There was no statistically significant correlation between the presence of comorbidities and the T-SPOT.TB test results (p > 0.05). A statistically significant correlation was found between encountering a patient with active tuberculosis and the T-SPOT.TB test results (p = 0.033; p < 0.05). The rate of positive results in the group that had encountered a tuberculosis patient was higher than in the group that had not. No statistically significant correlation was found between the number of BCG scars and the T-SPOT.TB test results (p > 0.05) (Table 2). No statistically significant correlation was found between leukocyte count, lymphocyte count, albumin level or urea level and the T-SPOT.TB test results (p > 0.05). While no statistically significant correlation was found between previous TB findings on PA chest X-rays and the T-SPOT.TB test results (p = 0.093; p > 0.05), it is noteworthy that the rate of positive results was higher in the group with a previous TB finding on a PA chest X-ray (Table 3). When we evaluated the risk factors affecting the T-SPOT.TB test, such as age, gender, a previous TB finding on a PA chest X-ray, time on dialysis and encountering an active tuberculosis patient, using enter-method logistic regression analysis, the model was found to be significant and the explanatory coefficient of the model (76.8%) was found to be at a good level. A one-unit increase in age increased the odds of T-SPOT.TB positivity by a factor of 1.101 (95% CI: 1.016-1.192). Encountering an active tuberculosis patient affected T-SPOT.TB positivity with an odds ratio of 59.762 (95% CI: 1.59-2233.42). The effects of gender, a previous TB finding on a PA chest X-ray and time on dialysis were not significant in the multivariate evaluation (p > 0.05) (Table 4). Results of Patients with a Known History of TST A TST was performed in 35.7% (n = 20) of the patients and it was found that all of them had negative results. Of these cases, 30% (n = 6) were female and 70% (n = 14) were male. Their ages ranged from 20 to 81 years, with a mean of 61.85 ± 17.1 years. The duration of HD ranged from 1 to 20 years, with a mean of 6.40 ± 6.08 years. In total, 50% (n = 10) of the cases who underwent a TST had comorbidities. When the types of comorbidities were examined, it was observed that 20% (n = 2) had type 2 DM, 90% (n = 9) had essential hypertension and 50% (n = 5) had other diseases. The rate of having had tuberculosis previously was found to be 5% (n = 1), while 20% (n = 4) of the patients who underwent a TST stated that they had previously encountered an active tuberculosis patient. When the BCG scars of the tested participants were examined, 10% (n = 2) had no scar, 15% (n = 3) had one scar and 75% (n = 15) had two scars (Table 5).
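The multivariate step above was performed with enter-method logistic regression in NCSS 2007. As an illustration under assumed column names (not the authors' code or data layout), the same model could be sketched in Python/statsmodels as follows:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def tspot_odds_ratios(df: pd.DataFrame) -> pd.DataFrame:
    """Fit a logistic regression of T-SPOT.TB positivity on the five
    predictors used in the paper and report odds ratios with 95% CIs.
    Column names here are assumptions for illustration."""
    predictors = ["age", "male", "prior_tb_xray", "dialysis_years", "tb_contact"]
    X = sm.add_constant(df[predictors])
    fit = sm.Logit(df["tspot_positive"], X).fit(disp=False)
    ci = np.exp(fit.conf_int())  # exponentiate log-odds bounds
    out = pd.DataFrame({"OR": np.exp(fit.params),
                        "CI_low": ci[0], "CI_high": ci[1]})
    return out.drop(index="const")  # e.g. the paper reports OR 1.101 for age
```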
Mean leukocyte counts were 7000 ± 4336.87/mm3; mean lymphocyte counts were 2209 ± 3951.76/mm3; the mean albumin level was 16.39 ± 16.41 g/dL; and the mean urea level was calculated as 141.95 ± 22.97 mg/dL. There was no significant difference in leukocyte or lymphocyte count between the positive and negative T-SPOT.TB groups. In 10% (n = 2) of the patients who underwent a TST, previous TB findings were present on a PA chest X-ray. Among the patients who underwent a TST, all of whom had negative TST results, the T-SPOT.TB test results were negative in 20% (n = 4) and positive in 80% (n = 16) (Table 6).
In our study, no significant correlation was found between T-SPOT.TB test results according to BCG vaccine status, and this shows that the test is not affected by the BCG vaccine. The BCG vaccine is in the routine childhood vaccination schedule in our country and is administered at the end of the 2nd month [2]. It is important for our country that the T-SPOT.TB test is not affected by the BCG vaccine. Although the number of patients is small, the positive T-SPOT.TB test in patients with a TB finding on a PA chest X-ray indicates that the probability of a false-negative result is low. However, false-negative results should be investigated in HD patients, including in many patients with a history of microbiologically proven tuberculosis. In our study, T-SPOT.TB test positivity was statistically higher in patients who had encountered active TB patients. In the study of Park et al., QFT GIT positivity, which is one of the IGRA tests, was higher in HD patients who had a history of TB [37]. In a study, T-SPOT.TB test positivity in patients with high risk factors, such as encountering an active TB patient, shows that the probability of developing active TB is higher [38]. In this case, it is important to initiate latent TB treatment without delay in high-risk patients with a positive T-SPOT.TB test. It has been shown that advanced age, active smoking and close contact with someone who has previously had TB are among the risk factors for latent TB infection in HD patients. In the same study, it was stated that high albumin levels and short HD duration facilitate the detection of latent TB infection [39]. In a study conducted in Japan in hemodialysis patients, the frequency of LTBI was found to be higher, especially in people aged 60 and over [40] and in other studies conducted in China and Lebanon, advanced age was found among the risk factors for latent TB infection [5,41]. In our study, the high mean age of the patients with a positive T-SPOT.TB test and the shorter time to enter HD in patients with a positive T-SPOT.TB test support the literature. The incidence of TB in our country has been decreasing over the years, so that the incidence of TB, which was 29.8% in 2005, decreased to 14.4% in 2018 [42]. This may be the reason why the T-SPOT.TB test gives high positive results in older age groups. In our study, there was no significant difference between positive/negative results in terms of albumin, urea and lymphocyte levels, while the average albumin levels of patients with a T-SPOT.TB positive result were higher. However, the fact that there was no significant correlation between the T-SPOT.TB test results according to the urea levels and lymphocyte counts of the patients suggests that the test is not affected by immunosuppression. The small number of patients and the fact that many patients did not have a TST are the limitations of the study. In conclusion, HD patients should be screened for latent TB infection as soon as possible. Although it is recommended to perform a TST first in screening, the applicability of the test is not easy and the possibility of false-negative results is high, which limits its use. The most important advantage of the T-SPOT.TB test is that it is not affected by immunosuppression and it is studied with a single measurement from blood. Therefore, the use of the T-SPOT.TB test would be a more practical and accurate approach to screen for latent TB infection in HD patients. 
Informed Consent Statement: Since it is a retrospective study, patient consent was not obtained. Data Availability Statement: Data will be shared upon request. Conflicts of Interest: The authors declare no conflict of interest.
Effect of Culturally Mediated Right-Favoritism on the Direction of Pseudoneglect on Line Bisection Tasks Objectives: Arabs have a right-to-left language and engage in favoring of the right side or limb when implementing daily routine practices. The purpose of this research is to explore the effect this cultural attitude might have on pseudoneglect, by comparing with a southeast Asian sample that has a left-to-right language structure. Methods: Participants were from two separate ethnic groups (Arabs and Filipinos) residing in Saudi Arabia; healthy individuals 18 years and above were allowed to volunteer in the study. The participants were recruited at King Saud University Medical City and in the general community by both convenience and snowball sampling. Sociodemographic information such as gender, age, years of education, and dominant hand was also documented. The line bisection task (LBT) contained 36 randomly assorted lines of three different lengths placed at five different locations on a white sheet. The percent deviation score (PDS) was used to quantify pseudoneglect. Tests of statistical significance including t-tests and mixed-effects regression were performed to determine if differences existed among different demographic variables or among line properties, respectively. Results: A total of 256 participants were enrolled (Arabs 52.3%). The overall PDS mean and standard deviation (SD) was −0.64 (2.87), p = 0.0004, which shows a significant leftward deviation in the entire cohort. The PDS was −1.26 (2.68) in Filipinos and −0.08 (2.94) in Arabs. The difference was statistically significant (p < 0.0001). A mixed-effects model showed positive changes in the PDS value as the length of the line increased (p < 0.0001) and as the line was placed more rightward (p < 0.0001). However, Filipino participants would still exhibit negative changes in the PDS value in comparison to Arabs (p < 0.0001); there were no significant associations between PDS and other factors such as age, years of education and gender. Conclusion: Differences found here between two distinct ethnic groups support the hypothesis that certain cultural aspects such as language direction and other cultural practices influence the direction and degree of pseudoneglect.
INTRODUCTION Pseudoneglect, a physiological phenomenon first described in 1980 (Bowers and Heilman, 1980), is defined as a normal tendency of healthy individuals to shift spatial attention in a certain direction. The term is used to describe findings stating that normal individuals without any neurological problems would have a systematic asymmetry in spatial attention toward the left (Kinsbourne, 1970; Jewell and McCourt, 2000). It mirrors the clinical condition hemispatial neglect displayed by patients with right parietal lobe damage, which manifests as contralateral spatial attention disruption. This has an important implication for the interpretation of the directional deviations displayed by patients with neglect. A common tool used to investigate pseudoneglect and visuospatial attention is the line bisection test, which requires an individual to identify the exact middle of a line (Carone, 2007). Converging evidence from a large body of literature has found that patients with neglect bisect the line far to the right of the center, whereas healthy subjects typically bisect lines with a minor left bias. This supports a right hemisphere bias for attention allocation and provides evidence that lateralized processes predominantly located in the right hemisphere are normally engaged in healthy subjects during tasks where patients with unilateral neglect fail. This underscores the importance of the right frontoparietal network in attention allocation (Mennemeier et al., 1997). In addition to the lateralized processing bias of the right hemisphere, handedness, gender, assigned hand use, and length and position of the line have all been identified as influences on pseudoneglect direction during line bisection tasks (LBTs; Jewell and McCourt, 2000). Bias toward the left has been described to be larger in men (Jewell and McCourt, 2000), and multiple studies have demonstrated a general attenuation of leftward biases with advancing age (Barrett and Craver-Lemley, 2008; Friedrich et al., 2018). Men may even exhibit a reduction in biases in either direction with increasing age (Barrett and Craver-Lemley, 2008; Chen et al., 2011; Friedrich et al., 2018). The length of the bisected line may also be a factor; for example, younger individuals would bisect to the left of the line center the shorter the line was and to the right the longer the line was (Pierce, 2000), a phenomenon described as a cross-over effect (Pierce, 2000; Friedrich et al., 2018). Other proposed factors include scanning habits, which stem from the reading direction of participants; these are thought to govern the scanning strategies used during the task and, through them, the resulting perceptual asymmetries (Manning et al., 1990; Abed, 1991).
In support of this, previous studies have found a consistent leftward bias in native left-to-right readers and a central or rightward bias in right-to-left readers in various visuospatial tasks (Abed, 1991; Morikawa and McBeath, 1992; Fagard and Dahmen, 2003; Heath et al., 2005; Chokron et al., 2009; Friedrich and Elias, 2014), including those of line bisection (Chokron and Imbert, 1993; Chokron and De Agostini, 1995; Chokron et al., 1997). Perhaps among the most important factors in line bisection outcomes are native reading habits. A study conducted on 120 normal right-handed Israeli and French subjects (Chokron and Imbert, 1993) revealed that Israeli subjects bisected the line to the right of the center, whereas French subjects bisected the line to the left of the center. This demonstrates that reading habits may influence bisection, with a rightward bisection for right-to-left readers and a leftward deviation for left-to-right readers. This suggests that scanning direction and other reading habits may play a role in space utilization. However, a study conducted with the aim of investigating how an imposed scanning direction could influence space perception among normal dextral subjects and patients with neglect with opposite reading habits suggested that these scanning-related effects are not specific to patients with neglect but determine the perceptual organization of normal subjects (Chokron et al., 1998). Moreover, according to a recent study that investigated the development of visuospatial attention in 159 typically developing children, left-handed children had a more significant leftward deviation compared with right-handed children (Ickx et al., 2017). These results again support that handedness may also play a role in deviation. One of the few studies exploring LBT performance in an Arab population yielded novel results, namely, a different pseudoneglect direction than what was previously found in Western studies. A 2019 study (Muayqil et al., 2019) analyzed the performance of healthy Arab volunteers on the LBT and revealed a rightward bias and a tendency for male participants to deviate more strongly than female participants. This type of gender difference is similar to that described in previous Western studies, despite the opposite direction of deviation. Education is a variable that has not been studied in detail with regard to pseudoneglect and is worth exploring, given that visuospatial abilities such as those probed by cancellation tasks are influenced by education (Azouvi et al., 2006; Brucki and Nitrini, 2008). Line bisection has been found to correlate with education in a Brazilian study (Luvizutto et al., 2020) but has not been found to be significantly related in Arabs (Muayqil et al., 2019). Hence, if pseudoneglect to the left represents a default state, then a person with higher education, in Arabic for example, might demonstrate an alternate pseudoneglect direction. Line positioning on a page has also yielded conflicting results (Mennemeier et al., 1997; Ellis et al., 2006; Learmonth et al., 2018; Learmonth and Papadatou-Pastou, 2021). In this study, we aimed to investigate whether differences in the direction of pseudoneglect during tasks of line bisection could be demonstrated between two ethnic groups that differed in both language direction and cultural or religious favoring of the right. Here, we hypothesized that Arabs would have a relatively more rightward position for line bisection in comparison with the southeast Asian group.
The influence of ethnicity, education, age, gender, and line characteristics was also explored. Participants Participants were from two separate ethnic groups: Arabs residing in Saudi Arabia (predominantly Saudi Arabian) and southeast Asians, consisting entirely of Filipino nationals. In both groups, participants aged 18 and above were allowed to volunteer in the study. They were healthy individuals who were able to give consent. Those who suffered from disorders involving the central or peripheral nervous system, such as vascular disease, infectious diseases, trauma, disorders secondary to toxic or metabolic states, neurodegenerative disorders, autoimmune diseases, active systemic diseases, and malignancies, were excluded from the study. Furthermore, the exclusion criteria for Arabs included exposure to any foreign language at an early age, living in a foreign country during early childhood, and enrollment in international schools, to eliminate any factors that could introduce novel reading or scanning strategies. For the Filipino population, being Muslim was an exclusion criterion because of exposure to Arabic and other Islamic teachings that encourage right-handedness. Those who could read or write in Arabic were also excluded. The participants were recruited by medical students at King Saud University Medical City and in the general community by both convenience and snowball sampling. Both populations were divided into five age groups: 18-29, 30-39, 40-49, 50-59, and 60+ years, which were further divided by gender to ensure equal representation of all the age groups and genders of both populations. Sociodemographic information, such as gender, age, years of education, and dominant hand, was also documented. Education was divided into two groups (grade 12 or less and >12th grade). Hand dominance was determined by self-identification. Procedures The LBT contained 18 randomly assorted lines of three different lengths on a horizontal A4 paper. Each participant would complete two line-bisection sheets (A and B), where form B was an inverted version of form A. The tasks were conducted in well-lit quiet rooms to aid concentration. The examiners would place the first paper horizontally in front of the participant and provide instructions to bisect each line in its center with a specified hand. Upon completion, the examiner would take the first sheet and place the second sheet, and the opposite hand would be used to complete it. The participants were pseudorandomized so that they were alternately assigned to complete either sheet A or sheet B first. They were instructed not to rotate the paper or erase or change their marks and to use their corrective eyewear if needed. The lines were drawn and divided equally into three lengths (6, 12, and 18 cm). Each paper had 18 lines for a total of 36 lines. Each line was 3 mm thick, with a space of 1 cm between each line and the next. The first line was 1.5 cm from the upper edge of the paper, and the last line was 1 cm from the lower edge. The exact midpoints of the lines were measured and divided into three categories: 0 cm (the true center of the paper), 2.5 cm (from the true center), and 5 cm (from the true center). To ensure no recurring pattern, we used simple randomization to distribute the lines to five locations on the sheet: far-left, left, center, right, and far-right, as sketched in the code below. This was also done to limit the ability of the participants to use a preceding line as a visual cue.
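A minimal sketch of the sheet-construction logic just described is given below. It is an assumed reimplementation for illustration; the exact allocation of lengths to positions on the study's printed forms is not specified beyond simple randomization, and "inverted" is read here as a left-right mirroring of form A:

```python
import random

LENGTHS_CM = [6, 12, 18]                   # three line lengths
OFFSETS_CM = [-5.0, -2.5, 0.0, 2.5, 5.0]   # midpoint offset from page centre

def make_sheet(n_lines=18, seed=None):
    """Return a randomized sheet: a list of (length_cm, midpoint_offset_cm)."""
    rng = random.Random(seed)
    return [(rng.choice(LENGTHS_CM), rng.choice(OFFSETS_CM))
            for _ in range(n_lines)]

form_a = make_sheet(seed=1)
# Form B as a mirrored version of form A (one reading of "inverted").
form_b = [(length, -offset) for length, offset in form_a]
```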
The participants were asked to mark where they thought the center of the line was with either a pen or pencil. They would do this for each line and on both sheets. They were allowed to wear their corrective eyewear if needed and sit at a comfortable reading distance from the sheet of paper (30-45 cm). They were not allowed to move the sheet into any other orientation or raise it from the table. They were allowed to complete the task at their own pace but had to bisect lines from top to bottom. The examiners would then measure the distance of the participants' marks from the actual midpoint for every line in both forms. Any mark with a deviation of less than 1 mm was considered in the center. A frequently used percent deviation score (PDS; Scarisbrick et al., 1987; Hausmann et al., 2002; Facchin et al., 2016; Ickx et al., 2017) was used to quantify the amount of deviation: the actual left half of the line was subtracted from the left half of the line as marked by the participant, the difference was divided by the actual half of the line length, and the result was multiplied by one hundred. Analysis The mean PDS was calculated for each participant. Each participant's PDS illustrated an average score acquired from the 36 lines they bisected. To calculate the overall PDS mean, we averaged the PDS of every line bisected by each participant (36 lines). Negative and positive PDS values suggest leftward and rightward deviations, respectively. Descriptive statistics (mean and SD) were used to describe the quantitative and categorical variables. A one-sample t-test was used to determine the significance of the bias measured with the PDS in comparison to a hypothesis of no measurable bias (PDS = 0). Bivariate statistical analysis was carried out using the Chi-square test and Student's t-test. ANOVA was used to explore statistical differences between age groups. Mixed-effects regression with repeated measures on participants was performed to determine the change in PDS with each line length and each line position as predictors. A p-value of <0.05 and 95% CI were used to report the statistical significance and precision of results. The data were analyzed using STATA version 15. Demographics The total number of participants in the study was 256, 134 (52.3%) of whom were of Arab ethnicity, and 122 (47.66%) were Filipino. The ages of the participants ranged from 18 to 76 years; the mean age was 40.48 years (SD = 12.78 years). The largest percentage of the participants belonged to the 30-39 years age bracket (30.86%), whereas participants aged 60+ years made up only 9.77% of the total sample size. The total number of females in the study was 132 (51.56%), and that of males was 124 (48.44%). The mean (SD) years of education was 14.5 (3.85) years, with a range of 0-27 years. Among the 256 participants, 4 identified as ambidextrous (3 Arab and 1 Filipino), 23 as left-handed (14 Arab and 9 Filipino), and 229 as right-handed (117 Arab and 112 Filipino). Other demographic data are shown in Table 1. Line Bisection The overall PDS mean (SD) and median were −0.64 (2.87) and −0.56, respectively. These results indicate the presence of a statistically significant degree of leftward pseudoneglect. Further analysis was done between the two ethnicities comparing each respective gender. The results showed a statistically significant difference between Filipino males and Arab males (t = 2.75, p = 0.007, d = −0.5) and between Filipino females and Arab females (t = 2.069, p = 0.04, d = −0.4) (Table 1).
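The PDS definition above, and the shape of the repeated-measures model described in the Analysis section, can be written out compactly. The study's analysis was run in STATA version 15; the Python sketch below is illustrative, with assumed long-format column names:

```python
import statsmodels.formula.api as smf

def percent_deviation_score(marked_left_cm, line_length_cm):
    """PDS = 100 * (marked left half - true half) / true half.
    Negative values indicate a leftward (pseudoneglect) bias."""
    true_half = line_length_cm / 2.0
    return 100.0 * (marked_left_cm - true_half) / true_half

# A mark 1 mm left of centre on a 12 cm line gives PDS of about -1.67.
print(percent_deviation_score(5.9, 12.0))

# Mixed-effects regression with a random intercept per participant;
# expects one row per bisected line (36 per subject), with assumed
# columns: pds, length_cm, position_cm, ethnicity, subject_id.
def fit_pds_model(df):
    model = smf.mixedlm("pds ~ length_cm + position_cm + ethnicity",
                        data=df, groups=df["subject_id"])
    return model.fit()
```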
The PDS differed according to the line position and length. The mixed-effects regression model results are presented in Table 2. (Table 2 notes: the top row shows the mean (standard deviation) of the PDS obtained for each line in relation to its length and its position on the page; the coefficient for line indicates the average increase in PDS value with each longer line or each more rightward-positioned line; the four participants who self-identified as ambidextrous were included under the left-handed group.) A significantly positive increase in average PDS was observed when the task was completed on a longer line and when the line center was located further to the right of the page. Among the variables that significantly affected the PDS according to both line length and position were ethnicity and being left-handed, both being significantly associated with more negative PDS values (Table 2). DISCUSSION This study is among the few describing line bisection discrepancies between populations with distinct cultural backgrounds. One group consisted of Arab individuals brought up in rightward-favoring cultures with a right-to-left direction of written language. In comparison, the other group was composed of Filipino individuals who represented a left-to-right language population with a culture permissive in laterality and choice of handedness. Using samples from both populations with different age groups and education levels revealed an overall pseudoneglect to the left of the true center of each line. This overall result is consistent with previous studies that suggest the presence of a slight left bias in healthy people with intact right hemispheres (Jewell and McCourt, 2000; Çiçek et al., 2009). Despite this, when looking at the PDS mean of each ethnic group separately, a clear difference in the extent of pseudoneglect was observed in each population. In our study, the PDS mean of Filipinos was found to be significantly larger to the left, whereas the Arabs gave a much smaller degree of deviation that was not significant. The significant leftward deviation of Filipinos is in agreement with what most Western studies have described about the characteristics of pseudoneglect, and an influence from factors such as written language direction and a lack of cultural preferences toward one direction over the other is likely. This theory is supported by Friedrich and Elias (2014), who reported that leftward bias is only found in left-to-right language readers on a greyscales task. Although our PDS mean for the Arab population sample did not show a strong rightward deviation, unlike a previous study that gave a PDS mean of +1.57 (SD 3.4) to the right (Muayqil et al., 2019), our results showing no significant leftward pseudoneglect are still consistent. This finding resembles those described in a previous study of native right-to-left readers who did not have a demonstrable pseudoneglect (Friedrich and Elias, 2014) when performing the greyscales task. Rightward deviation has also been seen in older studies, such as in Chokron and Imbert (1993), in which Hebrew and French readers were compared using a line bisection test that displayed a rightward bias in Hebrew readers. We can assume that the lack of pseudoneglect found here and the rightward deviation found in Middle Eastern participants, among other studies mentioned, is likely to be caused by the reading direction of their native language and Middle Eastern and Islamic cultural favoritism to the right.
Whether this rightward bias could dampen the clinical assessment of unilateral spatial neglect in Arab patients with parietal lesions is unknown. This will require further detailed studies of lesioned patients of different backgrounds. While increasing age has been noted as a factor that would induce rightward deviation (Jewell and McCourt, 2000) and younger participants would display a leftward bias (Jewell and McCourt, 2000;Failla et al., 2003), we found no significant association between age of participants and PDS in our study. This could be due to the larger number of younger participants in the study, in which 73% of the participants were under 50 years old. Although values showed that females deviated slightly less than males, the association with gender did not reach statistical significance in our study. This is not entirely inconsistent with previous reports of a modest relation (Jewell and McCourt, 2000) showing that males were more likely to deviate to the left than females. Interestingly, a recent study that examined a group of right-handed patients' performance on line bisection tests within 24 h of an acute right hemispheric stroke demonstrated no difference in relation to gender when performing the tasks for unilateral spatial neglect (Kleinman et al., 2008). Even when analysis was performed between the same genders from both ethnic groups, statistical significance was found. Taking into account the lack of an association between gender and the PDS, one can infer that inherent traits within each ethnicity influence the PDS. As expected, the vast majority of participants were righthanded. Previous literature has explored the role of handedness in LBT performance and found that left-handed individuals made larger leftward errors when using their dominant hand compared with right-handed individuals using their right hand (Scarisbrick et al., 1987;Jewell and McCourt, 2000;Ickx et al., 2017;Muayqil et al., 2019). Here, we found that those who identified as left-handed were more likely to have a negative PDS value; however, the relatively small number of left-handed individuals limited the ability to explore further effects of the dominant hand on the degree of pseudoneglect in this study. This was, however, controlled for by having participants take equal random turns using each hand. The rightward placement of a line on the sheet was associated with a more positive PDS. This appears to be consistent with multiple previous reports on the effect of line location (Jewell and McCourt, 2000). Also consistent with this finding is an earlier study of Arabs that showed increasing leftward errors with more leftward positioned lines (Muayqil et al., 2019). The crossover effect, a phenomenon where more rightward displacements occur with shorter lines in normal individuals and vice versa in neglect patients, has been previously described in studies but with no specified line length of when these phenomena could occur (Monaghan and Shillcock, 1998;Rueckert et al., 2002;Nicholls et al., 2016). Although we found more rightward deviations with a longer line length in this study, earlier studies have been inconsistent, with previous meta-analytical studies describing an overall increase in the same direction of the original bias, more leftward biases, or more rightward biases with increasing line length (Jewell and McCourt, 2000;Friedrich et al., 2018;Learmonth and Papadatou-Pastou, 2021). 
While years of education yielded no significance in our study, it is a factor that could be further explored in future research. A recently proposed influence on bisection results is the level of urbanization (Linnell et al., 2014), which may actually induce a leftward bias. As Riyadh is considered an urban city, it may be beneficial to compare the performance of groups from urban areas with that of groups from rural areas. Another factor worth exploring is the knowledge of two languages with opposite script directions, which has been proposed to cause an attenuation of bias (Kazandjian et al., 2010). Most Arabs of recent generations are taught English early in school or have learned it when taking graduate degrees, and the population has some level of proficiency in the language. A study comparing bilingual Arabic and English speakers to monolingual Arabic speakers would be of interest.

Limitations
Although our study was conducted on a considerable number of participants and was balanced for sex and hand use across different age categories, the proportion of younger individuals was larger. This may have hindered finding a more accurate representation of the association between age and mean PDS, which will require future studies on a larger number of older participants. In addition, we controlled only for hand use and not handedness; further exploration with validated handedness scales is needed. Other considerations include the limited number of test items in the current study and the absence of a controlling factor to discriminate reading-direction effects from overall rightward-favoring habits. Lastly, cultural differences may lead to differences in emotional processing, which has been described as playing a role in line bisection (Vicario et al., 2021).

CONCLUSION
In conclusion, the present study allowed us to contrast the degree and direction of pseudoneglect for two ethnically distinct groups. The analysis supports the proposed hypothesis that culturally acquired cognitive strategies within different ethnicities influence the direction and magnitude of pseudoneglect when performing LBTs in healthy individuals. This influence is likely derived from written language direction and the existence of a cultural favoritism to the right.

DATA AVAILABILITY STATEMENT
The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

ETHICS STATEMENT
The studies involving human participants were reviewed and approved by the Internal Review Board, College of Medicine, King Saud University. The patients/participants provided their written informed consent to participate in this study.

AUTHOR CONTRIBUTIONS
TM was involved in conception, supervision of the research process, analysis, and drafting of the manuscript. GAh, LA, NA, HA, AA, and GAq were involved in participant assessment, data collection, data management, and drafting of the manuscript. WA and MA were involved in revision of the analytic process and the manuscript. All authors contributed to the article and approved the submitted version.
2021-10-22T13:26:23.204Z
2021-10-22T00:00:00.000
{ "year": 2021, "sha1": "94730203ce0dd006c83b4e4130fdc608f1066b62", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fpsyg.2021.756492/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "94730203ce0dd006c83b4e4130fdc608f1066b62", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
56069357
pes2o/s2orc
v3-fos-license
Effects of speed exercises on acceleration and agility performance in 13-year-old female soccer players

The aim of the present study was to examine the effect of a high-intensity sprint program on adolescent female soccer players. A training group of 13 female soccer players, mean age (± SD) 13.6 years (± 0.2), followed an eight-week intervention program for one hour per week, and a group of 13 female soccer players of corresponding age, mean age 13.7 years (± 0.3), served as a control group. Pre- and post-tests assessed 10-m linear sprint, 20-m linear sprint and agility performance. Results showed a significant improvement in agility performance, pre 8.56 s (± 0.54) to post 8.03 s (± 0.38) (p<0.01), and a significant improvement in 10-m linear sprint, pre 2.13 s (± 0.08) to post 2.02 s (± 0.12) (p<0.05), and in 20-m linear sprint, pre 3.75 s (± 0.15) to post 3.62 s (± 0.22). The correlation between 10-m sprint and agility was r = 0.70 (p<0.01), and between 20-m straight sprint and agility performance, r = 0.78 (p<0.01). These findings demonstrate that organizing the training sessions with short sprint bouts at maximum effort, interspersed with adequate recovery time, results in improvements in both linear sprint (acceleration) and agility performance in adolescent female soccer players.

Introduction
Running at high speed is a component of children's play and has been shown to promote development of the muscular system and to stimulate a long-term effect of higher bone density in the skeletal system (Rowland, 2005). Agility can be recognized as the ability to change directions rapidly while sprinting or to start and stop quickly (Little & Williams, 2005). These movements develop coordination, balance, and motor learning (Jullien et al., 2008; Pearson, 2001), and are therefore important not only regarding youth athlete training, but also in physical education sessions. The age span between 13 and 15 years is a 'window' for developing speed (Hughes et al., 2012; Reilly et al., 2000b); however, very few studies have reported the effect of sprint training, especially among adolescent females (Wong et al., 2010). Some reports on youth males have shown an improvement in speed (Pettersen & Mathisen, 2012; Mujika et al., 2009; Venturelli et al., 2008); however, others have shown no effect, possibly because of insufficient training load and duration (Milanovic et al., 2012; Rowland, 2005). Explosive actions are elements of success in soccer; sprint durations are often only 2-4 seconds (Castanga et al., 2003), and sprints occur approximately every 90 seconds (Wong et al., 2010). Short sprint bouts at maximum or near-maximum effort are supposed to have implications for the short-term energy system and the central nervous system (Brown & Ferrigno, 2005); however, the effect of such training is not well understood in the youth population (Hughes et al., 2012; Rowland, 2005). Another aspect is that a higher rate of knee ligament injuries has been reported in youth female athletes, and it has been speculated that a program focused on improving athletic performance, such as speed and agility, would also prevent injuries (Noyes et al., 2013). Considering that no study has been conducted with young females aged 13 years or younger, we wanted to assess the effect on this group.
Most previous programs with children and youths have been conducted with adult training methods (Venturelli et al., 2008); therefore, we wanted to use a method involving more playful and competitive exercises to motivate them to maximal effort (Pettersen & Mathisen, 2012). It was hypothesized that the current program would enhance speed and agility more than ordinary soccer training alone.

Experimental approach
To study the effects, we tested (pre and post) 10-m and 20-m linear sprint and agility performance with 13-year-old female soccer players. The intervention took place in the preseason period, and the exercises were completed in a one-hour session per week for a total of eight weeks. The training group (TG) replaced one of its three ordinary soccer training sessions with the current program and followed a strict regime. Each session started with a 10-minute warm-up and was followed by 50 minutes of short-burst running, consisting of straight-line or change-of-direction sprints of 15 to 20 meters, interspersed with recovery periods lasting 40 to 90 seconds. The program consisted of eight partner-resisted sprints (15 m), eight 20-m linear sprints, eight change-of-direction sprints (15 m) with 60° and 90° turns, and finished with relay races with 90° turns, each participant competing in eight races. Thus, the session consisted of a total of 32 short-burst sprints. The participants were instructed to complete the sprints at maximal speed, and the exercises were performed as competitive sprinting in order to assure optimal motivation. In addition to the intervention program, the participants in the TG undertook two one-hour organized traditional soccer training sessions, consisting of technical drills and small-sided games. The control group (CG) followed an ordinary soccer training program and undertook the same volume of training during the period, with three one-hour sessions per week consisting of technical drills and small-sided games.

Participants
Thirteen female soccer players from a local club, with mean age 13.6 years (± 0.2), participated in the study. Thirteen female soccer players from the same league, with mean age 13.7 years (± 0.3), served as a control group (CG). Written informed consent to participate in the study was obtained from both the participants and their parents in both groups. The study was conducted according to the Declaration of Helsinki, was given institutional ethical approval, and met the ethical standards in sports and exercise science research (Harris & Atkinson, 2011).

Testing procedures
The sprint test consisted of a 20-m track with 10-m split-time recording. The photocells were placed at 20-cm height in the starting position, and at 100-cm height at 10 m and 20 m in the straight-line test. All tests were completed from a standing start, with the front foot placed 30 cm behind the photocells' start line. The agility test course was a 20-m standardized course used in previous studies, starting with a 5-m straight sprint followed by a 90° turn, a 2.5-m sprint followed by a 180° turn, a 5-m slightly curved sprint followed by a 180° turn, a 2.5-m straight sprint followed by a 90° turn, and a 5-m straight sprint (Pettersen & Mathisen, 2012). Three 120-cm-high coaching sticks, which were not allowed to be touched, were used to ensure correct passage in the turns. The test was executed with the same starting procedure as the straight-line test and with photocells placed at 100-cm height at the finish line.
Each participant performed two trials with three minutes of recovery in between; times were recorded to the nearest 0.01 s, and the better of the two times was used. Trials to familiarize participants with the sprint and agility tracks were conducted during both the pre- and post-tests, with two submaximal trials prior to the start of the test. Electronic photocell timing gates were used to record split and completion times (Brower Timing System, USA). The exercises and the tests were executed in a gym with a parquet floor at a temperature of 20 °C. Prior to testing, the participants followed the same warm-up procedure with jogging and sprint drills.

Statistical Analyses
Data were checked for normality using a histogram plot and the Shapiro-Wilk normality test. Descriptive statistics were then calculated and reported as mean ± standard deviation (SD) for each group of players on each variable. Student's t-test showed no difference between the groups at baseline. A two-way analysis of variance (ANOVA) was conducted on the mean difference between the training group and the control group before and after the intervention. The relationship between performance in the linear sprints and the agility test was determined using Pearson's correlation (r). The same procedure was used to detect any correlation among linear sprint, agility, and anthropometric variables. The reliability of the tests was assessed using the ICC, and the test-retest reliability of the parameters describing the players' running and agility performance was good. All calculations were carried out using SPSS v 19.0 (SPSS Inc., Chicago, IL, USA).

Discussion
In the present study we tested the effect of a high-intensity speed-training program on youth female soccer players (Table 1). The main result is a significant improvement (6.2%) in agility performance (Table 2), which is in accordance with findings in youth males aged 11 to 12 years (Pettersen & Mathisen, 2012); to our knowledge, this is the first study carried out with females aged 13 years or younger. It is claimed that initial acceleration and short sprints are more difficult to enhance than maximal velocity (Meylan & Malatesta, 2009), but this study shows a significant improvement in the acceleration phase of 5.1% in the 10-m straight sprint and 3.5% in the 20-m straight sprint (Table 2). Faster completion of the acceleration and agility tests indicates that the intervention program is effective regarding performance enhancement in the TG. It has been suggested that soccer practice alone may contribute to speed development because of the high frequency of short maximal sprints (Michailidis et al., 2013), and growth and maturation may contribute to performance gains; however, no significant changes in sprint and agility performance were found in the CG (Table 2). The development of running speed in youth females is not parallel with that of youth males; however, the results from the current study are in accordance with similar programs consisting of short-speed regimes in youth males (Pettersen & Mathisen, 2012; Mujika et al., 2009; Bloomfield et al., 2007). Other reports show discrepancies in results (Milanovic et al., 2012), possibly because of insufficient training load and duration. In the current investigation, experienced training staff led all training sessions, thus ensuring high intensity, with a focus on executing all exercises with maximal effort and controlling the recovery times.
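The percentage improvements quoted above follow directly from the pre- and post-test group means reported in the abstract. The short sketch below simply restates that arithmetic; the function and variable names are ours, and the small difference from the reported 5.1% for the 10-m sprint reflects rounding of the group means.

    def percent_improvement(pre: float, post: float) -> float:
        """Relative improvement of a timed test, in percent (lower time = better)."""
        return (pre - post) / pre * 100.0

    # Pre/post group means from the abstract, in seconds
    tests = {
        "agility":     (8.56, 8.03),
        "10-m sprint": (2.13, 2.02),
        "20-m sprint": (3.75, 3.62),
    }
    for name, (pre, post) in tests.items():
        print(f"{name}: {percent_improvement(pre, post):.1f}%")
    # Prints roughly 6.2%, 5.2% and 3.5%, in line with the values given in the Discussion.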
It is supposed that producing muscular contraction force at high speed, as in the current program, can improve explosive actions such as acceleration and agility (Bangsbo, 1994). High-speed actions need an adequate rest interval because incomplete recovery may reduce the force of shortening and muscle power. Two to five minutes of rest have been recommended for adults to ensure the quality of each repetition; in youth athletes, recovery from high-intensity activities has been reported to be better than in adults (Ramirez-Campillo et al., 2014). Thus, the recovery period of 40 to 90 seconds used in the present program is supposed to be sufficient, and is in line with findings from a study of youth soccer players with rest intervals between 30 and 120 seconds (Ramirez-Campillo et al., 2014). Task specificity in the program appears to be essential (Young et al., 2001), and the program in the current study consisted of straight-line sprinting and sprinting with changes of direction, acceleration and deceleration, all with maximal effort. However, the literature is unclear as to which mechanisms may support this development. Adaptations gained from the sprint program are presumably connected to the neural plasticity of puberty, and it is believed that the improvement is due to neuromuscular adaptations including the recruitment and activation of motor units and coordination (Hughes et al., 2012; Thomas et al., 2009; Rowland, 2005). We suppose the performance is influenced by strength, balance and neuromuscular coordination through the training program (Meylan & Malatesta, 2009; Brown & Ferrigno, 2005; Aagaard, 2003). Furthermore, linear sprint and agility have been found to be independent abilities that are specific and produce limited transfer to each other in adult athletes (Little & Williams, 2005; Young et al., 2001), and both acceleration and agility are regarded as independent predictors of soccer performance in youth soccer (Reilly et al., 2000a, 2000b). Previous studies with youth males aged 11 to 14 years found a stronger correlation between straight-line sprint and agility performance than in adults (Jakovljevic et al., 2012; Pettersen & Mathisen, 2012), and in the current study we showed a significant correlation between linear speed and agility: 10-m vs agility r = 0.70, and 20-m vs agility r = 0.78 (Table 3). We can speculate that, in the youth population, these abilities share common physiological and biomechanical determinants, implying transfer between them, and that specificity becomes more pronounced in the adult population. Agility has a high relevance for team sports like soccer because of change-of-direction actions in response to an opponent (Little & Williams, 2005). Speed and agility have shown differences between elite and sub-elite youth soccer players (Reilly et al., 2000a), and high-speed actions are believed to have implications for the outcome of match play. Further research should focus on volume and frequency for optimal improvement, and investigate the effectiveness of combining speed training with strength or plyometric training.

Conclusions
The aim of the present study was to examine the effect of a high-intensity sprint program on 13-year-old female soccer players. The participants showed significant improvement both in linear speed up to 20 meters (acceleration) and in agility performance, whereas no change was found in the control group following a program of traditional soccer training alone.
The result of the study highlights the potential of using a program consisting of competitive exercises of short sprint bouts at maximum effort in adolescent female training.
2018-12-05T02:01:27.598Z
2014-12-01T00:00:00.000
{ "year": 2014, "sha1": "0c88ade6f2180b12089b8fb349c5e266b63fd8f4", "oa_license": null, "oa_url": "https://doi.org/10.7752/jpes.2014.04071", "oa_status": "GOLD", "pdf_src": "MergedPDFExtraction", "pdf_hash": "0c88ade6f2180b12089b8fb349c5e266b63fd8f4", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Psychology" ] }
236978717
pes2o/s2orc
v3-fos-license
Household transmission of COVID-19 among the earliest cases in Antananarivo, Madagascar

Background Households are among the highest-risk settings for the transmission of SARS-CoV-2. In sub-Saharan Africa, very few studies have described household transmission during the COVID-19 pandemic. Our work aimed to describe the epidemiologic parameters and analyze the secondary attack rate (SAR) in Antananarivo, Madagascar, following the introduction of SARS-CoV-2 in the country in March 2020. Methods A prospective case-ascertained study of all identified close contacts of laboratory-confirmed COVID-19 infections was conducted in Antananarivo from March to June 2020. Cases and household contacts were followed for 21 days. We estimated epidemic parameters of disease transmission by fitting parametric distributions based on infector-infected paired data. We assessed factors influencing transmission risk by analyzing the SAR. Findings Overall, we included 96 index cases and 179 household contacts. Adjusted with the best-fit normal distribution, the incubation period was 4.1 days (95% CI [0.7–7.5]). The serial interval was 6.0 days (95% CI [2.4–9.6]) after adjusting with the best-fit Weibull distribution. On average, each index case infected 1.6 family members (95% CI [0.9–2.3]). The mean SAR among close contacts was 38.8% (95% CI [19.5–58.2]) with the best-fit gamma distribution. Contacts older than 35 years were more likely to be infected, and the highest SAR was found among them. Conclusion The results of our study provide key insights into the epidemiology of the first wave of SARS-CoV-2 in Madagascar. High rates of household transmission were found in Antananarivo, emphasizing the need for preventive measures to reduce community transmission.

Understanding of COVID-19 comes largely from disease surveillance and epidemiologic studies undertaken in China (2, 3) and high-income countries (4-6). However, confirmed cases of COVID-19 have also occurred in low- and middle-income countries (7, 8). The first confirmed case of COVID-19 in Africa was reported in Egypt on February 14, followed by Algeria (9). By March, COVID-19 cases were reported across most of the continent. The first three confirmed COVID-19 cases were imported to Antananarivo, the capital city of Madagascar, on March 19-20, 2020. Following this notification, and with the aim of stopping or slowing down the rate of transmission of SARS-CoV-2, the Malagasy government introduced stringent non-pharmaceutical interventions (NPIs), such as physical distancing measures (school closures, work-from-home arrangements for civil servants, closing of bars and restaurants, and suspension of public leisure activities). The country exceeded 17 000 confirmed cases and more than 240 total deaths in early November 2020 (10). During the early phases, testing, contact tracing, quarantine, and isolation were carried out when cases were identified. Uninfected and asymptomatic contacts were often closely tracked, providing information about transmission and the natural history of the disease. Here, we analyzed data from the earliest cases detected in Antananarivo, Madagascar, and their intradomiciliary contacts to characterize epidemiological parameters of COVID-19 during the first wave that affected the capital city of Madagascar. Using data from contact tracing, we evaluated SARS-CoV-2 transmission by estimating the serial interval, the household secondary attack rate (SAR), and the average number of family members infected by each index case. Furthermore, we describe risk factors for transmission and infection.
METHODS
We used an adaptation of generic protocols already in place in some countries, such as "The First Few Hundred (FF100)" enhanced case and contact protocol for pandemic influenza in the United Kingdom of Great Britain and Northern Ireland (11).

Case identification
On March 12-20, 2020, the Malagasy Ministry of Public Health tested for SARS-CoV-2 all travelers coming from Europe and China on international flights. Nasopharyngeal and oropharyngeal specimens (NP/OP) were sent to the virology unit at the Institut Pasteur de Madagascar, where they were tested for SARS-CoV-2 using real-time RT-PCR (RT-qPCR) as previously described (12). The population included in this study was among the first confirmed cases of COVID-19. NP/OP and blood specimens were collected from laboratory-confirmed cases and household contacts as soon as possible after laboratory confirmation. For all laboratory-confirmed index cases and household contacts, data were collected during the first visit and every 7 days until day 21. NP/OP specimens were tested using RT-qPCR within 24 h following collection, while sera were tested retrospectively at the end of the study.

Epidemiological parameters
• The incubation period refers to the delay between exposure or contact with confirmed cases and symptom onset. We determined the left and right boundaries of the possible exposure and symptom onset times. Imported cases were assumed to have been exposed within 14 days prior to symptom onset. Cases without recent international travel history but with exposure to a confirmed case were assumed to have been exposed from the time of earliest to latest possible contact with the case. Only cases for which it was possible to identify the earliest and latest time of exposure and who had a date of symptom onset were included in the estimation of the incubation period. The earliest exposure time was assumed to be within 15 days, and the latest exposure time was assumed to be within 30 days. Its distribution was calculated by fitting a parametric distribution (normal, gamma, lognormal, Weibull).
• Transmission was analyzed by examining the relationship between index cases and their close contacts.
• The serial interval is the average time, expressed in days, between the time of symptom onset of a primary case and that of a secondary case (13). Only pairs of symptomatic primary and secondary cases were included in the estimation of this indicator.
• The household SAR is calculated by dividing the number of household contacts who were later confirmed to have SARS-CoV-2 infection by the total number of household contacts included in the study.
• The distribution of the average number of family members infected by each index case was calculated from the number of secondary infections observed among close contacts of each index case.
All distributions of those parameters were also calculated by fitting parametric distributions (normal, lognormal, gamma, Weibull) based on infector-infected paired data. The SAR was estimated for a range of factors using univariate analysis and multivariate mixed-effects logistic regression models with a random intercept for households. Households with one or more household contacts were included. The following potential explanatory variables were examined: characteristics of the index case, including age, gender, comorbidities, and whether the case was symptomatic; characteristics of the contact, including gender and age group; and household size.
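The empirical SAR calculation and the parametric fitting described above can be illustrated with a short sketch. The code below is ours, not the authors' R code; the secondary-case count is back-calculated from the 29.6% empirical SAR and 179 contacts reported later, and the serial-interval values are invented placeholders. Selection of the final regression model is described in the next paragraph.

    import numpy as np
    from scipy import stats

    # Secondary cases implied by the reported empirical SAR of 29.6% among 179 contacts
    n_household_contacts = 179
    n_secondary_cases = 53            # 0.296 * 179 ≈ 53 (back-calculated, not a reported count)
    sar = n_secondary_cases / n_household_contacts
    print(f"Empirical household SAR: {sar:.1%}")

    # Placeholder serial-interval data (days) for symptomatic infector-infected pairs
    serial_intervals = np.array([3.0, 5.0, 6.0, 4.0, 8.0, 7.0, 5.0, 9.0, 6.0, 4.0])

    # Fit the candidate parametric distributions and compare them by AIC
    candidates = {"normal": stats.norm, "lognormal": stats.lognorm,
                  "gamma": stats.gamma, "weibull": stats.weibull_min}
    for name, dist in candidates.items():
        if name == "normal":
            params, k = dist.fit(serial_intervals), 2
        else:
            params, k = dist.fit(serial_intervals, floc=0), 2  # shape + scale, location fixed at 0
        aic = 2 * k - 2 * np.sum(dist.logpdf(serial_intervals, *params))
        print(f"{name}: AIC = {aic:.1f}, fitted mean = {dist.mean(*params):.1f} days")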
Stepwise backward variable selection (p < 0.20 in the univariate analyses) was used to choose the final model in the multivariate analysis. All analyses were conducted with R software (14).

Ethical statement
Written informed consent was obtained from participants before enrolment in this study. It was approved by the Ethics Committee of …

Household transmission dynamics
The empirical SAR among close contacts was 29.6%, and after gamma distribution adjustment the SAR was 38.8% (95% CI [19.5-58.2]). Our estimated SAR in Antananarivo was higher than those reported in Asia, mainly in Singapore (15) and China (16, 17), and was similar to the household SAR found in the United Kingdom (18) and in some US states, such as Tennessee and Wisconsin (19). The heterogeneity in SAR across different regions might be explained by differences in control measures and crowdedness in households (17). The high SAR reported in the current study reflects the existence of high transmission within households. In Madagascar, at the beginning …

Our results suggested higher transmission among household contacts aged 35 years and older compared to children. A systematic review and meta-analysis by Madewell et al. (22) reported that the SAR of SARS-CoV-2 to adults was higher than that to children, suggesting that adults might be more susceptible to SARS-CoV-2 than children when exposed to the same sources of infection.

[Table: unadjusted SAR odds ratios and multivariate analysis of sec…]

Data collected in Madagascar from March to September 2020 confirmed the same finding and suggested that individuals aged 50 years and older had a higher probability of having a positive RT-qPCR for SARS-CoV-2 (12). We found an incubation period of 4.1 days (95% CI [0.7-7.5]), which is similar to those reported elsewhere (3, 23). This estimate provides evidence to support a 14-day period of quarantine for infected and exposed persons. We reported a wider serial interval, which is similar to that found in China (3, 24). Two hypotheses could explain these data: recall bias in the date of symptom onset, and intra-household contamination from asymptomatic cases due to poor adherence to protective/distancing measures. … contacts, which may lead to selection bias and to biases in the estimation of the incubation period. As inclusion in the study was limited to the early phase of the outbreak, when the epidemic was rapidly growing, we might have missed other infected people with longer incubation periods. Our results confirm that the household is an important venue for transmission and could explain the intensity and rapid spread of the virus during the second wave that started in early March 2021. To avoid transmission in the community, control measures such as appropriate isolation of cases and their household contacts should be adopted.
2021-08-12T06:23:49.274Z
2021-08-10T00:00:00.000
{ "year": 2021, "sha1": "ed4d2d559c7603e3ceaf2805573d563e7c23c59e", "oa_license": "CCBY", "oa_url": "https://onlinelibrary.wiley.com/doi/pdfdirect/10.1111/irv.12896", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "19cf5ec025beb475b8ad07109e84c82c3057430b", "s2fieldsofstudy": [ "Environmental Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
6152388
pes2o/s2orc
v3-fos-license
Exploring mechanochemistry to turn organic bio-relevant molecules into metal-organic frameworks: a short review Mechanochemistry is a powerful and environmentally friendly synthetic technique successfully employed in different fields of synthetic chemistry. Application spans from organic to inorganic chemistry including the synthesis of coordination compounds. Metal-organic frameworks (MOFs) are a class of compounds with numerous applications, from which we highlight herein their application in the pharmaceutical field (BioMOFs), whose importance has been growing and is now assuming a relevant and promising domain. The need to find cleaner, greener and more energy and material-efficient synthetic procedures led to the use of mechanochemistry into the synthesis of BioMOFs. Introduction Mechanochemistry is a straightforward and clean technique by which the desired products are obtained in high purity and high or quantitative yield. It combines high reaction efficiency with a minimum input of energy and solvent. It is an approach to green chemistry, an area devoted to the discovery of environmentally friendly synthetic pathways, eliminating or drastically reducing the amount of solvent necessary to catalytically promote reactions. Mechanochemistry consists of grinding together two or more compounds to promote a reaction, by inducing the breaking/forming of covalent or supramolecular bonds [1,2]. There are different approaches towards mechanochemistry. The most direct is neat grinding (NG), in which the reagents are ground together without the addition of any solvent or other ad-ditive [3]. NG evolved into liquid-assisted grinding (LAG), also known as solvent-drop grinding or kneading, which includes the addition of catalytic amounts of solvent to facilitate the reaction. This technique proved to be useful for the synthesis of new compounds that could not be obtained by solution or NG techniques, while still avoiding excessive use of solvent [3][4][5][6][7]. The addition of catalytic amounts of an inorganic salt together with catalytic amounts of solvent, resulted in another mechanochemical approach, the ion and liquid-assisted grinding (ILAG), a technique that was also very successful in promoting solidstate reactions [8][9][10][11]. Polymer-assisted grinding (POLAG) is another variation of mechanochemistry, very recently disclosed and making use of polymers to stimulate the reaction [6,12]. All these applications comprise the formation of intermolecular interactions, the basis of supramolecular chemistry. This discipline was fully recognized internationally with the attribution of the Nobel Prize of Chemistry in 1987 to Donald J. Cram and Jean-Marie Lehn [52][53][54][55]. The energetics involved in supramolecular chemical reactions are not very severe, making mechanochemistry an excellent technique to be used in these processes. MOFs combine coordination and supramolecular chemistry. Coordination chemistry is present in the coordination of organic molecules (linkers) to metal ions or clusters (coordination centers), while supramolecular chemistry relies on the formation of intermolecular interactions between linker molecules. This combination results in 1D, 2D or 3D porous frameworks. The pore size can be adjusted by varying the size of the linkers, a modification that can be associated to the change in functional groups in the organic moieties. These functional groups can form intermolecular interactions with potential pore incorporated molecules [72,[84][85][86]. 
Their characteristics led researchers to explore the potential of MOFs as incarceration and/or delivery systems [70,79,[83][84][85][86][87]. In BioMOFs, endogenous molecules, active pharmaceutical ingredients (APIs) or other bioactive organic molecules are used as building blocks for the framework [8]. Besides the advantages of MOFs as controlled delivery systems, BioMOFs have additional benefits, such as: i) porosity is no longer an issue as the release of the APIs or bioactive molecules is achieved by degradation of the framework, ii) no multistep synthesis is required as the molecules are part of the matrix itself, iii) synergetic effects between the active molecule and the metal may be explored, and iv) co-delivery of drugs is possible if a porous network is built with one ingredient and an incorporation of another is feasible [88]. BioMOFs are promising candidates for the development of more effective therapies with reduced side effects. Two families of MOFs, MILs (materials of Institute Lavoisier) and CPOs (coordination polymers from Oslo), were the first to be studied for their potential medicinal applications. Here, the main focus was their use as drug-delivery systems [71,72,89], with particular attention to the toxicity of the metal centers [84]. Toxicity is a concern not only for the safe use of these compounds for humans but also for environmental reasons. These issues also led to the quest for biodegradable MOFs, the first being prepared in 2010 by Miller et al. [77]. Another family of MOFs, ZIFs (zeolitic imidazolate frameworks), that involves organic imidazoles as linkers, has been explored for medicinal purposes as a result of the enhancement of MOF structural and stability properties [90,91]. Bioactive molecules like caffeine [92,93] and anticancer drugs [94][95][96][97][98] were incorporated in ZIF-8 and tests proved that these systems allowed for a controlled drug release. Further studies involving ZIF-8 with encapsulated anticancer drugs have also shown that these have potential to be used in fluorescence imaging. The number of reports on MOFs synthesized by mechanochemistry [8,28,50,[99][100][101] has been increasing and some in situ studies on the mechanosynthesis of MOFs and coordination polymers are already being carried out with success. These studies show the propensity for stepwise mechanisms, especially in case of ZIFs, with a low density or a highly solvated product often formed first which is then transformed into increasingly dense, less solvated materials, resembling Ostwald's rule of stages [8,[102][103][104][105][106][107]. Many reviews on mechanochemistry [10,28,29,50,101,107,108] and MOFs [76,78,79,88,90,109] have been published due to the increasing relevance of both the technique and the type of com-pounds. We have recently published two reviews, one focused on the use of mechanochemical processes towards attaining metallopharmaceuticals, metallodrugs and MOFs synthesized within our group [49], and another on the design, screening, and characterization of BioMOFs in general [110]. To the best of our knowledge, this is the first short review targeting on the mechanochemical synthesis of BioMOFs. Review BioMOFs prepared by mechanochemistry and their main features BioMOFs can be divided into two major classes: i) BioMOFs in which the APIs are the building blocks of the framework, thus excluding the need for large pores and ii) BioMOFs in which the API is incorporated (encapsulated) as a guest within the pores of the MOF. 
In the second situation, the choice of the linker is crucial, as it needs to be an organic molecule listed of the generally regarded as safe (GRAS) compounds, an endogenous compound or a bioactive molecule. In both classes, the judicious choice of the metals to be used in these systems is of great importance. Several metal species are known to display important biological activities that are applied for the treatment or diagnosis of several diseases. So, BioMOFs should contain either endogenous metal cations essential for life or exogenous metals that display a specific bioactive function in appropriate dosages, allowing to take benefits of possible synergetic effects between the metal and the APIs. Nevertheless, toxicity is also dependent on many other factors such as speciation, chemical nature, administration route, exposition time and accumulation/ elimination from the body [88]. The examples given here will be separated according to the function of the APIs in the BioMOF: linker or guest. BioMOFs with active pharmaceutical ingredients (APIs) as linkers Several BioMOFs with APIs as building blocks have been synthesized recurring to mechanochemistry and we will just present a few examples herein. It is common that these compounds are reported as coordination networks, or metallopharmaceuticals. One example we would like to mention has been proposed by Braga et al. [111], in which gabapentin was used as linker to build two new coordination complexes with ZnCl 2 and CuCl 2 ·2H 2 O by manually grinding both solids. Gabapentin is a neuroleptic drug used for the prevention of seizures, the treatment of mood disorders, anxiety, tardive dyskinesia [111][112][113][114][115][116][117][118][119], and neuropathic pain [120]. The synthesis of these coordination compounds with gabapentin was based on studies concerning the understanding of the physiological and pathophysiological roles played by Zn 2+ and Cu 2+ in various biological systems [121][122][123], and therefore the use of such coordination complexes was envisaged a new route for the delivery of those drugs. Gabapentin was also used by Quaresma et al. [124] in the synthesis by manual grinding of seventeen new metal coordination networks with Y(III), Mn(II) and several lanthanide chlorides (LnCl 3 ), Ln = La 3+ , Ce 3+ , Nd 3+ , and Er 3+ . Ten out of these compounds were structurally characterized and represent the first coordination networks of pharmaceuticals involving lanthanides, showing different types of architectures based on mono-, di-, tri-and hexametallic centers and 1D polymeric chains. These new compounds proved to be unstable under shelf conditions. With regard to their thermal stability these compounds lose water at approximately 80 °C and melt/decompose above 200-250 °C [124]. This type of BioMOFs enclosing lanthanides and cations with potential luminescence properties can be explored for theranostic applications. Figure 1 shows some examples of the networks obtained. Braga et al. synthesized new BioMOFs using 4-aminosalicylic acid and piracetam. 4-Aminosalicylic acid is an antibiotic that has been used in the treatment of tuberculosis, inflammatory bowel diseases, namely distal ulcerative colitis [125,126] and Crohn's disease [127], while piracetam is a nootropic drug used to improve cognitive abilities. A 1D framework was synthesized which is stable up to 130 °C. 
The new compound resulting from the reaction between piracetam and Ni(NO 3 ) 2 ·6H 2 O consists of a polymeric chain based on a tetrameric repeating unit comprising a pair of piracetam molecules and two metal atoms and proved to be stable up to approximately 80 °C. Both BioMOFs were obtained recurring to manual mechanochemistry. Due to the possibility of synergic effects with Ag + , a known antimicrobial agent, the new network with 4-aminosalicylic acid and silver is highly interesting, as it represents a promising candidate to future biomedical applications [128]. Having in mind the synthesis of BioMOFs involving the excipient magnesium oxide initially proposed by Byrn et al. [129], Chow et al. and Friščić et al. developed new BioMOFs by LAG, grinding together MgO with the non-steroidal anti-inflammatory drugs (NSAIDS) ibuprofen (S and RS-forms), salicylic acid [130] and naproxen using water as the grinding liquid [7]. With naproxen, LAG was also used to screen for hydrated forms of magnesium-naproxen by systematically varying the fraction of water in the LAG experiments [7]. Low, intermediate and high amounts of water as grinding liquid led to the formation of a 1D coordination polymer monohydrate, a tetrahydrate complex and an octahydrate, respectively ( Figure 2) [7,29]. BioMOFs based on generally regarded as safe (GRAS), bioactive or endogenous linkers for the encapsulation of APIs Another approach to build a BioMOF consists of the use of generally regarded as safe (GRAS), bioactive or endogenous linkers to form the 3D framework followed by the encapsulation of the APIs in the BioMOF. In these cases, the 3D frameworks may be synthesized by mechanochemistry, but the encapsulation of the drug is usually carried out by soaking methods. However, a significant number of these frameworks obtained by mechanochemistry with potential to be used as drug delivery systems have not yet been fully tested for the loading of drugs. Pichon et al. proposed the first BioMOF synthesized by mechanochemistry using copper acetate and isonicotinic acid [46]. This type of compounds is useful for gas separation applications, but they haven't been tested for biological applications yet. The solvothermal methods that were previously reported for the synthesis of this compound required high temperatures (150 °C), a 48 hours reaction and the use of solvents. With mechanochemistry, the same compound is obtained in high yield within 10 minutes at room temperature and without the use of solvents. Thus, this work revealed a fast, convenient, less expensive and effective preparative method for the synthesis of robust and stable 3D BioMOFs and rapidly inspired other groups to follow this methodology. This has been proved by the work of Wenbing Yuan et al., in which a very important 3D BioMOF, known as HKUST-1, was synthesized by grinding together copper acetate with 1,3,5benzenetricarboxylic acid (BTC, Figure 3) in a ball mill for 15 minutes without solvent. This procedure delivered HKUST-1 with some improved properties, including higher microporosity and surface area, when compared to those made in solution and by other techniques [131]. The presence of unsaturated open metal sites turns this compound into a potential adsorption/desorption material. Gravimetric tests with nitric oxide (NO), a gas with medicinal applications, demonstrated that HKUST-1, despite showing a reasonable aptitude to absorb this gas, displays very low rates of desorption when compared to others MOFs [56,84,133,134]. 
Furthermore, HKUST-1 is reported as a mean to achieve a controlled release of biologically active copper ions and it has shown to be an effective antifungal agent against representative yeast and mold [135]. Friščić et al. also reported the synthesis of coordination polymers and BioMOFs using LAG by grinding together zinc oxide and fumaric acid. In this work, they initially obtained four different coordination polymers, depending on the choice of the grinding liquid: anhydrous zinc fumarate (1) when grinding with ethanol or methanol; a dihydrate (1·2H 2 O) when using a mixture of water and ethanol; a tetrahydrate (1·4H 2 O) and a pentahydrate (1·5H 2 O) when grinding with three or four equiv of water, respectively ( Figure 4) [136,137]. This method was further applied to the mechanochemical synthesis of porous materials with introduced auxiliary ligands. These would allow for coordination to zinc in order to generate pillared MOFs, that could be used to incorporate APIs as a guest. Indeed, they synthesized two BioMOFs by grinding together zinc, fumaric acid and 4,4'-bipyridyl (bipy) or trans-1,2-di(4-pyridyl)ethylene (bpe) as ligands in the presence of a space-filling liquid agent (N,N-dimethylformamide, DMF). This synthesis also proceeded when using environmentally more friendly solvents, such as methanol, ethanol or 2-propanol, making these BioMOFs acceptable for biological and pharmaceuticals applications ( Figure 5) [136,138]. However, studies supporting this goal have not been reported so far. In 2015, Prochowicz et al. reported a new mechanochemical approach called "SMART" (secondary basic units-based mechanochemical approach for precursor transformation), in which pre-assembled secondary building units were explored. This method led to the successful synthesis of MOF-5 by mechanochemistry starting from Zn 4 O and 1,4-benzenodicarboxylic acid, without the need for bulky solvents, external bases or acids and high temperatures, all required in the conventional synthetic procedure [139]. Even though MOF-5 has not yet been tested for the incorporation of drugs, using the same linker, Xu et al. unveiled in 2016 the mechanochemical synthesis of MIL-101(Cr) involving heating which was successfully tested for the incorporation of ibuprofen. In this case, mechanochemistry proved once again to be a much faster process than the traditional hydrothermal synthesis that was used to obtain this compound involving solvents and often also hydrofluoric acid [140]. The linker used to build MIL-101 is 1,4-benzenedicarboxylic acid. Different applications of MIL-101 have been reported, from which we highlight the delivery of ibuprofen. MIL-101 exhibits a very high capacity of ibuprofen and therefore only very little quantities of MIL-101 are necessary for the administration of a high dosage of ibuprofen [141]. The mechanochemical synthesis was expanded by Beldon et al. to the synthesis of a very different family of metal-organic materials, the zeolitic imidazolate frameworks (ZIFs) [8]. ZIFs exploit a combination of metal ions and imidazolate linkers to build the 3D framework and have simultaneously the character-istics of MOFs and zeolites, making them very promising for biomedical applications [90,91]. In their work, Beldon et al. explored the synthesis of new ZIFs using imidazole (HIm), 2-methylimidazole (HMeIm) and 2-ethylimidazole (HEtIm) as ligands. Initially, they used LAG with ZnO and the previous imidazole ligands in the presence of DMF as a space-filling liquid. 
However, this method only partially succeeded: with HIm the quantitative formation of ZIF-4 was obtained after 60 min, whereas with HMeIm only partial formation of ZIF-8 was achieved, and with HEtIm no reaction was observed at all [8]. As ILAG had already been shown to accelerate and direct the formation of large-pore pillared MOFs [9], it was applied to these systems. A variety of ZIFs with defined topologies was obtained quantitatively by this method using ammonium nitrate, methanesulfonate or sulfate. Topology control could be achieved through either the solvent chosen for grinding or the choice of the salt additive. The most impressive result was the persistent formation of ZIF-8 (Figure 6), as it was obtained in all the reactions, showing the notable stability of this framework and making it a promising candidate for biomedical applications [8]. Indeed, ZIF-8 has been widely used to encapsulate APIs such as doxorubicin, an anticancer drug [96,142], and even as an efficient pH-sensitive drug-delivery system [92,95,143,144]. Usually, the encapsulation of small molecules into MOFs involves two steps: i) the synthesis of the framework and ii) the encapsulation of the small molecule by soaking and diffusion methods under mild conditions [96]. However, there are some one-pot syntheses reported for the encapsulation of small molecules into ZIF-8. Liédana et al. disclosed the in situ encapsulation of caffeine into ZIF-8 [98], and Zhuang et al. proposed a method to synthesize nanosized ZIF-8 spheres with encapsulation of small molecules into the framework during synthesis [95]. Also, Zheng et al. proposed a fast, single-step synthesis of ZIF-8 with direct incorporation of small molecules, including doxorubicin [142]. The controlled drug release is due to the small pore size of ZIF-8, which prevents premature release, and to its pH sensitivity: at pH 5-6, dissociation of the framework takes place with consequent drug release, which is ideal for targeting cancer cells [95].

Mechanochemistry in the synthesis of a metallodrug, another metal-organic target
The study of the chemical reactivity of bismuth and carboxylic acids, in particular salicylic acid, is quite relevant for the pharmaceutical industry because of the large-scale production of bismuth subsalicylate (Pepto-Bismol), an antacid used in the treatment of stomach and intestinal disorders. Until then, this product had been synthesized exclusively in solution under harsh reaction conditions. André et al. [11] used ILAG [146,147] to prepare it directly from Bi2O3 (Bi) and salicylic acid (SA) in a 1:1 (Bi·SA) stoichiometry. This method proved not only to be more efficient but also very selective [11]. Changing the stoichiometric ratio of the reactants to 1:2 and 1:3 allowed the syntheses of another two bismuth salicylate compounds, namely the disalicylate and the trisalicylate, respectively. The only previously known crystal structure obtained for bismuth salicylates was a Bi38 cluster isolated by recrystallization of the trisalicylate from acetone [148], and this was then considered a possible model for the structure of bismuth subsalicylate [11]. In 2011, André et al. performed a similar recrystallization of the disalicylate and obtained a similar Bi38 cluster with coordinated N,N-dimethylformamide (DMF) molecules instead of acetone, showing the structural robustness of this core in solution.
The crystal structure solution from powder X-ray diffraction data of the disalicylate revealed the first crystal structure of a bismuth salicylate without coordinated solvent molecules (Figure 7). This indicates that bismuth salicylates form extended structures without the presence of other ligands [11].

Conclusion
All examples presented herein and collected in Table 1 show the advantages of combining pharmaceutically relevant organic molecules with metal centers in order to obtain compounds with enhanced biological properties. New metal-organic frameworks, BioMOFs, for use in controlled drug delivery and/or release or in other biological applications, were successfully synthesized either by direct incorporation of the bioactive molecule in the framework (linker) or by encapsulation (guest). Mechanochemistry has proved to be an efficient, high-performance, environmentally friendly, cleaner, and faster synthetic procedure, leading to significantly lower production costs. There is still much to explore in the combination of BioMOFs with mechanochemistry, and this is certainly an expanding area in the field of organic coordination chemistry.
2018-04-03T03:30:41.700Z
2017-11-14T00:00:00.000
{ "year": 2017, "sha1": "5639860dfd3282dc58ef892895ec902539f56a70", "oa_license": "CCBY", "oa_url": "https://www.beilstein-journals.org/bjoc/content/pdf/1860-5397-13-239.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "5639860dfd3282dc58ef892895ec902539f56a70", "s2fieldsofstudy": [ "Chemistry", "Materials Science" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
88518804
pes2o/s2orc
v3-fos-license
Can we define a best estimator in simple 1-D cases?

With a small number of measurements, the three best-known estimators of a single parameter give very different results when estimating a scale parameter. A transformation of this scale parameter into a location parameter, by using logarithms of the data, renders the three estimators equivalent in the absence of any a priori information.

Scope
What is the best estimator for assessing a parameter of a probability distribution from a small number of measurements? Is the same answer valid for a location parameter like the mean as for a scale parameter like the variance? It is sometimes argued that it is better to use a biased estimator with low dispersion than an unbiased estimator with a higher dispersion. In which cases is this assertion correct? In order to answer these questions, we will compare, on a simple example, the determination of a location parameter and of a scale parameter with three "optimal" estimators: the minimum-variance unbiased estimator, the minimum mean square error estimator and the a posteriori mean.

Relevance
Nowadays, it seems that processing a huge amount of data is a very common task. However, in some cases it is of great importance to be able to assess the statistical parameters of a process from a very small number of measurements. This can occur, for instance, in the analysis of the very long-term behavior of time series (e.g. amplitude estimation of very low frequencies, time keeping, etc.). This paper focuses on the choice of the best estimator to be used with, say, fewer than 10 measurements.

Prerequisites
The reader is expected to have a basic understanding of statistical data processing, such as the one developed in [1].

.1 Two examples of measurement
We consider in this note the simplest archetypal measurement situation: an unknown quantity µ is measured N times, giving N measurements forming a set {d_i, i = 1 … N}, noted {d_i} in the following. Each measurement may be written as

d_i = αµ + n_i

where
• αµ is the deterministic part: α is a known constant, |α| ≤ 1, and µ a parameter we want to estimate;
• n_i is the random part, which is supposed to be a zero-mean additive white Gaussian noise, of unknown variance σ², independent from one measurement to another.
In the following, we discuss successively the estimation of the two unknown parameters appearing in this measurement process. The generic name of the unknown parameter will be θ, corresponding respectively to:
• (Example 1) θ = µ. In this first example, θ can be either positive or negative and is called a location parameter, since the probability density of the data d_i can be expressed as a function of the difference of location |d_i − αθ|.
• (Example 2) θ = σ². This variance is positive and is called a scale parameter, since it determines the precision scale relevant for the measurement of the location parameter of the measurement process.
Before going further into the definition of estimators, let us recall the heuristic concepts of location parameter versus scale parameter and of model world versus measurement world.

Location parameter and scale parameter
Let us consider a random variable d which depends on a parameter θ. Let us denote p_θ(d) its probability density function (PDF).

Location parameter
A location parameter is a parameter whose variation induces a shift of the PDF of a random variable which depends on it. It is an additive parameter.
Let us denote p 0 (d) the PDF obtained for θ = 0: The mean and the median of a normal distribution are location parameters. Scale parameter A scale parameter is a positive parameter which controls the flattening or the narrowness of a PDF, for example the variance of a normal distribution. It is a multiplicative parameter. Let us denote p 1 (d) the PDF of the estimator of this parameter, obtained for θ = 1. We have : p 1 (d) is for example a χ 2 distribution if d is the estimator of the variance of a normal distribution. Model world and measurement world Two problems are generally addressed: the direct problem, which aims to forecast the measurement data knowing the parameter and the inverse problem, which aims to estimate the parameter knowing the measurement data. In the same vein, Tarantola distinguishes the model space, i.e. the space in which the parameter is given, from the data space, i.e. the space in which the measurement data are given [2]. In the following, we will use the terms "model world" and "measurement world". Model world (direct problem) In the model world, the question is "Knowing the parameter θ, how are the measurements {d i } distributed?". We have to define the conditional PDF: p(d i |θ) where the vertical bar means "knowing" (i.e. the probability of obtaining the measurement d i knowing that the parameter is equal to θ). In the model world, the model parameter θ is considered as a definite quantity whereas the measurements {d i } are realizations of a random variable d. However, the parameter θ is precisely the unknown quantity that we want to estimate. Supposing that this parameter is known has sense only in theory and simulations. Measurement world (inverse problem) In the measurement world, the question is "Knowing the measurements {d i }, how to estimate a confidence interval over θ?". We need thus to reverse the previous conditional PDF for defining p(θ|{d i }) which describes the probability that the parameter is equal to θ knowing that the measurements are This is the right question of the metrologist! Let us notice that in the measurement world, the parameter θ is considered as a random variable whereas the measurements {d i } are data, i.e. totally determined values. Three "optimal" estimators We want to construct an "optimal" estimatorθ as a function of the measurements:θ = f ({d i }) and we will see rapidly that the usual optimality criteria do not work equally on both examples. Three estimators are often used as optimal, even if it is well known that they are generally different from each other for small N . Let us first see the main properties of these three estimators. Their mathematical calculations for both examples will be described in Section 4.5. P1.2) among the unbiased estimators, it has the smallest variance: Since we consider the mathematical expectation ofθ, it means that we consider this estimator as a random variable, like the measurements, and thus we define these properties in the model world. • (Estimator 2) minimum mean square error estimator (MMSE). A MMSE estimator is an estimation method which minimizes, in the model world, the mean square error of the estimator regardless of a possible bias [3]. Properties: the idea is to admit some bias (first term of the sum) in order to strongly diminish the variance of the estimator (second term). • (Estimator 3) a posteriori mean. 
In this so-called Bayesian approach, the measurements, and thereforeθ, are no more considered as random variables (as they are in the model world), but as a particular realization of these random variables, i.e. known data having given values in the measurement world. In this measurement world, θ appears as a random variable and we aim to construct a probability law on θ with density p(θ) that takes into account these measurements: p(θ|{d i }) and, if available, all the information that was known before the measurements: π(θ). This information π(θ) is called "a priori " and p(θ) = π(θ)p(θ|{d i }) is called "a posteriori probability density" (i.e. after the measurement process). The a posteriori mean Properties: P3.1) this estimator minimizes the a posteriori mean square error:θ is now a constant and since the variance of θ (second term of Eq. (5)) does not depend on the estimator, the mean square error is minimized if the first term vanishes. Applying the three estimators to the two measurement processes Let us show some significant differences in the use of these estimators on the two above examples. Minimum-variance unbiased estimator on Example 1 It is evidently:θ whered is the sample mean, i.e. the average of the N measurements. However, this estimator cannot be employed if N |α| 2 1: though the noise term has a zero mean, its variance σ 2 N |α| 2 is high and the error, despite its null expectation, can be high. Therefore, estimators 2 or 3 must be used. MMSE estimator or a posteriori mean on Example 1 We find (see Annex 1):θ This formula tells us that we may restore θ, the true value of the signal before measurement, if the signal-to-noise ratio after measurement is high. It is known as Wiener filtering and is based on an estimation, even rough, of this signal-to-noise ratio. This kind of information does not come directly from the measurements: at a specific frequency, it is not straightforward to distinguish between the signal and the noise. It is called "a priori" information (before the measurements). To simplify, we have supposed E prior (θ) = 0 and the derivation of the a posteriori mean [3] assumes Gaussian a priori laws for n and θ. Of course, if we have absolutely no information about the signal-to-noise ratio, we should consider all the output signal as carrying information and the unbiased estimator is the best. This situation rarely occurs in practice: the power of the additive noise can often be estimated, for example at a high frequency where the transfer function is zero, and the power of the signal can be estimated at low frequencies. Even if this estimation is not precise and if the noise deviates appreciably from the Gaussian hypothesis, the restoration by using Eq. (7) proves [4] to be much better than a simple multiplication by 1 α . Example 2: estimation of the variance of a Gaussian process Although all the three above estimators are asymptotically unbiased (i.e. converge to the true value for large N ), they give very different results for small N in absence of any a priori information about σ 2 . Let us consider N = 2, the minimum number of measurements that gives an information on the variance. We also take α = 1: we have two measurements {d i = θ + n i , i = 1, 2} where n i is an additive independent centered Gaussian noise, of totally unknown variance σ 2 : no a priori information is available. Well known calculations (see Annex 2 for passing from Eq. (8) to Eq. 
(9)) lead to: • Minimum-variance unbiased estimator: • MMSE estimator: • a posteriori mean: The minimum-variance unbiased estimatorσ 2 E given in Eq. (8) is known in the time and frequency metrology domain as the Allan variance. It should be certainly used, because of its unbiasedness, if we can repeat the measure on many other couples of measurements. However, we restrict our analysis to the case where only d 1 and d 2 are available, or, at least, where the number of measurements is small. Rough explanation of the differences between the estimator results Unlike in Example 1, the last two estimators give very different results. An explanation of this difference can be given as follows. Let us define, for N = 2, Y =σ 2 E /σ 2 . In the model world, Y obeys, as shown in Annex 2 Eq. (19), a χ 2 law with N − 1 = 1 degree of freedom (see Figure 1(A)), where the estimatorσ 2 E is a random variable and the true value σ 2 appears as a constant coefficient. In the measurement world, σ 2 is a random variable which follows, as shown in Annex 2 Eq. (20), an inverse χ 2 N −1 distribution (see Figure 1(B)) andσ 2 E a known constant coefficient issued from the measurements. In both worlds, the probability of having a true value σ 2 much greater than the unbiased estimatorσ 2 E has the same non negligible value : for example P (σ 2 > 10σ 2 E ) = 0.25. However, this probability has completely different consequences in each world. In the measurement world, the possibly huge values of the true value σ 2 induce the divergence of the a posteriori mean for N < 4 (huge values of 1 20)). These huge values occur in the real world with a non negligible probability and the divergence of the mean is a simple consequence of this existence (see Figure 1(B)). In the model world for a given true value σ 2 , the random realizations of the measurements give, with the same non negligible probability, values such that the estimator σ 2 E is much smaller than σ 2 . In other words, the true unknown value of σ 2 is huge, if expressed in units ofσ 2 E , the only available value from the measurements. Unfortunately, these low values ofσ 2 E with respect to the true value have almost no weight in the estimator expectation given by Eq. (19), and also in the MMSE estimator expectation which is proportional to it: less than 1% of this expectation is due to values ofσ 2 E < 0.1σ 2 , though the probability of havingσ 2 E < 0.1σ 2 is the same 0.25 as in the measurement world (see Figure 1(A)). Because the danger of underestimating the true value is not properly taken into account, the MMSE estimator is a bad estimator for a scale parameter. Though not new (see for example [2]), this statement was often missed [5]. Even for a greater number of measurements, the difference between the estimators remains non negligible. For instance, the MMSE and a posteriori mean differ by 20% for 20 measurements (see Figure 2). Solution: An optimal estimator ? The situation of Example 2 seems at first sight desperate, since the "right" estimator in the measurement world diverges. The best solution would be, of course, to make more measurements: E(σ 2 ) is defined for N ≥ 4. In some cases, this is not possible, especially in the time and frequency metrology domain at very long duration (10 6 − 10 7 s): we would have to wait for several days-months. Moreover, appreciable differences remain between the estimators even for more measurements, as shown in Figure 2. 
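As an illustration of the differences just described, the following Python sketch (not part of the paper) simulates the N = 2 case in the model world. The form σ̂²_E = (d₁ − d₂)²/2 for the Allan variance and the S/(N + 1) scaling taken here for the MMSE estimator are standard results that are assumed to correspond to Eqs. (8) and (9), since the displayed equations are not reproduced in the text.

# Monte Carlo sketch of the N = 2 variance-estimation example (model world).
# Assumptions: sigma2_E = (d1 - d2)^2 / 2 (Allan variance) and the classical
# S/(N+1) scaling for the MMSE estimator.
import numpy as np

rng = np.random.default_rng(0)
sigma2_true = 1.0            # true variance, known only in the model world
n_trials = 200_000

d = rng.normal(0.0, np.sqrt(sigma2_true), size=(n_trials, 2))
sigma2_E = 0.5 * (d[:, 0] - d[:, 1]) ** 2      # unbiased estimator for N = 2
sigma2_mmse = sigma2_E / 3.0                   # S/(N+1) scaling (assumption)

print("E[sigma2_E]    ~", sigma2_E.mean())     # ~1.0 : unbiased
print("E[sigma2_mmse] ~", sigma2_mmse.mean())  # ~0.33: strongly biased low
# Probability that the estimator underestimates the true value by a factor 10,
# i.e. P(chi2_1 < 0.1); the text quotes ~0.25 for this event.
print("P[sigma2_E < 0.1 sigma2] ~", np.mean(sigma2_E < 0.1 * sigma2_true))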
The following considerations give clues to the solution: • A confidence interval on σ 2 can be defined without any difficulty: at 95 % of confidence, σ 2 lies between 0.18σ 2 E and 700σ 2 E since it obeys an inverse χ 2 density with one degree of freedom. The divergence of the mean comes from the values above the high limit of this interval. • Because of this huge confidence interval, only an order of magnitude of σ 2 can be determined, suggesting that the natural variable choice is log(σ 2 ). • The entire set {d i } can be replaced without any loss of information by an estimator, called a sufficient statistics: the three estimators, Eq. (8-10), define each a sufficient statistics for the variance of a gaussian distribution, differing only by a known multiplicative constant for a given number of measurements. More generally, C ·σ 2 E is a sufficient statistics whatever the value of the multiplicative constant C. Likewise the sample mean is a sufficient statistics for the determination of the mean of a gaussian distribution. [1] • To determine the a posteriori law p(θ), we have used the so called "fiducial argument", introduced by Fisher [6], which is valid if: 1. no a priori information exists, rendering the measurements strictly not recognizable as appertaining to a subpopulation [6] 2. transformations of a sufficient statistics C ·σ 2 E to u and of θ to τ exist, such that τ is a location parameter for the PDF p(u|τ ) [7], i.e. u = log(C ·σ 2 E ) and τ = log(θ), transforming (see Eq. After this transformation, the quotients characterizing any scaled probability density, Eq. (3), become differences and the probability density in both the model and the measurement worlds can be expressed as a function of u − τ : p(u|τ ) = p(τ |u) = f (u − τ ), implying a constant a priori probability density π(τ ). Indeed, p(u and τ ) = p(τ |u)p(u) = p(u|τ )π(τ ), implying π(τ ) = p(u), where p(u) is constant since u is a constant issued from the measurements. If such a transformation exists, the derivation of p(θ) is warranted: as stated in Eq. (20), p(σ 2 |σ 2 E ) is an inverse χ 2 density and the expectation of σ 2 can be calculated. On the other hand, the direct use of u and τ , though not yet the most popular choice, allows a perfect symmetry between both worlds [8]. Indeed, for any scale parameter θ we have u = log(θ) + B, τ = log(θ), where B is a constant chosen in order to obtain a non biased estimator in the model world: where θ 0 is the true value of the parameter θ and τ 0 = log(θ 0 ). Then we have in the measurement world, after measurements leading to a given value u 0 : The demonstration is performed by a variable change x = u − τ 0 in Eq. (11), leading to +∞ −∞ xf (x)dx = 0 by using +∞ −∞ f (x)dx = 1. Then Eq. (12) is obtained by using y = u 0 − τ . In the particular case of the Example 2 with N = 2, we find u = log(σ 2 E ) + B, with B = 1.27. Hence we proposed [8] in linear units a new estimator σ 2 L = exp(u) = exp(B)σ 2 E = 3.56σ 2 E . This estimator is log-unbiased: E [log(σ 2 L )] = log(σ 2 ) and, in the measurement world E [log(σ 2 )] = log(σ 2 L ). In short, log(σ 2 L ) is both an unbiased estimator and an a posteriori mean. The above arguments extend to other estimators of a scale parameter. The most evident other example is the estimation of the expected lifetime λ −1 of a Poisson process, defined as the inverse of the mean rate λ. Similar considerations allow the definition of a log unbiased estimator, that is, for example for a unique measurement, 1.78 (i.e. 
exp(e), where e is the Euler's constant) times the minimum-variance unbiased estimator. Note that making N measurements consists in waiting from an origin time t = 0 until the time t N where the N th event occurs. The minimum-variance unbiased estimator is t N /N . A wrong procedure would be to define a priori a time interval T and to count the number of events in T . Such a procedure induces a priori information on the magnitude of λ and leads to famous absurdities when trying to define unbiased estimators [9]. Let us return to Example 1 under the light of the above considerations. The sample meand obeys a Gaussian distribution of mean θ and variance σ 2 /N (for α = 1). Hence, it is directly a location parameter, ensuring thatd is both a minimum-variance unbiased estimator in the model world and the a posteriori mean in the measurement world, if no a priori information on the mean is available. This is a great difference with the situation of Eq. (7), where the a priori information, i.e. the a priori mean power E(θ 2 ) of the signal, could be taken into account either with the MMSE estimator or with the a posteriori mean, but not with an unbiased estimator. Note that, in the measurement world, p(d − θ) is no more Gaussian since σ 2 is known by its probability density. It is well known that is a Student distribution ford in the model world. Seidenfeld [10], has shown that this law is also, as proposed by Fisher [6], a Bayesian a posteriori law for θ. Of course, obtaining an unbiased estimator in both worlds is not sufficient to define an optimal estimator. It should also have the minimum variance in the model world. In the measurement world, p(θ) must be constructed from such a minimum-variance unbiased estimator to ensure a minimum variance on θ. In this case, the estimator will be also MMSE because the constant a priori probability density ensures the same MSE in both worlds for a location parameter. For Example 1 in the absence of any a priori information, p(θ) can be inferred either from the sample average or defined as the mean of the probability density equal to the product of the data likelihoods. Both methods lead to the same results, since the sample mean is a complete sufficient statistics for the underlying Gaussian probability. The minimum-variance unbiased estimator can have more complex forms: for example in the case of a biexponential (or Laplace) distribution of the data, it is obtained by adequate weighting of the ordered data [11]. In this case, we have verified that the mean of the product of the data likelihoods gives the same estimator as the so-called "efficient estimator" proposed in [11]. Conclusion We have recalled that the three most popular estimators give very different results for a small number of measurements in some standard situations. If a priori information is available, the difference is irreducible because the best estimator is biased in the direction of this a priori information. If no a priori information is available, except the model of the underlying probability, these three estimators give the same result for a location parameter. For a scale parameter, using the logarithms of the data allows the transformation of this scale parameter to a location parameter, ensuring the equivalence of the three estimators. Authors • Eric Lantz (eric.lantz@univ-fcomte.fr) is professor in the Department of Optics, Femto-ST, at the University of Franche-Comté. 
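As a quick numerical cross-check of the constants quoted above (B = 1.27 and exp(B) = 3.56 for N = 2), the following sketch uses the identity E[log Y] = ψ(k/2) + log 2 for Y ∼ χ²_k, where ψ is the digamma function; this identity is not stated in the paper but follows from the χ²_(N−1) model of Annex 2.

# Numerical check of the log-bias constant B for the log-unbiased estimator:
# with Y = (N-1) sigma2_E / sigma^2 ~ chi2_{N-1},
#   E[log sigma2_E] = log sigma^2 + psi((N-1)/2) + log 2 - log(N-1),
# hence B = log(N-1) - log 2 - psi((N-1)/2).
import numpy as np
from scipy.special import digamma

def log_bias_constant(N: int) -> float:
    k = N - 1
    return np.log(k) - np.log(2.0) - digamma(k / 2.0)

B = log_bias_constant(2)
print(f"N = 2: B = {B:.3f}, exp(B) = {np.exp(B):.2f}")   # B ~ 1.27, exp(B) ~ 3.56

# Monte Carlo cross-check in the model world (sigma^2 = 1, N = 2):
rng = np.random.default_rng(1)
d = rng.normal(size=(500_000, 2))
sigma2_E = 0.5 * (d[:, 0] - d[:, 1]) ** 2
print("E[log sigma2_E] ~", np.log(sigma2_E).mean())      # ~ -1.27, i.e. -B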
Annex 1: Wiener filtering Let us construct from the data d_i = αθ + n_i an estimator θ̂: Clearly the real coefficient β should approach 1 if the noise term in θ̂ can be neglected, while β should approach 0 if this noise becomes predominant. The mean-square error writes: where we have used our hypotheses on the noise: n_i is centered and additive, i.e. independent of θ, meaning that all cross terms between the true value and the noise vanish. The determination of the value of β minimizing the mean square error is immediate but uses the unknown true value θ: Hence, to use this filter in practice, we have to replace θ² by a level of signal E(θ²), known a priori, leading to Eq. (7). See [4] for more details.
Annex 2: The unbiased estimator σ̂²_E can be normalized such that Y = (N − 1)σ̂²_E/σ² obeys a χ²_(N−1) probability density p_Y(Y), of mean N − 1. Let Z = f(Y) be a random variable which is a monotonic function of Y. If p_Z(Z) is the corresponding probability density, we have p_Y(Y)dY = p_Z(Z)dZ, giving immediately the density for Z = σ̂²_E. In the measurement world, where σ̂²_E is known, the fiducial argument [6] consists in considering the random variable Z = σ² = (N − 1)σ̂²_E/Y, leading to an inverse χ²_(N−1) density whose expectation is infinite for N < 4 and equal to (N − 1)σ̂²_E/(N − 3) for N ≥ 4. Note that Eq. (20) can also be obtained from a pure Bayesian point of view by introducing an a priori law 1/σ² to calculate the a posteriori law p(σ²|σ̂²_E). Both points of view are equivalent [7].
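The shrinkage estimator of Annex 1 can be sketched as follows. The parametrisation θ̂ = β·d̄/α and the use of E(θ²) as the a priori signal power are assumptions, since Eq. (7) itself is not reproduced in the text; the sketch only illustrates the mechanism of trading bias for a large variance reduction when the measurement is noisy.

# Wiener-type shrinkage sketch (assumed parametrisation theta_hat = beta*dbar/alpha):
# the MSE (beta-1)^2 E(theta^2) + beta^2 sigma^2/(N alpha^2) is minimised by
# beta = N alpha^2 E(theta^2) / (N alpha^2 E(theta^2) + sigma^2).
import numpy as np

def wiener_estimate(d, alpha, prior_signal_power, noise_var):
    """Shrinkage estimate of theta from measurements d_i = alpha*theta + n_i."""
    d = np.asarray(d, dtype=float)
    N = d.size
    beta = (N * alpha**2 * prior_signal_power) / (
        N * alpha**2 * prior_signal_power + noise_var)
    return beta * d.mean() / alpha

# Example: weak coupling (alpha = 0.1) and strong noise; the unbiased estimate
# dbar/alpha is very dispersed, the shrunk estimate much less so.
rng = np.random.default_rng(2)
theta, alpha, sigma = 1.0, 0.1, 1.0
d = alpha * theta + rng.normal(0.0, sigma, size=5)
print("unbiased :", d.mean() / alpha)
print("wiener   :", wiener_estimate(d, alpha, prior_signal_power=1.0, noise_var=sigma**2))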
2014-08-11T19:32:39.000Z
2012-12-18T00:00:00.000
{ "year": 2012, "sha1": "1b5afac52145473146af4ded13f9d7706453b1a2", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1b5afac52145473146af4ded13f9d7706453b1a2", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
246607969
pes2o/s2orc
v3-fos-license
Active Visuo-Tactile Interactive Robotic Perception for Accurate Object Pose Estimation in Dense Clutter This work presents a novel active visuo-tactile based framework for robotic systems to accurately estimate pose of objects in dense cluttered environments. The scene representation is derived using a novel declutter graph (DG) which describes the relationship among objects in the scene for decluttering by leveraging semantic segmentation and grasp affordances networks. The graph formulation allows robots to efficiently declutter the workspace by autonomously selecting the next best object to remove and the optimal action (prehensile or non-prehensile) to perform. Furthermore, we propose a novel translation-invariant Quaternion filter (TIQF) for active vision and active tactile based pose estimation. Both active visual and active tactile points are selected by maximizing the expected information gain. We evaluate our proposed framework on a system with two robots coordinating on randomized scenes of dense cluttered objects and perform ablation studies with static vision and active vision based estimation prior and post decluttering as baselines. Our proposed active visuo-tactile interactive perception framework shows upto 36% improvement in pose accuracy compared to the active vision baseline. Abstract-This work presents a novel active visuo-tactile based framework for robotic systems to accurately estimate pose of objects in dense cluttered environments. The scene representation is derived using a novel declutter graph (DG) which describes the relationship among objects in the scene for decluttering by leveraging semantic segmentation and grasp affordances networks. The graph formulation allows robots to efficiently declutter the workspace by autonomously selecting the next best object to remove and the optimal action (prehensile or non-prehensile) to perform. Furthermore, we propose a novel translation-invariant Quaternion filter (TIQF) for active vision and active tactile based pose estimation. Both active visual and active tactile points are selected by maximizing the expected information gain. We evaluate our proposed framework on a system with two robots coordinating on randomized scenes of dense cluttered objects and perform ablation studies with static vision and active vision based estimation prior and post decluttering as baselines. Our proposed active visuo-tactile interactive perception framework shows upto 36% improvement in pose accuracy compared to the active vision baseline. Index Terms-Interactive Perception; Visuo-Tactile Perception; Force and Tactile Sensing; Perception for Grasping and Manipulation I. INTRODUCTION F OR a variety of applications ranging from safe objectrobot interaction to robust grasp and manipulation, the ability to accurately estimate the 6 degree-of-freedom (DoF) pose of objects is critical. Especially in unstructured cluttered environments, objects may be occluded from certain viewpoints or may have other objects resting on each other leading to challenging scenarios for accurate object pose estimation. Such scenarios are common for logistic or retail warehouse robots as well as robots operating inside households. Interactive perception wherein purposeful physical interactions produce new sensory information to change the state of the Manuscript received: September 9, 2021; Revised January 3, 2022; Accepted January 21, 2022. This paper was recommended for publication by Editor Dan Popa upon evaluation of the Reviewers' comments. 
This work was supported in part by BMW Group and the European Commission via INTUITIVE (Grant agreement ID: 861166) Fig. 1: Experimental setup: A Robotiq two-finger adaptive robot gripper is equipped with 3-axis tactile sensor arrays and mounted on a UR5 robotic arm and a Franka Emika Panda robot with an Azure Kinect (RGB-D) sensor attached to its end-effector. The scene consists of objects placed randomly in dense clutter. An optical tracker is used to provide ground-truth pose of the target object. environment to enhance perception has been proposed to deal with such scenarios [1]- [3]. Particularly, prehensile and nonprehensile manipulation actions such as grasping or pushing objects can be used to rearrange the cluttered scene to reduce uncertainty in perception [4]- [6]. Such interactive perception maneuvers need to leverage dynamic visual viewpoints as the scene changes upon executing the manipulation actions. Furthermore, there might be residual uncertainty in the pose estimate through visual perception due to incorrect calibration of the sensors, environmental conditions (occlusions, variable lighting conditions), or object properties (transparent, specular, reflective) [7], [8]. Tactile perception can be used to verify the visual pose estimate to provide a robust and correct pose estimation [9]- [11]. Vision-based pose estimation in clutter using RGB images or 3D point clouds have been proposed in several works [12]- [16]. As single viewpoints for pose estimation in clutter is extremely challenging, prior works have used multi-views and combined the observations to recover the object pose [13]. The next best view (NBV) calculation for selecting multiple views have been proposed through information gain metrics such as Shannon entropy [15], mutual information [17] and so on. As vision-based methods are susceptible to failure in cases of dense object clutter, interactive perception methods have been proposed [18]. Semantic scene understanding methods that are critical for interactive perception have been proposed such as support graphs [19]- [21] which describe the support relationships between objects through geometric reasoning. However, these works abstract the real world objects as simple shapes such as cubes, cylinders and spheres to draw support relations which may not be always applicable in realistic scenes. Similarly, analytical grasp planning relying on geometrical cues may fail in dense clutter with complex objects due to unknown object dynamics. Hence, data-driven approaches have gained popularity for performing manipulation in unstructured scenes [22]. In [23], grasping objects in clutter was demonstrated using generative grasping convolutional neural networks (GG-CNN). Zeng et al. [18] proposed a framework for learning to push and grasp policy that were learnt simultaneously using deep-RL for objects in clutter for grasping applications. Taking advantage of the synergies of combining prehensile and non-prehensile manipulation actions is of interest while yet to be comprehensively explored by the research community. Furthermore, incorporating mechanisms to choose the best type of action for a given object can increase the autonomy of the robot. As there may be residual uncertainty with visual estimation, prior works have considered using high-fidelity tactile data to finely localize an object provided with a visual estimate [24]- [26]. A known issue in this context is handling the sparsity and density of tactile and visual information respectively. 
We introduced in [26] a novel translation-invariant Quaternion filter (TIQF) for point cloud registration which we extend in this work to active vision-based and active tactile-based pose estimation. While tactile data can be collected in an uniform or randomized manner or even manually through human tele-operation, these approaches often result in longer data collection time, human intervention and degradation of the sensors due to repeated actions. Hence, active approaches wherein the robot reasons upon the next best action to reduce the collection of redundant data and overall uncertainty of the system have been proposed by Kaboli et. al. in uncluttered scenarios [1], [4], [27], [28]. Using their proposed framework, the robotic system autonomously and efficiently explores an unknown workspace to collect tactile data of the object (construct the tactile point cloud dataset), which are then clustered to determine the number of objects in the unknown workspace and estimate the location and orientation of each object. The robot strategically selects the next position in the workspace to explore, so that the total variance of the workspace can be reduced as soon as possible. Then the robot efficiently learns about the objects' physical properties, such that with a smaller number of training data, reliable observation models can be constructed using Gaussian process for stiffness, surface texture, and center of mass. Our contributions are as follows: (I) A novel graph-based method for autonomous active decluttering of the scene, enabling the robot to choose the next object to remove and the optimal action (prehensile or non-prehensile) to perform (Figure 2 (IV) Evaluation of the proposed framework on a setup with two robots coordinating to achieve the objective with extensive ablation studies. A. Problem Formulation and Proposed Framework We propose a novel framework shown in Figure 2 to robustly estimate the 6 DoF pose of a known object of interest or target object through active visuo-tactile perception in dense clutter by interactively decluttering the other objects in the workspace. Firstly, the robot deterministically declutters the workspace by using either prehensile or non-prehensile actions. This provides the flexibility to choose the action with the highest probability of success. The robot reasons upon the next object to remove to declutter the workspace with minimal actions. Secondly, upon sufficient decluttering the robot actively chooses viewpoints for vision-based pose estimation using an information gain approach. Finally, an active tactile based pose estimation is performed to correct and verify the visual pose estimate. B. Active Decluttering of the Workspace In order to interactively declutter the workspace, the physical geometry relations between various objects in clutter are autonomously inferred. We define a directed scene graph in form of a tree termed declutter graph G = (V, E) wherein the vertices in V represent the various objects O i in the scene and the edges E define the action to be used to declutter the object. The root node of the graph G is the target object O T , which we seek to localize. Furthermore, the graph explicitly encodes the next object to be removed by computing a weight signifying how much it occludes O T and the associated action (grasp or push). The steps in building the graph are depicted in Figure 3(a). From a cluttered scene, a RGB image and a depth image are taken as inputs to our framework. 
We use a state-of-the-art semantic segmentation network [29] and grasp affordance network [23] on the RGB image and depth image to extract the semantic segmentation M seg and grasp success metrics q k ∈ [0, 1] respectively. We adapted the pretrained segmentation network [29] with our own dataset consisting of different objects in clutter and their respective segmentation masks. For two objects O i , O j an edge e i j ∈ E is added if the overlap-metric is above a threshold µ o or the minimum distance between the contours d i j is below µ d . Thus, an edge e i j ∈ E is given by The Intersection Over Union (IoU) is used as overlap measure with been tuned empirically to be 0.05 and µ d to 0.5. Subsequently, an action attribute is added to each edge of the graph. Starting with the leaf vertices, for each vertex O k , we attribute the incoming edge to the vertex, e ik ∈ E in the graph with a prehensile or non-prehensile action a k to declutter according to a grasp quality value q k as: We use a mix of both types of actions as for some objects with peculiar shapes, it is challenging for the robot to perform prehensile grasp actions whereas push may be simpler. Since the goal is to declutter the scene in a deterministic manner, Equation 2 ensures that if an object can be grasped with high confidence, a grasp action is executed. If the confidence is below the threshold µ q , a push action is executed. The value of µ q has been empirically set to 0.1. The object to be removed next is inferred from the leaf nodes with the highest valued e ik defined in Equation 1. 1) Push Action: We parameterize the push action by a tuple composed of a push point and direction, i.e., a push = (p push , − → d push ). The trajectory of pushing is a straight line for a fixed predefined distance. We further assume quasistatic pushing [30] and that the object moves on a flat 2D surface. Given the segmentation mask, we compute vectors v i,k ∀i between the centroid of the bounding box of each object and the object to be pushed. The vector pointing towards the clutter is then given by v = ∑ i w i v i,k . Therein each vector is weighted with w i , such that objects that are further away, have less influence on the direction. Finally, the push direction is obtained from The push point p push is calculated as the point at the intersection of the contour of the segmentation mask and push direction − → d push placed at the centroid, as shown in Figure 3(b). This ensures the push action is aligned towards the centroid of the object. However, due to the width of the fingertips of the gripper, it is not always possible to reach this point due to surrounding clutter. To incorporate this constraint, we sample points in the vicinity of the touch point, place a bounding box in the size of the gripper and calculate the mean IoU with all objects. The size of the gripper is calculated by projecting the real world gripper length in the image plane using the transformation between the world frame W and camera frame C at the configuration where the push affordance is computed. The point leading to the smallest mean IoU is chosen as p push . Furthermore, we use the tactile sensors embedded in the gripper to detect a loss of contact during push which stops the execution and triggers a recalculation of the push action. 
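A minimal sketch of the declutter-graph rules described above is given below, using hypothetical helper functions and simplified 2-D geometry. The edge test and the grasp/push decision follow Eqs. (1) and (2) with the thresholds μ_o = 0.05, μ_d = 0.5 and μ_q = 0.1; the inverse-distance weighting and the sign convention that pushes away from the weighted clutter vector are assumptions, as the corresponding expressions are not reproduced in the text.

# Declutter-graph sketch: edge rule (Eq. 1), action attribute (Eq. 2) and a
# simplified push-direction computation. Helper names are hypothetical.
import numpy as np

MU_O, MU_D, MU_Q = 0.05, 0.5, 0.1   # thresholds quoted in the paper

def add_edge(iou_ij: float, dist_ij: float) -> bool:
    """Add edge e_ij if the masks overlap enough or their contours are close."""
    return iou_ij > MU_O or dist_ij < MU_D

def edge_action(q_k: float) -> str:
    """Grasp when the GG-CNN quality is confident enough, otherwise push."""
    return "grasp" if q_k >= MU_Q else "push"

def push_direction(centroids: np.ndarray, k: int) -> np.ndarray:
    """Unit push direction for object k, pointing away from surrounding clutter."""
    v_ik = centroids - centroids[k]            # vectors from object k to each object
    dists = np.linalg.norm(v_ik, axis=1)
    mask = dists > 0
    w = 1.0 / dists[mask]                      # farther objects weigh less (assumption)
    v = (w[:, None] * v_ik[mask]).sum(axis=0)  # points towards the clutter
    v = -v                                     # push away from the clutter (assumption)
    return v / np.linalg.norm(v)

centroids = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.2], [0.5, 0.5]])
print(add_edge(iou_ij=0.02, dist_ij=0.3), edge_action(0.04), push_direction(centroids, k=0))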
2) Grasp Action: We used the generative grasping CNN (GG-CNN) [23] for providing grasp affordances in terms of the grasp position, the orientation and the probability of success of the grasp given by the quality measure q k , which is also used in the declutter graph creation. Since we require object specific grasping, we use our semantic segmentation output to mask the depth image input to GG-CNN. Furthermore, in order to improve the object specific grasp estimates, we move the robot to a new viewpoint above the centroid of the chosen object given by the segmentation mask at a predefined height. The grasp action a grasp is defined by a grasp point p grasp , a grasp angle α grasp and an end point to place the object at p place as a tuple (p grasp , α grasp , p place ). If the grasp action fails during execution detected by a loss of contact using tactile sensors, the execution is stopped and recalculation of the grasp action is triggered using the vision sensor. C. Translation-Invariant Quaternion Filter (TIQF) for Pose Estimation We tackle the active visual and active tactile pose estimation problem via a Bayesian-filter based approach termed as translation-invariant quaternion filter (TIQF). The TIQF is a sequential filtering method for point cloud registration that is applicable to sparse as well as dense point clouds. Point cloud registration problem given known correspondences can be formalised as where s i ∈ R 3 are points in the scene cloud S extracted from sensor measurements and o i ∈ R 3 are the corresponding points belonging to the model cloud O extracted from the model mesh, R ∈ SO(3) and t ∈ R 3 are the unknown rotation and translation respectively which aligns o i to s i . We decouple the rotation and translation estimation by finding the relative vectors between a pair of corresponding points as s ji = s j − s i and o ji = o j − o i . This simplifies Equation (3) as: As Equation (5) is independent of t, this is termed as translation-invariant measurements. Given a rotation estimatê R, the translation estimatet can be found in closed form solution as:t To estimate rotation, we cast the problem into a Bayesian estimation framework. We denote the rotation estimateR in its quaternion form as the state x which needs to be identified through measurements z obtained via actions a upto time t. Upon decluttering, the objects' pose remain unaltered during active vision-based and tactile-based pose estimation as we perform guarded touch actions [31]. Hence the state estimate is provided by a recursive Bayes filter as: p(x|z 1:t , a 1:t ) = η p(z t |x, a t )p(x|z 1:t−1 , a 1:t−1 ), where η is a normalization constant. We estimate the current belief p(x|z 1:t , a 1:t ) through a Kalman filter. To derive a linear filter, we derive a linear state and measurement model. We reformulate Equation (5) using quaternions as: where is the quaternion product, x * is the conjugate of x, and s ji = {0, s ji } and o ji = {0, o ji }. As x is an unit quaternion, using the fact that x * x = x x * = 1 to get: We can rewrite Equation (9) as: where [ ] × is the skew-symmetric matrix form. Equation (11) is of the form H t x = 0 where H t is the pseudo-measurement matrix such that The Equation (11) represents a noise-free state estimation where H t solely depends on the corresponding measurements. It can be inferred that x must lie in the nullspace of H t . Similar to [32], we design a pseudo-measurement model as: wherein we enforce the pseudo-measurements z h = 0. 
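The pseudo-measurement matrix implied by Eqs. (9) to (11) can be written out explicitly as in the sketch below, assuming a scalar-first quaternion x = (x₀, x_v) and the convention s = R(x) o. The exact layout of H_t in the paper may differ, but the constraint H x = 0 for the true rotation is checked numerically here.

# Construction of one 4x4 pseudo-measurement block for a pair of
# translation-invariant vectors (o_ji from the model, s_ji from the scene):
# expanding s_ji (x) x = x (x) o_ji gives a linear constraint H x = 0.
import numpy as np

def skew(v):
    """3x3 matrix such that skew(v) @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def pseudo_measurement_block(o_ji, s_ji):
    """4x4 block H such that H @ [w, x, y, z] = 0 for the true rotation quaternion."""
    H = np.zeros((4, 4))
    H[0, 1:] = o_ji - s_ji            # scalar part: (o - s) . x_vec = 0
    H[1:, 0] = s_ji - o_ji            # vector part: x_w (s - o) + (s + o) x x_vec = 0
    H[1:, 1:] = skew(o_ji + s_ji)
    return H

# Quick self-check with a known rotation (90 deg about z), scalar-first quaternion:
q = np.array([np.cos(np.pi / 4), 0.0, 0.0, np.sin(np.pi / 4)])
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
o = np.array([0.3, -0.2, 0.5]); s = Rz @ o
print(np.allclose(pseudo_measurement_block(o, s) @ q, 0.0, atol=1e-12))   # True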
As we assume the x and z t to be Gaussian distributed and a static process model, the resulting Kalman equations are given by: wherex t−1 is the normalized mean of the state estimate at t − 1, K t is the Kalman gain andΣ x t−1 is the covariance matrix of the state at t − 1. The parameter Σ h t is the measurement uncertainty at timestep t which is state-dependent and is defined as follows [33]: where ρ is a constant which corresponds to the uncertainty of the correspondence measurements and is empirically set to 0.05 and tr refers to trace. However, Kalman filter does not preserve the constraints on the state-variables such as the unit-norm property of the quaternion in our case [33]. Hence, a common technique is to normalise the state and the associated uncertainty after each update: The rotation estimatex (quaternion) is converted to R ∈ SO(3) and used to estimate the translation according to Equation (6). Thus, with each iteration we obtain a new rotation and translation estimate which is used to transform the model. The transformed model is used to recompute correspondences and repeat the Kalman Filter update steps. We calculate the change in homogeneous transformation between iterations ∆ T IQF < ξ conv i.e., if the difference in the output pose is less than a specified threshold which in our experiments is 0.1mm and 0.1 o respectively and/or maximum number of iterations in order to check for convergence (max it T IQF = 100). D. Next Best Action for Pose Estimation 1) Next Best View (NBV) Selection: The next best view (NBV) problem seeks to find the most optimal next view point to observe an environment given previous measurements by minimising some aspect of the unobserved space through an objective function [15]. In comparison to existing approaches for NBV which is used for mapping the entire environment [15] or for object reconstruction [34], we design an object-driven active exploration method for object pose estimation. We extract the approximate centroid of the current target object from our semantic segmentation network. We capture a point cloud from an initial view that is randomly sampled within the constraints of the workspace and the robot. The semantic segmentation output is used to crop the entire point cloud around the region of interest of the target object. We discretize the resulting point cloud into a 3D occupancy grid OG with resolution g res . Each cell c i in the occupancy grid is represented by a Bernoulli random variable and has an occupancy probability p(c i ). There are two possible states for each cell with c i = 1 indicating the cell is occupied and c i = 0 for an empty cell. A common independence assumption of each cell with other cells enables the calculation of the overall entropy of the occupancy grid as the summation of the entropy of each cell. The Shannon Entropy of the entire grid can be computed as [35]: To estimate the NBV, we compute the expected entropybased information gain. As it is intractable to calculate the exact entropy from a predicted viewpoint, we perform a common simplifying approximation by predicting the expected measurementsẑ view t from a viewpoint a view t using ray-traversal algorithms. A sensor model representing our RGB-D sensor is defined with the given horizontal and vertical field of view (FoV) and resolution to cast a set of rays R = r 1 , r 2 , . . . r j for a given distance d ray in the z-axis of the sensor model coordinate frame. 
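A minimal sketch of the per-cell Shannon entropy and the log-odds ray update used in the NBV computation is given below; the hit/miss probabilities 0.7 and 0.4 are the values quoted for the ray-casting update, while the occupancy-grid layout and the actual ray traversal are omitted and the information gain is approximated simply as the entropy reduction.

# Occupancy-grid entropy and log-odds update sketch for NBV scoring.
import numpy as np

P_HIT, P_MISS = 0.7, 0.4
L_HIT, L_MISS = np.log(P_HIT / (1 - P_HIT)), np.log(P_MISS / (1 - P_MISS))

def grid_entropy(p: np.ndarray) -> float:
    """Sum of Bernoulli entropies over all cells (independence assumption)."""
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return float(-(p * np.log(p) + (1 - p) * np.log(1 - p)).sum())

def update_logodds(logodds: np.ndarray, hit_mask: np.ndarray) -> np.ndarray:
    """Standard log-odds update for the cells traversed by the cast rays."""
    return logodds + np.where(hit_mask, L_HIT, L_MISS)

logodds = np.zeros(1000)                        # unknown cells: p = 0.5
prior_H = grid_entropy(1.0 / (1.0 + np.exp(-logodds)))
traversed = np.arange(200)                      # cells crossed by the predicted rays
hits = traversed < 50                           # cells predicted to be occupied
post = logodds.copy()
post[traversed] = update_logodds(post[traversed], hits)
gain = prior_H - grid_entropy(1.0 / (1.0 + np.exp(-post)))
print(f"expected entropy reduction for this viewpoint: {gain:.1f} nats")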
A viewpoint a view ∈ A view is defined as the 3D position p view ∈ R 3 and orientation R view ∈ SO(3) of the camera frame. We perform Markov Monte-Carlo sampling of N viewpoints on the hemisphere space located above the centroid o centroid of the target object. The size of the sphere is limited by the kinematic workspace limits of the robot. The 3D position p view is sampled as a point on the hemisphere and the orientation of the view as axis of rotationê and angle θ is computed witĥ whereẐ = {0, 0, 1} is the Z-axis of the world frame as shown in Figure 4. Using the resulting angle-axis formulation (ê, θ ) or equivalent rotation matrix R view from (21), the camera is oriented towards the target object. The grid cells which are traversed by the rays are computed to be occupied or free and the respective log-odds are updated accordingly [36]: where p h and p m are the probabilities of hit and miss which are user-defined values set to 0.7 and 0.4 respectively as in [36]. The expected information gain by taking a viewpoint a view k and corresponding expected measurementẑ view t is given by the Kullback-Leibler (KL) divergence between the posterior entropy after integrating the expected measurements and the prior entropy [15]: Hence, the selected action a view * is given by: 2) Next Best Touch (NBT) Selection: Similar to next best view selection, for tactile-based pose estimation we select the action to extract measurements that would reduce the uncertainty of the estimated pose. We define an action a loc t as a ray represented by a tuple a loc t = (s, , with s as the start point and − → d the direction of the ray. The TIQF algorithm and active touch selection is initialised with minimum of 3 points, hence for initialisation the touches are sampled randomly given the visual-pose estimate. We generate the set of possible actions A loc by Monte-Carlo sampling of actions around each face of a bounding box placed on the current estimate of the object. The predicted measurement upon performing an action is estimated by ray-mesh intersection algorithm. We seek to choose the action a loc * t ∈ A loc , that maximizes the overall Information Gain measured by the Kullback-Leibler divergence between the posterior distribution p(x|ẑ 1:t ,â 1:t ) after executing actionâ t and the prior distribution p(x|z 1:t−1 , a 1:t−1 ). We denote the predicted action and associated measurement aŝ z andâ respectively. Given that the prior and posterior are multivariate Gaussian distributions from our definitions in the TIQF formulations, the KL divergence in discrete form can be computed in closed form as [37]: where d is the dimension of the quaternion state vector and d = 4 in our case,x t−1 is the normalized mean of the quaternion state estimate at t − 1, andΣ x t−1 is the covariance matrix of the state at t − 1. This enables to evaluate an exhaustive list of actions at marginal computation cost in real time without the need to prune actions or setting trade-offs with computation time as compared to literature [38], [39]. Timing analysis of our active action generation and selection approach is provided in our prior work [26]. The next best action for pose estimation is graphically depicted in Figure 4. The stop criterion for both the NBV and NBT selection is defined similarly as the convergence criteria: if the change in position and rotation between each sensor acquisition is less than a specified threshold ξ stop = {ξ stop T , ξ stop R }. In our experiments we set ξ stop III. EXPERIMENTS A. 
Experimental Setup The experimental setup shown in Figure 1 consists of a Universal Robots UR5 robot with a Robotiq 2F140 Gripper and a Franka Emika Panda robot with the standard Panda Gripper. The standard gripper pads of the Robotiq 2F140 are replaced with the tactile sensor array from XELA Robotics on the fingertips and the phalanges. The tactile sensing system consists of N T = 140 taxels that provide 3-axis force measurements on each taxel in the sensor coordinate frame. It is composed of eight tactile sensor arrays in total, where 4 tactile sensor arrays are on each finger: phalange sensor (24 taxels), outer finger (24 taxels), finger tip (6 taxels) and inner finger (16 taxels). The tactile sensors function on the principle of Hall-effect sensing and are covered with a soft, textile material. The raw data from the XELA sensor is a relative value of force measurement but it is not directly characterized to Newtons. The normal force values (along z axis) range between 36000 and 45000. We normalize the raw values received from the sensor. An Azure Kinect DK RGB-D camera is rigidly attached to the Panda Gripper with a custom designed flange which provides the vision point cloud v S. Hand-eye calibration is performed to find the transformation between the Panda Gripper and the camera frame and consequently transformed into the common world coordinate frame W [40]. A marker-based optical tracking system from Advanced Realtime Tracking 1 is placed overlooking the workspace which provides the ground-truth pose of the target objects only. The markers are placed only on the target object. We used 12 objects in total: olive oil bottle, cleaner, spray, transparent wineglass, shampoo, transparent box, sponge, can, black box, screwdriver, duster, marker as shown in Figure 5. The objects have been chosen according to the following criteria: varying shape between simple (e.g. can) to complex (e.g. screwdriver), varying degrees of transparency (highly transparent box to highly opaque black box), varying center of mass (e.g. shampoo) and varying degree of deformability (e.g. sponge). Some of the objects such as the transparent box and wineglass are intentionally chosen to test the robustness of the framework. The background has been intentionally chosen to be plain white to increase the visual perceptual difficulty of the transparent objects. Four objects i.e., olive oil bottle, spray, cleaner and transparent wine glass are used as the target object whose pose needs to be accurately estimated while the other 8 objects are used to clutter the workspace. A software architecture developed in ROS is used for the data communication between the two robots, camera, and tactile sensors. For the implementation of the finite state machine, we used the Octomap library [36] for the NBV calculations. The robot experiments were executed on a workstation running Ubuntu 18.04 with 8 core Intel i7-8550U CPU @ 1.80GHz and 16 GB RAM. The maximum allowed speeds for the UR5 and Panda were 75 mm/s and 100 mm/s respectively for safety constraints. The fine tuning of the semantic segmentation network [29] employed NVidia GeForce RTX 2080 Super GPU with 8GB RAM. No further training of the grasp affordance network [23] was performed. B. 
Robot Experiment Results Given the estimated pose R est , t est and the ground truth poses R gt , t gt , we employ the model-free translation and rotation error metric and the model-dependent Average Distance of model points with Indistinguishable views metric (ADI) [16] 1 https://www.ar-tracking.com/en for evaluation. The translation and rotation error is defined as follows: where, ||x|| 2 is the L 2 norm of x. As objects having an axis of symmetry can produce infinite rotational solutions, we only report the L 2 norm of translation error for all the objects. Instead, we use the ADI metric as it is not affected by symmetric objects. The ADI metric is defined as follows: for all points p 1 , p 2 ∈ and M is the total number of points in O. For both the metrics, lower values signify higher accuracy. Considering the implementation of robot actions, we used both vision and tactile feedback for the push, grasp and touch actions. For instance, given a push or grasp action, the tactile readings are continuously sampled at 40 Hz to detect possible loss of contact during pushing or grasping. We use a constant grasping force of 5N provided by the Robotiq 2F140 gripper. For the push actions, the contacted taxels' normalized raw force values are monitored such that they are constantly above a predefined threshold i.e., f r > τ p (set to 1.06). On the other hand, for touch actions for localization, we use guarded motions so that the robot does not accidentally push or topple other objects or the target object. As soon as the normalized force value measured on any of the taxels exceeds the threshold f r > τ f (set to 1.02), the motion is stopped and the 3D locations of the excited taxels are recorded as the tactile point cloud t S. We compare our active visuo-tactile pose estimation by decluttering the scene with 3 baselines: (a) static vision without decluttering, (b) active vision without decluttering and (c) active vision with decluttering. This ablation study is performed to evaluate the importance of each part of the framework. In all cases, the pose estimation is performed using our TIQF algorithm with the same initial conditions and scene segmentation to ensure uniformity. We repeated all the baseline experiments and our proposed framework twice for each target object by randomly changing the scene clutter each time. In total, we performed 32 experimental trials including baselines. The results for the experiments are shown in Table I. Figure 6 shows the accuracy of the pose estimation using L 2 norm of the translation error and ADI for the active vision and active touch-based pose estimation. A typical run of the whole framework consisting of 4 objects to declutter, followed with 3 different viewpoints for active vision and 4 touch-acquisitions respectively takes around 795s, while 87% of the time is used for robot actions alone. We also report an overall success rate of 83.3% for grasp actions and 70% for push actions for the decluttering phase. C. Discussion As seen from Table I, the ADI metric and the L 2 norm decreases and accuracy improves from static vision to our proposed active visuo-tactile estimation with decluttering approach. We note approximately 44.7% reduction in median ADI error with active vision compared to static vision. This corroborates with prior work [13], wherein selecting viewpoints actively can improve accuracy over static viewpoints. 
Moreover, demonstrating the validity of our proposed decluttering strategy, we see a reduction of 53% in median ADI error before and after decluttering. On the other hand, active visionbased pose estimation on a scene without clutter may still have residual uncertainty. This is demonstrated by the improved performance of 35.6% in median ADI using active tactilebased pose estimation. We intentionally used a transparent wineglass as a target object which is very challenging for pose estimation from visual perception as it is nearly invisible to a time-of-flight (ToF) depth sensor. This is seen by the relatively higher errors in Table I (Object #4) in comparison to other target objects. However, visuo-tactile based estimation using TIQF reduces the ADI error by nearly 85% compared to active vision after decluttering. The errors are consistent with other target objects, highlighting the strength of tactile sensing for challenging objects for visual modality. Furthermore, the ability of the TIQF to handle dense and sparse clouds is shown by the improved accuracy in vision and tactile-based estimation respectively over each action as shown in Figure 6. The TIQF converges to a stable pose estimate with < 1cm average error within 4 touches. In Figure 6, we note that a change in modality from active vision to active tactile during interactive perception helps to improve the accuracy of pose estimation. IV. CONCLUSIONS In this paper, we proposed an active visuo-tactile pose estimation framework for objects in dense clutter. We proposed a novel declutter graph based approach for scene representation for decluttering which allows to select the next object to remove and provides the optimal action to perform. The declutter scene graph further encodes two types of actions: push and grasp action. Furthermore, we extended our novel TIQF for active vision based and active tactile based pose estimation. We performed an object-driven exploration strategy for active viewpoint and active touch point selection. In the evaluation, we demonstrated that our proposed method significantly improves the accuracy of pose estimation over monomodal baselines. We also demonstrated the importance of using a secondary modality to correct or verify the estimation from a first modality. ACKNOWLEDGMENT The video can be found in: https://youtu.be/sjqWRFLL2Xw
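For reference, the evaluation metrics of Section III-B can be sketched as follows; the ADI form used here is the usual closest-point definition from the pose-estimation literature and is assumed to match the partially garbled equation in the text.

# Pose-error metrics: L2 translation error and ADI (mean closest-point distance
# between the model cloud transformed by the estimated and ground-truth poses).
import numpy as np
from scipy.spatial import cKDTree

def translation_error(t_est: np.ndarray, t_gt: np.ndarray) -> float:
    return float(np.linalg.norm(t_est - t_gt))

def adi(model_pts, R_est, t_est, R_gt, t_gt) -> float:
    """Average distance with indistinguishable views (symmetry-tolerant metric)."""
    est = model_pts @ R_est.T + t_est
    gt = model_pts @ R_gt.T + t_gt
    d, _ = cKDTree(est).query(gt, k=1)        # nearest estimated point per GT point
    return float(d.mean())

# Toy usage: random model cloud, identical rotation, small translation offset.
rng = np.random.default_rng(3)
pts = rng.uniform(-0.05, 0.05, size=(500, 3))
R = np.eye(3); t_gt = np.zeros(3); t_est = np.array([0.004, -0.002, 0.001])
print(translation_error(t_est, t_gt), adi(pts, R, t_est, R, t_gt))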
2022-02-07T02:15:13.509Z
2022-02-04T00:00:00.000
{ "year": 2022, "sha1": "1ef308f7e7e82db36bda65593756bd56f20714e5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "1ef308f7e7e82db36bda65593756bd56f20714e5", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
6959054
pes2o/s2orc
v3-fos-license
Mitophagy: a balance regulator of NLRP3 inflammasome activation The NLRP3 inflammasome is activated by a variety of external or host-derived stimuli and its activation initiates an inflammatory response through caspase-1 activation, resulting in inflammatory cytokine IL-1β maturation and secretion. The NLRP3 inflammasome activation is a kind of innate immune response, most likely mediated by myeloid cells acting as a host defense mechanism. However, if this activation is not properly regulated, excessive inflammation induced by overactivated NLRP3 inflammasome can be detrimental to the host, causing tissue damage and organ dysfunction, eventually causing several diseases. Previous studies have suggested that mitochondrial damage may be a cause of NLRP3 inflammasome activation and autophagy, which is a conserved self-degradation process that negatively regulates NLRP3 inflammasome activation. Recently, mitochondria-selective autophagy, termed mitophagy, has emerged as a central player for maintaining mitochondrial homeostasis through the elimination of damaged mitochondria, leading to the prevention of hyperinflammation triggered by NLRP3 inflammasome activation. In this review, we will first focus on the molecular mechanisms of NLRP3 inflammasome activation and NLRP3 inflammasome-related diseases. We will then discuss autophagy, especially mitophagy, as a negative regulator of NLPP3 inflammasome activation by examining recent advances in research. [BMB Reports 2016; 49(10): 529-535] INTRODUCTION The immune system is a host protection mechanism against invading pathogens, such as viruses and bacteria. Macrophages, the front line defenders who recognize invading pathogens, not only kill pathogens by phagocytosis, but also initiate inflammation producing inflammatory cytokines upon infections. The maturation and secretion of IL-1, one of the strongest pro-inflammatory cytokines, is enhanced by the inflammasome in macrophages, leading to inflammation and cell death (1,2). Inflammasomes are molecular platforms that trigger activation of caspase-1, leading to pro-inflammatory cytokine maturation. One of the best characterized inflammasome is the NLRP3 inflammasome. Even though NLRP3 inflammasome activation is important for protecting cells from pathogenic microbes, excessive NLRP3 inflammasome may cause hyperinflammation resulting in tissue damage and organ failure. NLRP3 inflammasome is also activated by danger signals released from stressed cells and dysregulation of NLRP3 inflammasome activation causes many diseases, such as neurodegenerative diseases, metabolic diseases and sepsis. Therefore, mechanisms to control NLRP3 inflammasome activation are necessary for health. Upon bacterial infection, innate immune cells, such as macrophages and neutrophils, take up bacteria and generate reactive oxygen species (ROS) (3)(4)(5). The 2 main sources of ROS are NADPH oxidase-mediated ROS and mitochondriaderived ROS in innate immune cells (6)(7)(8). Some studies have proposed that mitochondrial dysfunction is closely associated with NLRP3 inflammasome activation. Infection causes mitochondrial damage through an unknown mechanism and the damaged mitochondria release mitochondrial DNA (mtDNA) and mitochondrial reactive ROS (mtROS) and are thought to work as danger signals (9,10). Cells have a regulatory mechanism, 'mitophagy', a mitochondria-selective autophagic process to eliminate damaged or unwanted mitochondria, to maintain mitochondrial homeostasis against stress. 
This review will discuss how the balance between pro-inflammatory response and anti-inflammatory response is maintained in cells focusing on NLRP3 inflammasome activation and mitophagy. combat foreign organisms entering the body that are able to cause disease (11). Most foreign organisms express various pathogen-associated molecular patterns (PAMPs) in their cell wall or on their cell surface (12). Therefore, it is possible that a host can distinguish between self and non-self by recognizing PAMPs. Moreover, under stressed conditions, cells release danger signals called danger-associated molecular patterns (DAMPs) to notify their urgent situation to the immune system. PAMPs and DAMPs are recognized by innate immune receptors pattern-recognition receptors (PRRs), such as Toll-like receptors (TLRs) and nucleotide-binding and oligomerization domain (NOD)-like receptors (NLRs) (13). Activation of PRRs lead to Nf-B activation and inflammatory cytokine production. Many studies have shown that the inflammasome is a key regulator of inflammation (1,(14)(15)(16)(17). Most inflammasomes contain NLR proteins and the NLR family can be classified into 3 subtypes based on their domain structures: the NODs (NOD1-5 and CIITA), the NLRPs (NLRP1-14) and the IPAF (IPAF also known as NLRC4, and NAIP) subtypes (18). The NLR family commonly contains a central nucleotide-binding and oligomerization (NACHT) domain. Most NLRs also contain C-terminal leucine-rich repeats (LRRs) and N-terminal caspase recruitment domains (CARD) or pyrin domains (PYD). NACHT plays an essential role in the activation and formation of signaling complexes and LRR mediates ligand sensing, while CARD and PYD function in homotypic protein-protein interactions for downstream events (18,19). A non-NLR inflammasome member, absent in melanoma-2 (AIM2), was later identified that can form an inflammasome composed of AIM2, ASC and caspase-1. The function and mechanism of many NLR family members remain poorly understood, but some are well-characterized. For example, NOD1 and NOD2 are receptors for bacterial peptidoglycan fragments. NOD1 and NOD2 recognize D-glutamyl-meso-diaminopimelic acid (DAP) and muramyl dipeptide (MDP), respectively. Upon sensing their ligand, NOD1 and NOD2 oligomerize and recruit RIP2 via CARD-CARD interactions, triggering an inflammatory response (20, 21). CIITA uniquely acts as a transcription factor playing a key role in the regulation of class II MHC genes (22). The IPAF inflammasome is activated in response to gramnegative bacteria containing type III or IV secretion systems, such as Psudomonas aeruginosa, Salmonella typhimurium and Shigella flexneri (23-25). The AIM2 inflammasome is activated by sensing cytosolic double-stranded DNA (dsDNA) through the HIN-200 domain of AIM2 (26, 27). As is demonstrated herein, each inflammasome is activated by different stimuli, but more studies concerning other NLR family members are required. In this review, the NLRP3 inflammasome, which is the most characterized of the inflammasome members, will be discussed in detail. NLRP3 INFLAMMASOME The NLRP3 inflammasome is activated in response to abundant stimuli and 2 distinct steps are required for this activation. First, Nuclear factor-B (NF-B) activation by TLR ligands increases NLRP3 and pro-1L-1; this step is called 'priming'. Second, activating stimuli derived from microbes or the host induce the assembly of NLRP3 inflammasome components (17). 
The NLRP3 inflammasome is composed of NLRP3, apoptosis-associated speck-like protein containing a CARD (ASC) and pro-caspase-1. ASC is an inflammasome adaptor containing an N-terminal PYD and a C-terminal CARD. The NLRP3 inflammasome is assembled by the following steps: the PYD of NLRP3 oligomerizes with ASC through PYD-PYD interactions. Subsequently, pro-caspase-1 is recruited and interacts with ASC through CARD-CARD interactions (28). After the formation of the NLRP3 inflammasome, pro-caspase-1 is autocleaved to active caspase-1, and the active caspase-1 matures cytokines, such as IL-1β, to their bioactive and secreted forms (17). Currently, various sterile DAMP signals and pathogens, including mitochondrial ROS, mitochondrial DNA, potassium efflux, MDP, monosodium urate (MSU), cholesterol crystals, cathepsins, influenza virus, Salmonella typhimurium, Mycobacterium tuberculosis and others, are known to activate the NLRP3 inflammasome (29). Compared to NLRP3 inflammasome activators, negative regulators of the NLRP3 inflammasome are relatively less known. Recent studies suggest that nitric oxide (NO), Ca²⁺ and cyclic AMP negatively regulate the NLRP3 inflammasome, but the detailed mechanisms are unclear (30, 31). If upregulated NLRP3 inflammasome activation is not downregulated, inflammation will be persistently induced by the NLRP3 inflammasome and contribute to a diverse array of diseases.

NLRP3 INFLAMMASOME IN DISEASE; NEURODEGENERATIVE DISEASE

Dysregulation of NLRP3 inflammasome activation is observed in neurodegenerative disease. Alzheimer's disease (AD), a representative neurodegenerative disease, is characterized by amyloid-β plaque accumulation. Halle et al., using in vitro and in vivo mouse models, identified that amyloid-β causes lysosomal damage releasing cathepsin B, and the cathepsin B activated the NLRP3 inflammasome (32). In addition to the mouse study, recently performed human studies showed that increased active caspase-1 expression is observed in the brains of AD patients (33). It has been reported that not only AD but also Parkinson's disease (PD) is associated with the NLRP3 inflammasome. Accumulation of Lewy bodies (LB) formed by α-synuclein (αSyn) aggregation is a main pathogenesis of PD. A recent study revealed that αSyn induces synthesis of pro-IL-1β through an interaction with TLR2 and activates the NLRP3 inflammasome, resulting in caspase-1 activation and IL-1β maturation in human primary monocytes (34). In contrast, it is reported that mitochondrial dysfunction may lead to neurodegenerative disease (35). These studies indicate that both the NLRP3 inflammasome and mitochondrial dysfunction are involved in neurodegenerative disease, though more studies are required to clarify the relationship among the NLRP3 inflammasome, mitochondria and neurodegenerative diseases.

NLRP3 INFLAMMASOME IN DISEASE; METABOLIC DISORDER

It has been known that a high-fat diet (HFD) can induce type 2 diabetes (T2D) and obesity, and chronic inflammation is thought to contribute to T2D and obesity (36). As such, there is a possibility that the NLRP3 inflammasome, which can lead to chronic inflammation, is related to these metabolic diseases. Recently, Lee et al. observed the activated NLRP3 inflammasome in T2D patients and found that mitochondrial ROS is involved in this phenomenon (37). Other groups showed that saturated fatty acid-induced inflammation causes mitochondrial impairment in adipocytes, while saturated fatty acids can upregulate NLRP3 inflammasome activation (38, 39).
Also, higher levels of NLRP3 inflammasome components were detected in obese patients, and active caspase-1 was increased with obesity development in adipose tissues (40). Interestingly, caspase-1-deficient mice gained less weight and had less adipose tissue formation in the HFD-induced obesity mouse model (41). These data strongly suggest that the NLRP3 inflammasome and mitochondrial dysfunction are closely linked to metabolic disorders.

NLRP3 INFLAMMASOME IN DISEASE; SEPSIS

Sepsis is a life-threatening systemic inflammatory condition caused by a host immune response to microbial infection. Normally, an increased inflammatory response to infection should be resolved in a timely manner, but when it gets out of control, exaggerated activation of the inflammasome produces excessive inflammatory cytokines leading to sepsis. A few studies have provided evidence that mitochondrial dysfunction is associated with sepsis. Damaged mitochondria of cells treated with NLRP3 inflammasome activators release danger signals, such as mtROS and mtDNA, and these signals can activate the NLRP3 inflammasome (9, 10). Supporting this, increased mtDNA and inflammatory cytokines were detected in septic plasma samples (42). This evidence shows that both the NLRP3 inflammasome and mitochondria contribute to sepsis.

AUTOPHAGY AND NLRP3 INFLAMMASOME

As described above, there is a lot of evidence to suggest that damaged mitochondria contribute to disease, including NLRP3 inflammasome-related diseases. Therefore, mitophagy, a process that removes damaged mitochondria, can be considered a key process to regulate NLRP3 inflammasome activation. Despite its importance, mitophagy has in recent years mostly been studied in mammalian cell lines overexpressing mitochondria-related genes. To investigate its mechanism and function accurately, more in vitro and in vivo studies are required. Knowledge of macroautophagy (hereafter autophagy) is a prerequisite to understanding mitophagy. Autophagy is a self-degradation process that recycles cellular components under stressed conditions, such as a lack of nutrients. Autophagy was regarded as a nonselective process in the past; however, accumulating evidence shows that it can be selective, removing specific proteins and damaged organelles, such as mitochondria, under certain conditions (51-53). Autophagy consists of 4 sequential steps: initiation of autophagosome formation, elongation and closure of the autophagic membrane, fusion between the autophagosome and the lysosome, and degradation. In many cellular cases, autophagic induction is regulated by the mammalian target of rapamycin (mTOR) and the AMP-activated protein kinase (AMPK). mTOR inactivates unc-51-like kinase 1/2 (ULK1/2) by phosphorylation under basal conditions, but mTOR is inhibited under stress conditions, allowing ULK1/2 to be modified to their active forms by AMPK phosphorylation, initiating autophagy. Next, the VPS34 lipid kinase complex and phosphatidylinositol 3-phosphate (PI3P) are recruited to complete autophagosome formation. In the step of elongation and closure of the autophagic membrane, the ubiquitin-like proteins ATG12 and ATG5 are conjugated, and the conjugate then forms an E3-like ligase complex with ATG16L1. After that, ATG8 (LC3 and GABARAP subfamilies) is conjugated to the lipid phosphatidylethanolamine by the ATG16L1 ligase complex; in the case of LC3, the LC3-I form changes to the LC3-II form, which is often used as an autophagosome marker (44, 54).
The VPS34-Beclin 1 complex containing ultraviolet radiation resistance associated gene protein (UVRAG) is involved in the step of fusion between the autophagosome and lysosome. Once the autophagosome and lysosome are fused into the autolysosome, the cargo in the autolysosome is degraded through lysosomal hydrolase activity (55). When there are certain proteins or organelles to be eliminated, they are ubiquitinated by E3 ubiquitin ligases and the ubiquitin chains serve as 'eat-me' signals. Then, adaptor proteins, such as p62 and NBR1, which contain a ubiquitin-associated UBA domain and an LC3-interacting region (LIR), capture the ubiquitinated cargos and recruit LC3-II to proceed to selective autophagy (54). Accumulating evidence has shown that autophagy is involved in the innate immune response and NLRP3 inflammasome activation. Inhibition of autophagy by 3-MA, a PI3K inhibitor, increased mtROS production, resulting in NLRP3-dependent IL-1β secretion in the absence of inflammasome stimuli (56, 57). Similarly, 3-MA treatment blocking autophagosome formation in Mycobacterium tuberculosis-infected cells led to increased IL-1β secretion (58). Consistent with these data, bone marrow-derived macrophages from LC3 knockout mice and Beclin 1 knockout mice had more damaged mitochondria producing mtROS, and had more caspase-1 activation leading to increased IL-1β secretion upon NLRP3 inflammasome activation (9). Also, Atg16L1-deficient mice were more susceptible to dextran sulfate sodium-induced colitis as a result of enhanced inflammasome activation (58). These data show that autophagy negatively regulates inflammasome activation. Moreover, as damaged mitochondria are detected at higher levels under autophagy inhibition (9, 56, 57), it is plausible to assume that the elimination of damaged mitochondria by autophagy is important to prevent excessive NLRP3 inflammasome activation.

MITOPHAGY AND NLRP3 INFLAMMASOME

As mentioned above, mitochondrial damage causes NLRP3 inflammasome activation, and mitophagy can remove damaged mitochondria selectively (Fig. 1). Since mitochondria evolved from ancient bacteria, a comparison of the differences between mitophagy and intracellular bacteria-selective autophagy, referred to as xenophagy, can provide valuable information to better understand mitophagy. Distinguishing between self and non-self is achieved by different 'eat-me' signals and cargo receptors (52, 59). To date, at least 4 cargo receptors have been studied in detail: NDP52, p62, NBR1 and Optineurin (59). Invading bacteria can damage the host vacuoles in which they replicate, exposing glycans from the host vacuoles, and the glycans are recognized by a danger receptor, Galectin-8. Galectin-8 functions as an 'eat-me' signal and interacts with NDP52, inducing xenophagy by recruiting LC3 attached to phagophores (60). NDP52 can also directly bind to ubiquitinated bacteria and recruit the autophagic machinery. Similarly, p62, NBR1 and Optineurin serve to connect ubiquitinated bacteria with LC3, facilitating autophagy. How the engulfed bacteria are ubiquitinated is still largely unknown, but LRSAM1 has been identified as an E3 ubiquitin ligase that ubiquitinates Salmonella typhimurium, recruiting cargo receptors (61). Another E3 ubiquitin ligase, PARKIN, has also been reported to be involved in xenophagy. PARKIN-deficient mice and flies were susceptible to intracellular bacterial infections and induced less autophagy, indicating that PARKIN plays a crucial role in xenophagy (62).
Interestingly, PARKIN is the best-known E3 ubiquitin ligase in mitophagy. The PINK-PARKIN pathway is important in the mitophagic process (63, 64). PINK1 is a serine/threonine kinase containing a mitochondrial targeting sequence in the N-terminus. Normally, PINK1 is imported into the mitochondria, anchored at the inner mitochondrial membrane (IMM), and then degraded by mitochondrial proteases. However, when mitochondria are damaged, PINK1 cannot be imported to the IMM, but instead accumulates on the outer mitochondrial membrane (OMM) (65). The accumulated PINK1 recruits the cytosolic E3 ubiquitin ligase PARKIN and activates PARKIN by phosphorylation. Activated PARKIN ubiquitinates certain mitochondrial proteins or PARKIN itself. Then, the mitochondrial ubiquitination acts as an 'eat-me' signal and can be recognized by the adaptor p62 through its ubiquitin binding domain. Since p62 also has an LC3-interacting motif, p62 binds with LC3 on the autophagosome and facilitates the degradation of damaged mitochondria (44). However, not all mitophagy is processed by the PINK-PARKIN pathway. Since the mitochondrial proteins BCL2/adenovirus E1B 19 kDa interacting protein 3 (BNIP3), NIP3-like protein X (NIX/BNIP3L) and FUN14 domain containing 1 (FUNDC1) can directly interact with LC3-II, it is thought that other mechanisms exist to enhance mitophagy (66-69). So far we have discussed how mitochondria play a crucial role in NLRP3 inflammasome activation; however, not many studies have been performed concerning the detailed molecular mechanisms of mitophagy. Recently, some studies have shown how mitophagy is related to the control of NLRP3 inflammasome activation. Zhong et al. found that the cargo receptor p62 is increased by NF-κB signaling, and that the increased p62 is translocated to damaged mitochondria, which are ubiquitinated by PARKIN, and induces mitophagy (70). Consistent with this finding, Kim et al. also observed that p62 is increased and translocated to damaged mitochondria in NLRP3 inflammasome-activated cells. Not only p62 but also the autophagic inducer Sestrin 2 (SESN2) is increased by NO, is translocated to damaged mitochondria, and protects cells from hyperinflammation by inducing mitophagy. Additionally, SESN2 increases ULK1 stability, leading to the initiation of autophagy (71). These studies indicate that mitophagy is one of the self-limiting systems that protect cells from excessive inflammation.

CONCLUSION

In the past decade, there have been great advances in the understanding of the NLRP3 inflammasome and autophagy. However, the link between the NLRP3 inflammasome and autophagy, especially mitophagy, remains poorly understood. Nevertheless, we have discussed the role of mitophagy as a negative regulator of aberrant NLRP3 inflammasome activation, while detailed mechanisms remain largely unknown. There are questions that remain to be answered. Mitochondrial damage can activate the NLRP3 inflammasome, but inflammatory cytokines can also trigger mitochondrial damage, so it is ambiguous as to which one comes first. Since NLRP3 inflammasome activation also has beneficial effects, how to regulate its activation appropriately should be considered for clinical application. Also, how damaged mitochondria serve as 'eat-me' signals in cases where PINK1 is not involved should be investigated. Further studies are required for a better understanding of the molecular mechanisms of mitophagy, but mitochondria remain possible therapeutic targets to protect cells from aberrant inflammation.
The Muon Scattering Experiment (MUSE) at PSI and the proton radius puzzle

The unexplained large discrepancy of seven standard deviations between the proton charge radius measurements with the muonic hydrogen Lamb shift and the determinations from elastic electron scattering and the Lamb shift in regular hydrogen is known as the proton radius puzzle. Suggested solutions of the puzzle range from possible errors in the experiments through unexpectedly large hadronic physics effects to new physics beyond the Standard Model. A new approach to verify the radius discrepancy in a systematic manner will be pursued with the Muon Scattering Experiment (MUSE) at PSI. The experiment aims to compare elastic cross sections, the proton elastic form factors, and the extracted proton charge radius with scattering of electrons and muons of either charge and under identical conditions. The difference in the observed radius will be probed with high precision to verify the discrepancy. An overview of the experiment and the current status will be presented.

Introduction

The proton charge radius is an important quantity characterizing the proton charge distribution associated with the internal structure of the proton. The root-mean-square (rms) radius of the proton, r_p, is defined as the square root of the integral of the proton's charge density in the rest frame weighted with r², from the center to infinity. In relativistic Quantum Mechanics, the rms radius is equivalent to the first derivative of the proton charge form factor versus four-momentum transfer squared, Q², in the static limit Q² → 0. Traditionally, the proton rms radius has been measured with elastic electron-proton scattering since the 1950's by investigating how the form factor changes near Q² = 0. Hofstadter et al. already got it right to a level of uncertainty that today marks the size of the proton radius puzzle [1]. Over the years, the proton charge radius from electron scattering experiments has stabilized around 0.87-0.88 fm, with an average of r_p = (0.877 ± 0.006) fm from the two latest published values from electron scattering [2,3]. The proton rms charge radius also contributes to the Lamb shift of the 2S-2P atomic transitions of the hydrogen atom, due to the overlap of the atomic shells with the finite-sized nuclear center. The proton charge radius has been extracted from the Lamb shift of 2S-2P transitions in regular hydrogen in the framework of bound-state Quantum Electrodynamics (QED). The CODATA2010 value [7] of r_p = (0.8775 ± 0.0051) fm is based on electron scattering and Lamb shift measurements from regular hydrogen. Both the Lamb shift and electron scattering values have similar precision and are consistent with each other. More recently, it also became feasible to use muonic hydrogen for Lamb shift measurements. Here the finite-size effect on the Lamb shift is strongly enhanced compared to regular hydrogen due to the higher mass of the muon and the tighter orbits of its atomic states. The resulting radius measurement is an order of magnitude more precise. In the first published paper [4], the proton charge radius was determined as r_p = (0.84184 ± 0.00067) fm, deviating from the CODATA2006 value [5] by five standard deviations.
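For reference, the form factor slope relation invoked above can be written out explicitly; this is the standard low-Q² expansion found in textbooks, restated here for convenience:

\langle r_p^2 \rangle = -6 \, \left. \frac{\mathrm{d}G_E(Q^2)}{\mathrm{d}Q^2} \right|_{Q^2 = 0}, \qquad G_E(Q^2) = 1 - \frac{\langle r_p^2 \rangle}{6}\, Q^2 + \mathcal{O}(Q^4),

so that the roughly 4% radius discrepancy discussed below corresponds to roughly an 8% difference in the slope of G_E at the origin.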
A follow-up work [6] evaluated additional transitions between substates of the hyperfine multiplets and found r_p = (0.84087 ± 0.00039) fm, consistent with the first value and with errors reduced even further. The discrepancy with the meanwhile more precise CODATA2010 [7] value has increased to over seven standard deviations, as much as 4% of the value. This unsatisfactory situation is known as the proton radius puzzle, which has been discussed in detail in a recent review [8]. Standard theory has been unable to explain this discrepancy in the given frameworks. The biggest possible effects due to conventional physics are related to the muon vacuum polarization [9], which has been questioned [10], and to the proton polarizability contribution to the effect of two-photon exchange, which could be different in size for ep and μp interactions [11-15]. It is controversial whether the size of the proton polarizability effect can be large enough to explain the radius discrepancy. The puzzle may be hidden in the bound-state physics alone, such as in the size of the Rydberg constant, which, on the other hand, is known as one of the best-measured quantities in physics. Finally, it may be an effect that generally distinguishes electrons and muons. Such new physics scenarios have been considered by various models which postulate new unobserved, weakly coupled light bosons in the MeV mass region [16-21]. A light boson could influence the charge form factor and its curvature at very low Q². The different behavior of the muon and electron could be achieved through fine tuning [19,20] or with explicit breaking of lepton flavor universality [21]. In a constrained parameter space, such bosons could in principle be used to simultaneously explain the muon anomalous magnetic moment and to also provide a link to dark matter. Whatever scenario turns out to be true, some of the implications can be studied with scattering experiments. If the puzzle is solely present in the bound-state physics of hydrogen, then muon and electron scattering should both give consistent values for the proton radius from form factor measurements. If new physics fundamentally distinguishes how muons and electrons interact with the proton, e.g. through a light boson that preferably interacts with the muon, then a comparison of electron and muon scattering should reveal a difference in the extracted radii from both probes. If the proton polarizability effect is responsible, different sizes of two-photon exchange effects would be observable for e±p and μ±p elastic scattering.
The MUSE Experiment

In order to shed light on this puzzle and its possible resolution, the MUSE collaboration has proposed an experiment at the Paul Scherrer Institute (PSI) to verify the consistency of the proton radius extractions from elastic scattering with electronic and muonic probes of either charge in a simultaneous measurement [22]. MUSE has been approved by the PAC since the beginning of 2013. The idea is to use a common beam of e and μ of either charge to perform elastic scattering from the proton at low momentum transfer. Comparisons of ep and μp elastic scattering will be made for the obtained yield, cross section, charge form factor, and extracted charge radius from the form factor slope with Q² for each of the four probes e±, μ±. The possibility to reverse the charge sign of the beam allows studying any two-photon exchange effects, which are controversial [13-15]. The MUSE experiment directly tests the most interesting possible explanations of the proton radius puzzle, that there are differences in the μp and ep interactions. The MUSE setup is shown schematically in the left half of Fig. 1. PSI produces a mixed secondary beam of pions, muons and electrons. The 500 MeV high-power proton cyclotron operates in pulsed mode at 50 MHz bunch frequency, i.e. every 20 ns a short pulse of secondary particles is emitted from the production target. The πM1 channel selects momentum and charge of the particles and delivers the charged particle beam to the experiment through a double-C type magnetic chicane, where it is collimated at an intermediate dispersive point to limit the maximum intensity to 5 MHz. Quadrupole magnets are used to produce a beam focus of 1-2 cm in diameter at the target. The secondary beam is cleanly separated by measuring the time of flight between the RF signal and the arrival time using scintillating fiber hodoscopes. Measurement of the beam position in the dispersive plane between the dipole chicane magnets provides the momentum of the beam particle to better than 1% precision. For selected momenta of 115, 153, and 210 MeV/c, the arrival times of pions, muons and electrons differ by at least 4 ns each, which allows rejecting pions at the trigger level. For each event in the experiment, triggered by the scattered particle in combination with the RF time and the beam scintillating fiber hodoscopes, the beam particle species will be known very cleanly. At the center of the experiment is a liquid hydrogen target. The length of the target cell will be 4 cm, optimized to balance the expected yield with the tolerable resolution due to multiple scattering. The main detector for the scattered particles will be based on wire chambers for tracking and fast time-of-flight scintillator walls for trigger, timing and particle identification. Simulations have shown that the time-of-flight technique is sufficient at the low momenta for all purposes and Cerenkov detectors are not needed for particle identification. While the time resolution from the beam hodoscopes is at the ns level, the scintillators of the main detector have a time resolution of 50 ps. The addition of a fast quartz Cerenkov counter in front of the target allows the time of flight for the scattered particle to be measured to 100 ps. This identifies the scattered particle type and eliminates most of the in-flight decay background.
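To give a feel for the time-of-flight separation quoted above, the sketch below computes arrival times for e/μ/π at the three beam momenta. The flight-path length L is an assumption made purely for illustration (the text above does not quote the πM1 path length), so the absolute numbers are indicative only; in practice the separations also fold in modulo the 20 ns RF period.

import math

C = 0.299792458                                    # speed of light in m/ns
MASSES = {"e": 0.511, "mu": 105.66, "pi": 139.57}  # rest masses in MeV/c^2

def tof_ns(p_mev: float, mass_mev: float, path_m: float) -> float:
    """Time of flight t = L / (beta * c), with beta = p / sqrt(p^2 + m^2)."""
    beta = p_mev / math.hypot(p_mev, mass_mev)
    return path_m / (beta * C)

L = 21.0  # m, assumed flight path from production target to detector
for p in (115.0, 153.0, 210.0):
    t = {k: tof_ns(p, m, L) for k, m in MASSES.items()}
    print(f"p = {p:5.1f} MeV/c: "
          + ", ".join(f"t_{k} = {v:6.2f} ns" for k, v in t.items())
          + f"  (mu-e gap: {t['mu'] - t['e']:5.2f} ns)")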
For the cross section measurements, kinematic information needs to be sufficiently precise and accurate. Since the beam has an envelope of the order of a few cm, allowing for angle variations of the order of 100 mrad, a key instrument will be a beam tracker with high spatial resolution that can provide information on the angle of the incoming beam particle to better than 10 mrad, event by event. The beam tracker is exposed to the entire beam flux of 5 MHz, while the experiment trigger rate will be of the order of 2 kHz. Therefore, Gas Electron Multiplier detectors will be used, which have become available from the recently decommissioned OLYMPUS experiment at DESY [23]. Three GEM elements are arranged as a tracking telescope in front of the target region, providing a 3D beam tomography. Event by event, the measurement of the scattering angle with the main detector will be in reference to the incoming track determined by the GEM beam telescope. The precision and accuracy of the MUSE experiment has been designed to verify the observed discrepancy in the charge radius, if confirmed, with about four standard deviations of significance. The right half of Fig. 1 shows the sensitivity of MUSE to the difference of proton charge radii between electron and muon probes, all relative to the first muonic hydrogen data point [4].

Figure 1. Left: Schematic view of the MUSE setup, consisting of a mixed, collimated e/μ/π beam, scintillating-fiber hodoscopes for time-of-flight and position, GEM detectors for precise beam position, liquid hydrogen target, as well as drift tube chambers and scintillator walls to detect scattered particles. Right: Sensitivity of MUSE to the difference of proton charge radii between electron and muon probes, all relative to the Pohl data point [4].
ARCHIVE AND WARTIME AERIAL PHOTOGRAPHS AND PROCEDURES OF THEIR TREATMENT

The article deals with the use of archive aerial photographs. The first task was to search for and identify drainage detail from archive aerial photographs. The second task is to create procedures for processing aerial reconnaissance images (from WWII) to identify sites with a potential pyrotechnic load. Both of these tasks are connected by the effort to determine the internal orientation parameters of the cameras used, enabling easier calculation of the exterior orientation parameters by image correlation. A completely automated process of fiducial mark (FM) identification was implemented. The coordinates of all FM are calculated automatically from the archive aerial photographs. In addition, the edges of the photographs are automatically found, and a program was created to minimize the cropping of the archive aerial photographs. The next part of the paper describes the procedures of averaging the values of the relative positions of the FM and transforming the archive aerial photographs to a uniform dimension within a set of images taken with the same camera. The second part of the paper describes the process of creating a historical orthophoto with the standard calculation of bundle adjustment performed by an external process in the background of the OrthoEngine module, using the Celery library installed as a Python service. The external image orientation parameters found through the bundle adjustment calculation are first defined in the local system and then transformed into the national geodetic system of the Czech Republic. This entire solution is available free of charge on the internet. The third part of the article describes the practical procedure of the interpretation of archive and wartime photographs with the aim of identifying the drainage detail, and the procedures leading to the interpretation, identification, location and calculation of the position of unexploded air ammunition.

Objective

Between April 2019 and December 2021, the Department of Geodesy and Mine Surveying of the Faculty of Mining and Geology at VSB-Technical University of Ostrava https://www.hgf.vsb.cz/544/en and the company Primis spol. s r.o. from Brno http://www.primis.cz/index.php/en are jointly investigating a project supported by the Ministry of the Interior of the Czech Republic, called "Finding unexploded aerial ammunition of World War II (PID-VI3VS/778)". Since the declassification of the contents of the National archive of aerial photos in 1993 at the Military Geographic and Hydrometeorological Institute (formerly Military Topographic Institute in Dobruška) http://www.mapy.army.cz/, the use and processing of archival photos in the Czech Republic have been of interest to many independent investigators, geodetic companies, legal offices and research institutions. Between 2003 and 2005, two of the co-authors participated in the creation of a historical orthophotomap of the Czech Republic made from photos taken for the purposes of topographic map production (scale 1:25 000). The historical orthophotomap was made from approximately 21 000 aerial photos taken between 1952 and 1957. To date, the orthophotomap has been the most complex piece of work made from archival aerial photos (AAP) of the whole Czech Republic (73 000 km²). The historical orthophotomap is available to the public at https://geoportal.gov.cz/web/guest/map (layers - Orthophoto map (50's)).
Currently, it is primarily the company PRIMIS in the Czech Republic who uses exact procedures of AAP processing using digital aerial photogrammetry. Within their applied research activities, PRIMIS are developing optimised procedures for the processing of AAP. The company also processes photos from World War II and closely cooperates with the major processor of war photos in Europe, the company Luftbilddatenbank Dr. Carls GmbH https://www.luftbilddatenbank.de/main/index.php?webcode=home&language=english. The companies mainly cooperate on research in the interpretation of unexploded ammunition based on orthophotomaps made from war photos. Within the ISPRS literature, a number of works have recently been published on the topic (Jae Sung Kim et al., 2010; Patrik Meixner et al., 2016; Chen et al., 2016). P. Meixner et al., 2016 was written in cooperation with the above-stated German company. Nowadays, Luftbilddatenbank Dr. Carls has approximately 106 000 photos of the Czech territory from 1939 to 1946.

Solution procedure

Through gradual steps, the authors made a semi-automatic system of AAP processing in order to provide a practical self-service system to make high-quality and positionally precise (RMSE_xy = 2.4 m) orthophotomaps to identify unexploded ammunition. The task the company is dealing with is the identification of the drainage details in the AAP.

Concept of solution procedure

The solution comprises several interlinked programmes. In the first step, searching for the drainage detail, the users select suitable photos for processing based on access to the public AAP portal of the Czech State Administration of Land Surveying and Cadastre (CUZK) https://geoportal.cuzk.cz/(S(m4hjwcxvau1lkhzitbwwcuwq))/Default.aspx?lng=EN&head_tab=sekce-00-gp&mode=TextMeta&text=uvod_uvod&menu=01&news=yes&UvodniStrana=yes. The photos are selected by overlaying the circumference of the land drain area with the circumferences of the photos covering the selected land drain. A simple condition applies to the choice of the photos from the photo database: the date of the photo must be at least one year after the recorded completion of the drainage construction (both dates are included in the databases). In the case of searching for unexploded ammunition, based on the agreement with Luftbilddatenbank Dr. Carls GmbH, time-relevant photos of the locality of interest are selected. Usually, it is necessary to select three time periods of the given locality (one before an air attack and two after an air attack). The following procedures for both tasks may be identical.

Partial tasks for recreating AAP

The further steps are significantly automated work procedures of AAP processing and their conversion, to be subjected to bundle adjustment and matching calculations to produce the final orthophoto from which the contained situation can be interpreted. AAP are scanned using special photogrammetric scanners with an internal accuracy of tracking an identical spot on an image of approx. 2 micrometres, i.e. one seventh of a common scanning element size (14 micrometres per TD1). Images are placed in the scanner as discrete shots, as they have been preserved in the archives of aerial photos in the Czech Republic (or war photos in reels). This implies that despite careful placement of the images in the scanner, the position at the beginning of scanning of an AAP is always arbitrary with respect to the fiducial marks (FM).
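As an illustration of the pre-selection step just described, the sketch below keeps only photos whose footprint overlaps the drainage area and whose acquisition date is at least one year after the recorded completion of the drainage construction. The record fields ("taken", "footprint") are hypothetical names chosen for the example, not the actual CUZK database schema.

from datetime import date, timedelta
from shapely.geometry import Polygon

def select_photos(photos, drainage, completed):
    # Approximate "at least one year after completion" with a 365-day offset.
    cutoff = completed + timedelta(days=365)
    return [p for p in photos
            if p["taken"] >= cutoff and p["footprint"].intersects(drainage)]

drain = Polygon([(0, 0), (100, 0), (100, 60), (0, 60)])
photos = [
    {"id": "A1", "taken": date(1961, 5, 2),
     "footprint": Polygon([(-20, -20), (80, -20), (80, 80), (-20, 80)])},
    {"id": "A2", "taken": date(1960, 9, 1),  # too soon after completion
     "footprint": Polygon([(10, 10), (90, 10), (90, 50), (10, 50)])},
]
selected = select_photos(photos, drain, completed=date(1960, 4, 30))
print([p["id"] for p in selected])  # -> ['A1']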
The programme developed by the authors to unify and convert the images automatically searches for the fiducial marks on the discrete AAP and modifies the images to prepare them for matching and further digital processing. The programme is conceived as a no-installer one. To run the programme, only a Microsoft Visual C++ library is required, which is part of the standard Microsoft package. The programme supports three types of fiducial marks of AAP. Other fiducial marks can be added into the library at any time. The authors adhered to all common photogrammetric standards when converting the AAP. The steps are as follows:

1. Constructing the AAP fiducial mark library contents
2. Distribution of FM in AAP
3. Automated identification of FM in the AAP images
4. Determination of FM coordinates in image pixels
5. Calculation of FM values and their averaging towards all images in a set
6. Identification of analogue image margins and calculation of image size cropping
7. Recalculation of image size and conversion of the original AAP
8. Storing the final converted image data for further processing or third-party software's processing

Constructing the AAP fiducial mark library contents: A digital AAP is obtained via scanning the real analogue photo using a photogrammetric scanner. AAP is usually scanned margin to margin. The geometrical parameters of AAP are ensured by the FM. Figure 1 below shows three types of fiducial marks that were selected for the FM library with respect to their high frequency.

Automated identification of FM in the AAP images: The automatic identification of an FM in an AAP is based on methods evaluating the surroundings of features in an image and their specification within the FM surroundings. First, according to the FMs and the AAP image contents, key features and their descriptors are calculated using the SURF (Speeded-Up Robust Features) method. Next, correct pairs of features are searched for using the RANSAC (RANdom SAmple Consensus) algorithm. The last step is to calculate the homography, from which the position of the detected mark is determined. Figure 4 shows the final result of the calculation and the identification of the surroundings based on identifying the mark in the FM library.

Determination of FM coordinates in image pixels: Based on the identification of the FM and the geometrization of its internal part (the inner circle of the mark in Figure 4), we determine the centre of the mark and calculate the position of its inner part in image pixel coordinates; see Figure 5. All FMs of each AAP are stored in a temporary file for further calculations.

Calculation of FM values and their averaging towards all images in a set: Having calculated all FMs of all AAP from a set of photos, the calculated values are averaged, and a systematic repositioning is determined, which eliminates the difference from the discrete FM median. A "basic image size matrix" is determined in the temporary files.

2.1.6 Identification of analogue image margins and calculation of image size cropping: AAP images that are supposed to enter the matching calculations to produce an orthophoto of the territory must be "unmasked", i.e. the parts with the fiducial marks or frames are cropped. Automatic steps are sequenced to find the frames and crop these non-image parts so that only the image content remains.
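A minimal sketch of the FM localisation step described above (feature matching plus RANSAC-filtered homography) is given below. The paper uses SURF; the sketch uses OpenCV's SIFT as a freely available substitute, and the template/centre handling is a simplification of the actual programme.

import cv2
import numpy as np

def locate_mark(template_gray, image_gray, min_matches=10):
    """Return the pixel position of a fiducial mark in the scanned photo,
    or None if the mark cannot be matched reliably."""
    sift = cv2.SIFT_create()
    kp_t, des_t = sift.detectAndCompute(template_gray, None)
    kp_i, des_i = sift.detectAndCompute(image_gray, None)
    # Lowe's ratio test on the two nearest neighbours in descriptor space
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_t, des_i, k=2)
            if m.distance < 0.75 * n.distance]
    if len(good) < min_matches:
        return None
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_i[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)
    if H is None:
        return None
    # Project the template centre into the photo -> FM position in pixels
    h, w = template_gray.shape
    centre = np.float32([[[w / 2.0, h / 2.0]]])
    return cv2.perspectiveTransform(centre, H)[0, 0]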
Recalculation of image size and conversion of the original AAP: Having averaged the sizes of the AAP croppings to remove the frame data, the FM locations in the images and the frames themselves, we recalculated the image sizes towards the FMs, or their average positions calculated as per step 2.1.4. This procedure ensures that the recalculation of the scanned AAP and the conversion of AAP into formats suitable for processing in the next step is correct with respect to the best available techniques. This way, the image set still maintains an identical focal distance, and the image metric (with a scanning element of 14 micrometres) is maintained. The principal point of the image as well as the symmetry point are identical in all the aerial photos of the given set, and the dimensions in pixels will be identical for all images that enter all further calculations.

2.1.8 Storing the final converted image data for further processing or third-party software's processing: The converted AAP are stored automatically in the final converted image directory.

Automatic calculation of bundle adjustment

The aim of creating the module for triangulation and inlaying of images was to provide users with a simple web interface, where they upload photos automatically transformed according to the detected fiducial marks, along with manually measured reference points. The chain of applications analyses the images and automatically creates the final orthophoto, without any user interference. This way, the user does not need any software, only a web browser and an Internet connection. Processing phases:

1. Detection and calculation of key features
2. Matching the key features and determination of relative orientation
3. Calculation of incremental bundle adjustment
4. Transformation of the image bundle into the S-JTSK coordinate system
5. Orthogonalisation and inlaying into the final orthophoto

Currently, apart from commercial applications, the issue of automatic processing of disorganised sets of images has been approached by a number of open source projects. For the first three phases, the OpenMVG library (Moulon, 2016) was used, which also integrates the open source Ceres library http://ceres-solver.org/ to calculate bundle adjustment.

Detection and calculation of key features: The key features (Figure 6) unambiguously characterize an image area so that the area can be found and compared with the identical area in another image. To detect and compare key features in an image, the OpenMVG library uses a SIFT (Scale Invariant Feature Transform) detector in the default setting (Lowe, D.G., 2004). Unlike a simple correlation of two areas in photos, this detector is partially invariant towards changes in the view geometry, i.e. rotation (approx. 15 degrees), changes in the scale, and noise. Once key features are detected in each image, including the descriptors, it is possible to proceed to their matching and the identification of the corresponding pairs: the correspondences arising from the projection of a point in 3D space into each image, which shall have very similar descriptors. The level of agreement of two key features is clearly defined on the grounds of the Euclidean distance of their SIFT descriptors. In case the collection of images is not prearranged and the relationships among the different images are unknown, all images must be compared one-to-one. The corresponding sets of key features obtained via matching are usually burdened by errors and false correspondences that are caused by changes in the camera position, in the lighting, digital image noise, etc.
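A compact sketch of this pairwise matching is given below, using OpenCV's SIFT and Lowe's ratio test on descriptor distances; its last lines already apply the RANSAC-based epipolar filtering that the next paragraph describes, so that the retained pairs satisfy the epipolar condition. This is a generic OpenCV formulation assumed for illustration, not the OpenMVG code itself.

import cv2
import numpy as np

def match_pair(img1_gray, img2_gray, ratio=0.75):
    """Match two overlapping photos; return F and the inlier point pairs."""
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img1_gray, None)
    kp2, des2 = sift.detectAndCompute(img2_gray, None)
    # Lowe's ratio test: keep a match only if clearly better than the runner-up
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
            if m.distance < ratio * n.distance]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])
    # Geometric filtering (see next paragraph): keep pairs consistent with
    # the epipolar condition x'^T F x = 0, within a 1-pixel tolerance
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.999)
    keep = mask.ravel().astype(bool)
    return F, pts1[keep], pts2[keep]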
These false correspondences may be eliminated using a geometrical criterion, the epipolar condition (see Figure 7).

Figure 7. Simple representation of epipolar coordinates

Point X in the 3D space forms an epipolar plane along with the projection centres C and C'. The intersections of the epipolar plane with the projection planes form the epipolar lines (epipolars), which pass through the points x and x', the projections of point X into the projection planes. These lines also pass through the epipoles e and e', where an epipole is the projection of the projection centre of one camera into the second camera's projection plane. The algebraic formulation of the epipolar condition is

x'^T F x = 0,

where F is the fundamental matrix of size 3 x 3 and rank 2, which defines the relative relation between two cameras independently of the scene structure. To calculate the fundamental matrix, it is not always necessary to know the parameters of the discrete cameras' internal orientation. In the OpenMVG library (Moulon, P., 2016) there is a RANSAC algorithm (Fischler, M., 1981) to select the key feature correspondences complying with the epipolar condition. The implemented algorithm allows for iterative identification of the best solution matching the given model, i.e. the equation x'^T F x = 0, and excludes wrongly detected correspondences. The fundamental matrix is calculated using a 7/8-point algorithm (Hartley, R., 2003).

Calculation of incremental bundle adjustment: Based on the calculated relative orientations for the discrete images, it is possible to determine the approximate spatial data corresponding to the image coordinates of the detected correspondences, which are used as an estimate of the input parameters entering the complex bundle adjustment. The aim of bundle adjustment is to find optimal parameters of internal and external orientation, including the radial distortion coefficients of the lens, and such spatial data for which the distance between the projections of the points in space into the image and their detected image coordinates is minimised. Corrections are attributed to the points in 3D space and thus to the parameters of external and internal orientation. The partial derivatives may be determined analytically, via differentiation of the function with respect to the variables, as well as numerically. Due to the significant complexity of the functional relationships used to calculate the image coordinates, where the analytical development is difficult, the Ceres library http://ceres-solver.org/ uses a numerical solution. At the start of the computation, the most suitable pair of images is chosen, e.g. based on the number of detected key features. The projection centre of the first image in the pair defines the origin of the local coordinate system, and the external orientation rotation matrix of the first image is selected as the unit matrix. Each image from the pair contains key features also detected in other images. Thanks to these correspondences, other images are "connected" into the local coordinate system. Bundle adjustment is carried out after each iteration; see Figure 8 for the result.
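To make the shape of this optimisation concrete, the sketch below fits camera parameters by minimising the reprojection error with scipy's least_squares, using a numerically estimated Jacobian (mirroring the numerical derivatives mentioned above). It is a deliberately reduced, assumed model: a single pinhole camera with only a translation and a focal length, no rotation and no distortion; a full bundle adjustment additionally refines rotations, distortion coefficients, and the 3D points of all images jointly.

import numpy as np
from scipy.optimize import least_squares

def project(points3d, cam):                  # cam = [tx, ty, tz, f]
    p = points3d + cam[:3]                   # shift into the camera frame
    return cam[3] * p[:, :2] / p[:, 2:3]     # pinhole projection to pixels

def residuals(cam, pts3d, observed):
    # Reprojection error: projected minus measured image coordinates
    return (project(pts3d, cam) - observed).ravel()

rng = np.random.default_rng(0)
pts = rng.uniform([-1.0, -1.0, 4.0], [1.0, 1.0, 6.0], size=(6, 3))
cam_true = np.array([0.1, -0.2, 0.0, 1000.0])
obs = project(pts, cam_true) + rng.normal(0.0, 0.3, size=(6, 2))  # noisy pixels
fit = least_squares(residuals, x0=np.array([0.0, 0.0, 0.0, 900.0]),
                    args=(pts, obs))         # Jacobian estimated numerically
print("estimated [tx, ty, tz, f]:", fit.x)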
Transformation of image bundles into the Czech coordinate system called S-JTSK: The last phase in the determination of the external orientation parameters is the transformation into the geodetic coordinate system via a 3D similarity transformation using reference points. Because the image coordinates are always measured in two or more images, it is possible to determine the 3D coordinates in the relative system of coordinates. These coordinates are used along with the coordinates of the reference control points in the geodetic system to calculate the parameters of the 3D similarity transformation. Using the resulting transformation key, the 3D point coordinates, including the calculated parameters of external orientation, are transformed into the geodetic system. In the experiments, we obtained a higher number of consistent results (image correspondences in the final orthophoto) during bundle adjustment in the local coordinate system and the subsequent 3D transformation than from the final bundle adjustment with the reference points in the S-JTSK system. We assume this was caused by a certain internal stiffness of the image bundle and by discrepancies between the terrain model used for reference point level interpolation (carried out from the current DTM) and the reality captured in the archival aerial images decades ago. The bundle adjustment described in the previous subsection (2.2.3) is solved in the local system. For the transformation into the S-JTSK geodetic system, it is vital to find Ground Control Points (GCP). GCP for photogrammetric calculations and the creation of an orthophoto may be selected manually or set up through a required text file using the application http://www.vugtk.cz/euradin/gcp/. The manual setup of a GCP file is based on the selection of GCP from the CUZK (https://geoportal.cuzk.cz/) data, either from the cadastral map or from a ready orthophoto with a Ground Sample Distance (GSD) at least identical to the GSD of the scanned AAP, or using the trigonometric point (TP) coordinates of the CR. The last option is theoretically the most accurate, but our capacity to identify TP in historical images is very limited, except for the triangulation towers and point signalisation of the IV. and V. order; moreover, with respect to the required density of the reference points for orthophoto production, the usually identifiable number of TP is not sufficient. To select reference points, it is possible to pair points in the archival images with points in the cadastre, e.g. in Figure 9 (left) and its detail in Figure 9 (right) from the situation in AAP. Figure 10a is an identical situation from the current imaging and Figure 10b shows a cut-out from a cadastral map. The coordinates are obtained by reading them from a digital cadastral map. However, on the territory of the Czech Republic it is currently more advisable to find identical points in the AAP and the current orthophoto as, in general, the positional quality of the orthophotomap in urban areas is higher than the RMSE_xy of the cadastral map. Therefore, we recommend users to read the coordinates of GCP from produced orthophotos, particularly at such points where we may assume that they have not changed their position since the AAP was taken. In case of doubt, it is recommended to measure the reference points geodetically in the terrain from the points interpreted from the AAP. The automated procedure is described below. As mentioned above, bundle adjustment is first carried out in the local system. For the transformation into the national geodetic system (S-JTSK), the GCP in the national system are needed. For reading the GCP coordinates, the application was cloned from the repository https://github.com/posm/posm-gcpi, where the sources for the underlayers and the output coordinate system were modified.
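A closed-form way to estimate such a 3D similarity (Helmert) transformation from matched point pairs is Umeyama's SVD-based least-squares solution, sketched below. This is a generic formulation offered for illustration, not the specific implementation used in the pipeline described above.

import numpy as np

def similarity_transform(src, dst):
    """Estimate s, R, t such that dst ~ s * R @ src + t, from Nx3 arrays."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - mu_s, dst - mu_d                 # centred coordinates
    cov = B.T @ A / len(src)                      # cross-covariance matrix
    U, S, Vt = np.linalg.svd(cov)
    d = np.sign(np.linalg.det(U) * np.linalg.det(Vt))
    D = np.diag([1.0, 1.0, d])                    # guard against reflection
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / A.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Self-check on synthetic data: recover a known transformation.
rng = np.random.default_rng(1)
X = rng.normal(size=(10, 3))
s0, t0, theta = 1.5, np.array([100.0, 200.0, 50.0]), 0.3
R0 = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
Y = s0 * X @ R0.T + t0
s, R, t = similarity_transform(X, Y)
print(round(s, 6), np.allclose(R, R0), np.allclose(t, t0))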
The images are uploaded into the application only locally in the browser; they are not sent anywhere. The measurement outcome is a text file containing the measured image coordinates in the image coordinate system and their 2D equivalents in the S-JTSK coordinate system. Such a file is sent by the user to the authors' server, where a digital terrain model of the Czech Republic is stored, which is used to interpolate the Z coordinate of the selected GCP. Each GCP must be measured in at least two images to be able to calculate its 3D coordinate and its triangulation in the local coordinate system, in which the initial bundle adjustment is carried out. Figure 11 shows the application to measure the reference points; on the left there are the aerial photos, and on the right there are the measured 2D reference points in the geodetic system of the tested locality.

Figure 11. Web application to measure coordinate reference points

Orthogonalisation and inlaying into the final orthophoto: The final step of AAP processing, after the conversion of images, bundle adjustment and transformation into S-JTSK, is orthogonalisation and inlaying of the images into the final orthophoto. For the orthogonalisation, we use the digital terrain model of the Czech Republic with a regular grid step of 20 m. Another alternative for the future is the calculation of the DTM via the correlation of the AAP used for the production of the orthophotos. One of the authors' aims was also to create a module for automatic inlaying. As photographic aerial surveying is usually carried out with sufficient coverage, it is clear that each place on the orthophotomap is captured in two or more images. Thus, it is necessary to define a function to clearly and optimally choose such parts of the images that are least burdened by the optical errors of the lens. In the inlaying module there is an implemented function where, for each pixel of the final orthophotomap, its 3D coordinate in the geodetic system is projected into the plane of each image, and the image is chosen in which the distance of this projection from the optical axis in the focal plane is minimal. For the purposes of visualisation, the discrete images may be replaced by grey levels and represent only the parts used in the final orthophotomap; see Figure 12. The choice of the most suitable image for each pixel is made automatically during the programme run. This way, there is no need for orthorectification and storing of all full images. To maintain the image quality, the R, G, B values of the different colour channels are determined by bilinear interpolation from the nearest pixels' surroundings in the source image. For the OrthoEngine module there is also a web interface, where the input is the digital terrain model, the parameters of external and internal orientation in the native format of the OpenMVG library, and the 2D coordinates of the reference points.

Figure 12. Mask used for inlaying images into the final orthophotomap

CONCLUSIONS

The aim of producing a historical orthophoto is to allow for the interpretation and display of the conditions at the time of image exposure. This way, it is possible to interpret unexploded ammunition, or the structures of the drainage system, which are difficult to identify in contemporary images. The final interpretations and vectorisations of the objects in demand based on the produced historical orthophoto are very valuable due to the data on the interpreted object positions.
The orthophotos have been produced using the described exact methods and verifiable, controlled procedures. According to copyright rules, the final data interpretation and the orthophoto itself are a cartographic representation of original research results obtained by the authors. However, the identification of objects in historical orthophotos does not end with the simple production of the orthophoto; the accuracy of the whole system of historical orthophoto production is verified by digging out the drainage system piping or the unexploded bombs from World War II. Figure 13 shows an example of eight processed images from the testing locality after applying the procedure described herein. Figure 14 shows the final processing of wartime images.
Effects of Panax ginseng on hyperglycemia, hypertension, and hyperlipidemia: A systematic review and meta-analysis

Panax ginseng is a medicinal plant with various pharmacological activities, and research suggests that it is particularly effective in representative metabolic diseases such as hyperglycemia, hypertension, and hyperlipidemia. Therefore, in this study, a systematic review and meta-analysis were performed to investigate the comprehensive effect of P. ginseng on metabolic parameters representing these metabolic diseases. A total of 23 papers were collected for inclusion in the study, from which 27 datasets were collected. The investigational products included P. ginseng and Korean Red ginseng. Across the included studies, the dose ranged from 200 mg to 8 g and the supplementation period lasted from four to 24 weeks. The study subjects varied from healthy adults to those with diabetes, hypertension, obesity, and/or hyperlipidemia. As a result of the analysis, the levels of the glucose and insulin areas under the curve, % body fat, systolic and diastolic blood pressures, total cholesterol, triglycerides, and low-density lipoprotein cholesterol were significantly reduced in the P. ginseng group as compared with the placebo group. In conclusion, P. ginseng supplementation may act as an adjuvant to prevent the development of metabolic diseases by improving markers related to blood glucose, blood pressure, and blood lipids.

Introduction

The development of modern society has led to a convenient and affluent lifestyle for many that, combined with excessive nutrition, fast food consumption, reduced physical activity, and excessive stress, has become a major cause of various metabolic diseases. Typical metabolic diseases include diabetes, hypertension, and hyperlipidemia, which lead to serious conditions such as cardiovascular disease, atherosclerosis, cerebrovascular disease, and cancer, increasing mortality rates. Since metabolic diseases are usually caused by lifestyle, multiple diseases can develop simultaneously as age increases. In particular, metabolic syndrome is a combination of risk factors for various metabolic diseases showing abnormal levels. When various metabolic diseases are present, the incidence and mortality of severe diseases are remarkably increased [1-3]. The main cause of metabolic syndrome and metabolic diseases is insulin resistance [1,4,5]. Therefore, managing obesity, the main cause of insulin resistance, is as important as monitoring blood glucose, blood pressure, and blood lipid levels. The progression of various metabolic diseases, including metabolic syndrome, toward the onset of associated conditions can be mitigated by improving lifestyle habits. Therefore, many people are beginning to pay ample attention to correcting their eating habits and lifestyle to prevent diseases; in this vein, consumer demand for functional foods that support health also continues to increase. Among the various functional foods, products made from ginseng are some of the most commonly consumed globally. Ginseng is a perennial plant of the Araliaceae family and a medicinal crop that has long been widely used in Asia. Ginseng includes various species such as Panax ginseng, Panax quinquefolium, Panax notoginseng, and Panax japonicus. Of these, P. ginseng, mainly grown in Korea and China, accounts for the largest proportion of global ginseng production [6].
The main component of P. ginseng is ginsenoside, one of the saponin groups; to date, more than 100 kinds of ginsenosides have been reported [7,8]. To maximize the pharmacological activities of ginseng by increasing the bioavailability of ginsenosides, red ginseng, black ginseng, and fermented ginseng, which have undergone steaming and fermentation processes, are also widely consumed [9-11]. The outstanding pharmacological activities of P. ginseng have been reported in many papers, and various reviews, systematic reviews, and meta-analyses have been conducted with the accumulation of numerous data. Reviews covering topics such as blood lipids [30] and obesity [31,32] have been published, while systematic reviews and meta-analyses of randomized controlled trials (RCTs) have only covered blood glucose [18] and blood lipids [30] to date. In particular, in the case of the blood glucose study, only Korean Red ginseng was included as the investigational product, while the study population was limited to patients with type 2 diabetes [18]. In addition, until now, no investigation has comprehensively and systemically reviewed the markers of P. ginseng related to metabolic diseases such as blood glucose, blood pressure, body fat, and blood lipids. Given the growing aging society, with a high probability of being exposed to multiple diseases at the same time, and since the risk of metabolic disease is constantly increasing, it is important to reveal the multitarget efficacy of functional ingredients on metabolic diseases. Therefore, in this study, we analyzed the clinical effects of P. ginseng on metabolic parameters representing various metabolic diseases that are increasingly crucial factors to comprehend in terms of prevention and treatment. For this, a systematic review and meta-analysis were conducted by selecting RCTs that measured the effects of P. ginseng on metabolic parameters in various study populations.

Criteria for considering studies

The studies included in this systematic review and meta-analysis were RCTs of P. ginseng on metabolic parameters. Studies with an intervention period lasting longer than four weeks were selected, and there were no restrictions on the characteristics of the study subjects. Studies incorporating P. ginseng as a part of a complex intervention were excluded from this investigation.

Search methods

The present study was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA) guidelines. PubMed/Medline, the Web of Science, EMBASE, and the Cochrane Central Register of Controlled Trials were searched for eligible studies (Table 1).

Selection of studies and data extraction

For the selection of eligible studies for inclusion, two reviewers independently screened the search results, first focusing on titles and/or abstracts and then on the full text. Any disagreements were resolved by consensus; when a consensus was not reached, a decision was made involving a third reviewer. Two reviewers independently extracted the following data using a standardized data extraction format, with inconsistent results resolved through confirmation and discussion of the original paper: first author, publication year, study design, subjects' characteristics, types of interventions, and results of outcomes. Multiple supplement groups within one study were considered as individual data.
Assessment of the risk of bias Two reviewers independently assessed the risk of bias among the included studies using the Cochrane risk of bias tool, which assesses the risk of bias in studies according to random sequence generation, allocation concealment, blinding of the subjects and personnel, blinding of the outcomes assessment, incomplete outcomes data, selective reporting, and other biases. Statistical analyses A meta-analysis was performed using Review Manager version 5.4 (Cochrane Collaboration, London, England) and R software version 4.1.0 (R Foundation for Statistical Computing, Vienna, Austria). The units of all evaluation markers were properly standardized. We used the mean change and standard deviation (SD) values of the markers to investigate the effect size of the collected data. For studies that presented only the standard error (SE), the SE was converted into SD by multiplying it by the square root of the sample size. Pooled data were analyzed using a fixed-effects model, and the data were expressed as weighted mean difference (WMD) with 95 % confidence interval (CI) values for continuous outcomes. Moreover, subgroup analysis was performed based on the clinical conditions of the subjects. The I² statistic was used to estimate the percentage of heterogeneity between studies; heterogeneity was confirmed if the I² value was 50 % or more and, if heterogeneity was confirmed, the data were analyzed by applying a random-effects model. A sensitivity analysis was conducted to estimate the effect of omitting each study. Publication bias was evaluated using a funnel plot and Egger's weighted regression test, and when it was judged to be statistically significant, the effect was adjusted using the "trim and fill" method [33]. A p-value of less than 0.05 was considered to be statistically significant. Results As a result of searching the literature databases to select RCTs that evaluated the effects of P. ginseng on metabolic parameters, a total of 1334 studies were collected after excluding duplicates. Of these, 1276 papers were further excluded following the title and/or abstract review and 35 papers were additionally excluded following the full-text review; finally, 23 articles were included in this systematic review and meta-analysis (Fig. 1). A total of 27 P. ginseng group datasets were included given the approval for the inclusion of multiple supplement groups from a single study. Study description Eligible studies' characteristics are detailed in Table 1. The selected 23 studies were all RCTs, including 20 parallel-design studies and three crossover-design studies. The characteristics of the subjects of each study are as follows: two studies included healthy subjects [34,35]; 11 studies included subjects with impaired glucose tolerance or diabetes [36–46]; two studies included prehypertensive or hypertensive subjects [47,48]; three studies included overweight or obese subjects [49–51]; one study included hyperlipidemic subjects [52]; three studies included subjects with metabolic syndrome [53–55]; and one study included postmenopausal subjects [56]. Of the 23 studies, five used P. ginseng [35,39,43,46,52], 17 used Korean Red Ginseng [34,36,38,40–42,44,45,47–51,53–56], and one used P. ginseng berry [37] as investigational products. All studies measured metabolic disease-related markers such as blood glucose, blood pressure, body fat, and blood lipid levels.
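The pooling procedure summarized under "Statistical analyses" above can be illustrated with a short sketch. The following Python fragment is a minimal illustration, not the Review Manager/R implementation used in the study: it shows the SE-to-SD conversion, inverse-variance fixed-effect pooling of weighted mean differences, and the I² heterogeneity statistic. The function names and the simplified per-study variance (sd²/n) are assumptions of this sketch.

```python
import numpy as np

def se_to_sd(se, n):
    """Convert a reported standard error to a standard deviation (SD = SE * sqrt(n))."""
    return se * np.sqrt(n)

def fixed_effect_wmd(mean_diffs, sds, ns):
    """Pool per-study weighted mean differences with inverse-variance weights.

    mean_diffs : per-study difference in mean change (treatment - placebo)
    sds        : per-study SD of that difference
    ns         : per-study sample size
    Returns the pooled WMD, its 95 % CI, and the I^2 heterogeneity statistic.
    """
    mean_diffs, sds, ns = map(np.asarray, (mean_diffs, sds, ns))
    var = sds**2 / ns                 # simplified variance of each study's effect
    w = 1.0 / var                     # inverse-variance weights
    wmd = np.sum(w * mean_diffs) / np.sum(w)
    se_pooled = np.sqrt(1.0 / np.sum(w))
    ci = (wmd - 1.96 * se_pooled, wmd + 1.96 * se_pooled)
    # Cochran's Q and I^2; I^2 >= 50 % would call for a random-effects model.
    q = np.sum(w * (mean_diffs - wmd)**2)
    df = len(mean_diffs) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return wmd, ci, i2
```

A negative pooled WMD with a 95 % CI excluding zero corresponds to the "significantly reduced" markers reported below.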
The mean dosage of the investigational products was 2.85 ± 2.00 g (range: 200 mg–8 g), and the mean study period was 10.78 ± 5.18 weeks (range: 4–24 weeks). Changes in blood glucose-related markers Of the 27 P. ginseng group datasets, a total of 20 datasets were measured for blood glucose-related markers. In all cases, the heterogeneity was judged to be low (I² values of 13 %, 0 %, 0 %, 23 %, 4 %, 0 %, and 18 % for FG, FI, 2-h PG, 2-h PI, glucose AUC, insulin AUC, and HbA1c, respectively), so the data were analyzed using a fixed-effects model. Upon analyzing the effect size of P. ginseng supplementation, as compared with the placebo, it was revealed that the glucose AUC was decreased by 1.77 mmol/L*hr (95 % CI: −2.97 to −0.57) and the insulin AUC was decreased by 101.11 pmol/L*hr (95 % CI: −160.85 to −41.38), showing a statistically significant difference (p = 0.0004 and p = 0.0009). However, there was no statistically significant difference in FG, FI, 2-h PG, 2-h PI, or HbA1c between the P. ginseng and placebo supplementation groups (data not shown). In subgroup analysis, the effects of P. ginseng supplementation on glucose AUC and insulin AUC were stronger in diabetic subjects than in prediabetic subjects (Figs. 2A, 3A). Glucose AUC was robust in the sensitivity analysis, but insulin AUC decreased by 8.77 when one study was omitted, resulting in a loss of significance (p = 0.200) (Figs. 2B, 3B) [45]. Funnel plot and Egger's test revealed that there was no publication bias for glucose AUC and insulin AUC. Funnel plots of glucose AUC and insulin AUC are shown in Figs. 2C and 3C. Changes in blood pressure-related markers Of the 27 P. ginseng group datasets, a total of 15 datasets were measured for blood pressure-related markers, with the resultant details as follows: 15 included systolic blood pressure (SBP; n = 702) [34,36,38,41,42,45–48,53–55] and 14 included diastolic blood pressure (DBP; n = 621) [34,36,38,41,42,45–48,53,54]. SBP, with low heterogeneity, was analyzed using a fixed-effects model (I² = 33 %), and DBP, with high heterogeneity, was analyzed using a random-effects model (I² = 51 %). Upon analyzing the effect size of P. ginseng supplementation, as compared with the placebo, it was revealed that the SBP was decreased by 3.23 mmHg (95 % CI: −4.19 to −2.27), showing a statistically significant difference (p < 0.00001). In subgroup analysis, the effect of P. ginseng supplementation on SBP was stronger in prehypertensive, hypertensive, and metabolic syndrome subjects (Fig. 4A). Meanwhile, the DBP decreased by 1.48 mmHg (95 % CI: −3.18 to 0.21) in the P. ginseng supplementation group as compared with the placebo group, but this result was not statistically significant (p = 0.09) (Fig. 5A). As a result of the sensitivity analysis, SBP decreased by 0.88 when one study was omitted, resulting in a loss of significance (p = 0.400) (Fig. 4B) [47]. DBP reached significance, decreasing by −1.94, −2.05, and −2.06 when three studies were individually omitted (p = 0.03, p = 0.01, p = 0.01) (Fig. 5B) [41,42,48]. Egger's test revealed publication bias for SBP and DBP (p = 0.021, p = 0.002). Six studies had to be trimmed and filled by trim-and-fill analysis to adjust the publication bias of SBP. As a result, the effect size increased and the direction of the effect did not change (MD = −3.76, 95 % CI: −4.67 to −2.84). For DBP, seven studies had to be trimmed and filled, resulting in an increase in effect size and no change in effect direction (MD: −3.783, 95 % CI: −5.380 to −2.187).
Trim-and-fill funnel plots of SBP and DBP are shown in Figs. 4C–5C. For body composition-related markers (body weight, BMI, WC, and % body fat), the heterogeneity was judged to be low in all cases (all I² values were 0 %), so the data were analyzed using a fixed-effects model. Upon analyzing the effect size of P. ginseng supplementation, as compared with the placebo, it was revealed that the % body fat was decreased by 2.11 % (95 % CI: −3.98 to −0.23), showing a statistically significant difference (p = 0.03). Conversely, there were no statistically significant differences in body weight, BMI, or WC between the P. ginseng and placebo supplementation groups (data not shown). In subgroup analysis, the effect on % body fat was stronger in studies that enrolled obese subjects based on % body fat rather than BMI (Fig. 6A). As a result of the sensitivity analysis, % body fat decreased by 1.25 when one study was omitted, resulting in a loss of significance (p = 0.28) (Fig. 6B) [51]. Funnel plot and Egger's test showed that there was no publication bias. The funnel plot of % body fat is shown in Fig. 6C. Changes in blood lipid-related markers Of the 27 P. ginseng group datasets, a total of 18 datasets were assessed for blood lipid-related markers. In all cases, the heterogeneity was judged to be low (all I² values were 0 %), so the data were analyzed using a fixed-effects model. Upon analyzing the effect size of P. ginseng supplementation, as compared with the placebo, it was revealed that the TC was decreased by 0.17 mmol/L (95 % CI: −0.28 to −0.05), the TG was decreased by 0.11 mmol/L (95 % CI: −0.21 to −0.01), and the LDL-C was decreased by 0.24 mmol/L (95 % CI: −0.36 to −0.13), showing statistically significant differences (p = 0.005, p = 0.030, and p < 0.0001). There was no statistically significant difference in HDL-C between the P. ginseng and placebo supplementation groups (data not shown). In subgroup analysis, the effect of P. ginseng supplementation on TC was stronger in prediabetic or diabetic subjects. The effect on TG was more significant in overweight or obese subjects, and the effect on LDL-C was more significant in subjects with metabolic syndrome or postmenopausal women, overweight or obese subjects, and prediabetic or diabetic subjects (Figs. 7A, 8A and 9A). Total cholesterol and LDL cholesterol were robust in the sensitivity analysis, but TG decreased by 0.10, 0.09, 0.08, and 0.11 when four studies were individually omitted, resulting in a loss of significance (p = 0.06, p = 0.08, p = 0.17, …). Discussion The present systematic review and meta-analysis were conducted to verify the effects of P. ginseng supplementation on metabolic disease-related markers. For this, 23 RCT papers were collected and the changes in markers related to blood glucose, blood pressure, body fat, and blood lipids, which are major markers of metabolic diseases, were compared with those following placebo treatment using data from 27 ginseng supplement groups. As a result of this analysis, it was found that glucose AUC, insulin AUC, SBP, DBP, % body fat, TC, TG, and LDL-C were significantly decreased by P. ginseng supplementation. As previously reported, there were no significant changes in FG, PG, or HbA1c among the blood glucose-related markers [18]. FG, PG, and HbA1c have traditionally been used as key markers to determine whether blood glucose is well regulated, but they are limited in providing accurate information about blood glucose responses and changes after meals because they present values only at a specific point in time [57].
On the other hand, glucose and insulin AUC have the advantage of being able to determine whether blood glucose and insulin are normally regulated by observing the pattern of changes in blood glucose and insulin over a certain period of time, rather than at a single time point, after an oral glucose tolerance test (OGTT); these parameters have therefore been considered key measurements in efforts to determine glucose intolerance in recent years [58]. It has already been found in preclinical studies that ginseng administration improves insulin sensitivity by enhancing insulin signaling [59,60]. Therefore, as our results have shown, significant decreases in glucose and insulin AUC can be an important basis for improving glucose intolerance through ginseng supplementation. In particular, as a result of subgroup analysis, it was found that the effect was stronger in diabetic patients than in prediabetic subjects. However, it is a concern that the range of study types and the size of the overall study population are still relatively limited, and since the importance of AUC measurement is increasingly emerging, clinical studies on this subject should be continued in the future. The main cause of various metabolic diseases, including metabolic syndrome, is insulin resistance, which is derived from obesity due to excess body fat [1,2,4]. Obesity is judged by weight, BMI, WC, and % body fat. Because weight and BMI are also affected by muscle mass, the use of WC or % body fat is considered more appropriate [61–63]. In this study, it was confirmed that the % body fat was decreased from the pooled data of three clinical trials that enrolled subjects who were overweight or obese (BMI ≥ 23 kg/m² or % body fat ≥ 30 %). Although significant effects could not be confirmed for the other markers, a significant decrease in % body fat was confirmed in a small sample size. Moreover, in subgroup analysis, the effect of reducing % body fat was stronger when obese subjects were selected based on % body fat rather than BMI. It is already known through several studies that P. ginseng and ginsenoside inhibit adipogenesis and lipid accumulation in adipocytes [64,65]. Therefore, if well-designed clinical studies that establish appropriate inclusion criteria to determine obesity are sufficiently accumulated, significant results could be expected for other markers as well. In addition, P. ginseng has been proven to reduce blood lipids in various clinical studies and, in this study, as in a previously reported meta-analysis, TC, TG, and LDL-C were decreased [30]. Subgroup analysis of subjects with hyperlipidemia showed no significant reductions in TC, TG, and LDL-C. However, since only one study selected hyperlipidemic subjects, this is insufficient to evaluate the effect. On the other hand, as a result of analyzing subjects with metabolic diseases such as metabolic syndrome, menopause, obesity, and diabetes in subgroups, blood lipids were significantly decreased. Many studies have shown that obesity, diabetes, and menopause are highly correlated with hyperlipidemia [66–68]. Therefore, although the hypolipidemic effect of P. ginseng was confirmed through this study, a clearer effect can be expected if more studies are conducted on subjects with dyslipidemia in the future. In this study, blood pressure was also significantly improved in the P. ginseng supplement group. In preclinical studies, ginseng administration decreased blood pressure through the activation of endothelial nitric oxide synthase and the release of nitric oxide [69,70].
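Since the discussion above highlights glucose and insulin AUC after an OGTT as the informative markers, a short sketch of the usual AUC computation may be helpful. The trapezoidal rule shown here is one common convention (individual trials may have used variants such as incremental AUC), and the sampling times and values are hypothetical.

```python
import numpy as np

def ogtt_auc(times_h, conc):
    """Total area under the curve over an OGTT by the trapezoidal rule.

    times_h : sampling times in hours (e.g., 0, 0.5, 1, 2 h after the glucose load)
    conc    : glucose (mmol/L) or insulin (pmol/L) concentrations at those times
    Returns the AUC in concentration * hours, the unit used in the pooled analysis.
    """
    return np.trapz(conc, times_h)

# Example: a flatter post-load excursion yields a smaller glucose AUC.
t = np.array([0.0, 0.5, 1.0, 2.0])
glucose = np.array([5.2, 8.9, 8.1, 6.0])   # hypothetical mmol/L values
print(ogtt_auc(t, glucose))                # result in mmol/L*hr
```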
Similarly, SBP and DBP were decreased in this study, which analyzed various subject groups ranging from healthy individuals to hypertensive patients. Subgroup analysis showed a greater effect in subjects with prehypertension or hypertension and metabolic syndrome. Therefore, it was confirmed once again that the selection of appropriate study subjects is important. A comprehensive analysis showed that the major markers of metabolic syndrome and metabolic diseases were significantly improved when P. ginseng products were consumed for a long period lasting four weeks or longer. Based on this result, it can be expected that the intake of P. ginseng can play a sufficient role as an adjuvant for the prevention and improvement of metabolic diseases. To our knowledge, this is the first systematic review and meta-analysis to simultaneously evaluate the effects of P. ginseng on several markers related to metabolic diseases. However, the study subjects surveyed in this study were varied, ranging from healthy individuals to menopausal women and patients with obesity, diabetes, hypertension, and hyperlipidemia; while this breadth supports generalization of the results, it limited our ability to obtain more specific, condition-level results. Therefore, more clinical trials on metabolic syndrome should be accumulated in the future for further investigation. Conclusions A systematic review and meta-analysis were conducted by collecting studies that evaluated changes in metabolic disease-related markers driven by the long-term use of P. ginseng in various study populations. Significant changes were found in markers related to blood glucose, insulin resistance, blood pressure, and blood lipids. Based on these findings, supplementation with P. ginseng could be adopted as adjuvant therapy for diabetes, hypertension, and hyperlipidemia; this study provides an academic basis for such use. Declaration of competing interest All contributing authors declare that no conflicts of interest exist.
2021-10-15T15:12:27.337Z
2021-10-01T00:00:00.000
{ "year": 2021, "sha1": "0b45a750e0e6c4feb56553a643a6caded591dc3a", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.jgr.2021.10.002", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "ef0c4a958c83a2c560b2e6877c9b158d776957dd", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
17757983
pes2o/s2orc
v3-fos-license
Study on a Single-Dose Toxicity Test of D-Amino Acid Oxidase (DAAO) Extracts Injected into the Tail Vein of Rats Objective: This study was performed to analyze the single-dose toxicity of D-amino acid oxidase (DAAO) extracts. Methods: All experiments were conducted at the Korea Testing & Research Institute (KTR), an institution authorized to perform non-clinical studies, under the regulations of Good Laboratory Practice (GLP). Sprague-Dawley rats were chosen for the pilot study. Doses of DAAO extracts, 0.1 to 0.3 cc, were administered to the experimental group, and the same doses of normal saline solution were administered to the control group. This study was conducted under the approval of the Institutional Animal Ethics Committee. Results: In all 4 groups, no deaths occurred, and the LD50 of DAAO extracts administered by IV was over 0.3 ml/kg. No significant changes in weight between the control group and the experimental group were observed. To check for abnormalities in organs and tissues, we used microscopy to examine representative histological sections of each specified organ; the results showed no significant differences in any organs or tissues. Conclusion: The above findings suggest that treatment with D-amino acid oxidase extracts is relatively safe. Further studies on this subject should be conducted to yield more concrete evidence. D-amino acid oxidase activity is found in a wide range of species, from microorganisms to mammals [1]. The enzyme D-amino acid oxidase (DAAO) was discovered in the porcine kidney, and since that time, it has been extensively studied as a model flavin-dependent oxidase. In mammals, DAAO is found at the highest concentrations in the kidneys, liver, and brain. In addition, DAAO catalyzes the oxidative deamination of a wide range of D-amino acids [2]. DAAO was first described by Krebs in 1935 [3] and has been found to be one of the most important enzymes for the maintenance of proper levels of D-amino acids [4]. The main role of DAAO in mammalian kidney and liver cells is the detoxification of endogenous D-amino acids that accumulate in the organism during the course of racemization. The accumulation of D-amino acids in mammalian cells is one of the characteristics of organism aging. In recent years, the important role of DAAO in maintaining the necessary levels of D-serine in different brain tissues has been revealed. D-serine participates in the regulation of N-methyl-D-aspartate receptors (NMDARs) in the form of a free amino acid or a neuroactive peptide. There have been some suggestions that the dysfunction of NMDARs resulting from the erroneous expression of the DAAO gene is one of the possible causes of schizophrenia. The activity of DAAO in malignant kidney and liver cells was also shown to be much lower than in healthy ones, which can be used in the cancer diagnostics of those organs [5].
DAAO plays an important role in regulating the levels of D-serine, and its function is impaired by the presence of the D-serine mutation, which may contribute to the pathogenic process in amyotrophic lateral sclerosis (ALS). Sasabe et al. did a study on the role of DAAO and D-serine in motor neuron physiology, as well as in ALS pathophysiology, and they showed that D-serine homeostasis was physiologically important in motor neuronal excitability and that the inactivity of DAAO was pathologically relevant to the vulnerability of motor neurons to excitotoxicity in ALS. This study also stressed the potential use of regulators of DAAO activity or D-serine antagonists as a therapeutic strategy for treating ALS [6]. Taken together, DAAO has potential as a novel therapeutic to treat various neural and psychiatric disorders. However, before clinical experiments can be performed, toxicity tests need to be conducted. Thus, this experiment was conducted to verify the toxicity of DAAO. The current research trend for single-dose toxicity testing of extracts is to study acute and subacute toxicity under Good Laboratory Practice (GLP). All the experiments for this research were conducted under GLP at the Korea Testing & Research Institute (KTR), an institution authorized to perform non-clinical studies. Materials and methods The DAAO (0.1-0.3 cc, Sigma-Aldrich, St. Louis, MO, USA) extract was prepared in a clean room adhering to Korea Good Manufacturing Practice (K-GMP) in a lab at the Korean Pharmacopuncture Institute. After the mixing process with pure water, the pH was controlled to between 7.25 and 7.35. NaCl was added to make a 0.9 % isotonic solution. The completed extract was stored in a refrigerator. The animals used in this study were 6-week-old Sprague-Dawley rats. The mean weights of the rats were 200.8-233.9 g and 156.7-183.4 g for the male and female rats, respectively. For all animals, a visual inspection was done, and all animals were weighed using a CP3202S system (Sartorius, Germany). After 7 days of acclimatization, the rats' general symptoms and changes in weight were recorded. No abnormalities were found. The temperature of the lab was 22 ± 3 °C and the humidity was 50 ± 20 %. Sufficient food (Cargill Agri Purina) and UV-filtered water were provided. Grouping was done after 7 days of acclimatization. Animals were selected if their weights were close to the mean weight. In total, 20 male rats and 20 female rats were selected. The animals were distributed into 4 groups (5 rats per sex per group) as follows (Table 1). The expected dose for D-amino acid oxidase extracts was 0.1-0.3 cc, which was determined by "The Study on Acute and Subacute Toxicity and Anti-cancer Effects of Cultivated Wild Ginseng Herbal Acupuncture" [7]. In the control group, the same dose of normal saline solution was administered into a specific point of the tail vein by IV. This study was conducted under the approval of the Institutional Animal Ethics Committee. General symptoms (e.g., changes in behavior and appearance) and mortality were examined 30 min and 1, 2, 3, and 4 h after the injection. From the 1st day to the 14th day of treatment, the general symptoms were examined once a day. The weights were measured immediately before treatment, and at 7 and 14 days after treatment. After the termination of observation, the organs and tissues of all surviving animals were visually inspected and examined by microscopy. The weight results from the experiment were analyzed using SPSS (version 10.0).
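The weight data were analyzed in SPSS; an equivalent open-source sketch of the intended pipeline (Levene's test for variance homogeneity, then a one-way ANOVA, with a Scheffé post-hoc comparison, as described next) might look as follows. The group data here are hypothetical, and the Scheffé step is only indicated in a comment because SciPy does not ship it directly.

```python
import numpy as np
from scipy import stats

# Hypothetical body-weight changes (g) after 14 days, one array per dose group;
# labels and values are illustrative, not the study's data.
control   = np.array([31.2, 28.5, 30.1, 29.8, 32.0])
low_dose  = np.array([30.5, 29.9, 31.4, 28.7, 30.8])
mid_dose  = np.array([29.1, 30.6, 31.0, 29.5, 30.2])
high_dose = np.array([30.9, 28.8, 29.7, 31.3, 30.0])
groups = [control, low_dose, mid_dose, high_dose]

# Step 1: Levene's test for homogeneity of variance across groups.
lev_stat, lev_p = stats.levene(*groups)

if lev_p > 0.05:  # variances may be treated as homogeneous
    # Step 2: one-way ANOVA on the group means.
    f_stat, p_anova = stats.f_oneway(*groups)
    print(f"ANOVA: F = {f_stat:.3f}, p = {p_anova:.3f}")
    # Step 3: if p_anova < 0.05, a Scheffe post-hoc test would identify which
    # pairs of groups differ (e.g., via statsmodels or a manual F contrast).
```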
Levene's test was conducted to evaluate the homogeneity of the variance. A one-way ANOVA was conducted when homogeneity of the variance was confirmed, and Scheffé's test was conducted post hoc. Results In this study, no deaths or abnormalities occurred in any of the groups, and the LD50 of the DAAO extracts administered via IV was over 0.3 ml/kg (Table 2, Table 3). In addition, no changes in weight were observed in any of the groups (Table 4). Finally, no meaningful changes were noted at necropsy, and histopathological examination of all of Group 1 (0.3 cc/head) found no significant changes related to the injections in the brain, lungs, liver, kidneys, and spinal cord (Table 5). Discussion Paul et al. did a study on the role of D-amino acids in amyotrophic lateral sclerosis pathogenesis and showed a potential role for D-amino acids, such as D-serine, in motor neuron disease/amyotrophic lateral sclerosis (ALS) [8]. D'Aniello et al. did a study on the biological role of DAAO and showed that the in vivo biological role of DAAO in animals is to act as a detoxifying agent to metabolize D-amino acids that may have accumulated during aging. If the ingested D-amino acids are not metabolized by these enzymes, they will accumulate in the tissues and may provoke serious damage [9]. Smith et al. did a study on the therapeutic potential of DAAO inhibitors. DAAO is a flavoenzyme that degrades D-amino acids through the process of oxidative deamination. The physiological role of DAAO in the kidneys and the liver is the detoxification of accumulated D-amino acids, and increased D-serine metabolism resulting from increased DAAO activity may produce a reduction in NMDA receptor activity. The NMDA receptor is thought to play a central role in the pathophysiology of schizophrenia. Taken together, these findings suggest that DAAO inhibitors might be useful as novel therapeutics to treat psychiatric and cognitive disorders [10]. Zhao et al. did a study on the potential role of DAAO in neuropathic pain in a rat model of tight L5/L6 spinal nerve ligation and showed that spinal DAAO contributed significantly to the development of central sensitization-mediated pain, suggesting that DAAO may be an important molecular target for the treatment of chronic pain of neuropathic origin [11]. Verrall et al. did a study on the neurobiology of DAAO, its involvement in schizophrenia, and the therapeutic value of DAAO inhibition. That study characterized DAAO as an enzyme that degrades the NMDA-R coagonist D-serine and that has the potential to modulate NMDA-R function and to contribute to the NMDA-R hypofunction in patients with schizophrenia [12]. To assess the toxicity of DAAO, we need to further study its acute and chronic harmful effects and its dose-response relationships, and animal testing is the most fundamental and basic way to perform safety assessments [13]. The Korea Food & Drug Administration has testing protocol guidelines for the study of toxicity [14], and all experiments should be conducted following Good Laboratory Practice (GLP) regulations. In this study, the LD50 of the D-amino acid oxidase extracts was above 0.3 cc/head in both male and female rats, which indicates that, compared to the doses in previous studies, this dose is safe to use and does not cause histological abnormalities. Conclusion The objective of this study was to analyze the single-dose toxicity of DAAO extracts.
All experiments were conducted under the regulations of Good Laboratory Practice (GLP) at the Korea Testing & Research Institute (KTR), an institution authorized to perform non-clinical studies. The results showed that administration of 0.3 ml/kg DAAO extracts did not cause any changes in weight and did not result in any mortalities, which indicates that DAAO administration can be used as a safe treatment.
2016-08-09T08:50:54.084Z
2013-06-01T00:00:00.000
{ "year": 2013, "sha1": "e46c8db2c2761c70d22f44e174e221bab1236e22", "oa_license": "CCBYNC", "oa_url": "https://doi.org/10.3831/kpi.2013.16.012", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f9f5aacf8f5a79fa9fad07870fb4bc067a57e454", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
256827559
pes2o/s2orc
v3-fos-license
Implication of cosmological upper bound on the validity of golden ratio neutrino mixings under radiative corrections We study the implication of the most recent cosmological upper bound on the sum of three neutrino masses on the validity of the golden ratio (GR) neutrino mixings defined at high energy seesaw scale, considering the possibility of generating low energy values of neutrino oscillation parameters through radiative corrections in the minimal supersymmetric standard model (MSSM). The present study is consistent with the most stringent and latest Planck data on the cosmological upper bound, $\sum |m_{i}|<0.12$ eV. For the radiative generation of sin$\theta_{13}$ from an exact form of golden ratio (GR) neutrino mixing matrix defined at high seesaw energy scale, we take opposite CP parity mass eigenvalues ($m_{1},-m_{2},m_{3}$) with a non-zero real value of $m_{3}$, and a larger value of $\tan\beta>60$ in order to include large effects of radiative corrections in the calculation. The present analysis, including the CP violating Dirac phase and SUSY threshold corrections, shows the validity of golden ratio neutrino mixings defined at high seesaw energy scale in the normal hierarchical (NH) model. The numerical analysis with the variations of four parameters, viz. $M_{R}$, $m_{s}$, $\tan\beta$ and $\bar{\eta_{b}}$, shows that the best result for the validity is obtained at $M_{R}=10^{15}$ GeV, $m_{s}=1$ TeV, $\tan\beta=68$ and $\bar{\eta_{b}}=0.01$. However, the analysis based on the inverted hierarchical (IH) model does not conform with this latest Planck data on the cosmological bound, but it still conforms with the earlier Planck cosmological upper bound $\sum |m_{i}|<0.23$ eV, thus indicating a possible preference of NH over IH models. I. INTRODUCTION The values of neutrino oscillation parameters have been continuously updated with the advancement in the technology of neutrino oscillation experiments [1–3], and these updated experimental data are also required for comparison with the theoretically predicted values. The latest Planck data on the cosmological upper bound on the sum of the three absolute mass eigenvalues, given by Σ|mᵢ| < 0.12 eV [4], may be seriously considered while comparing with other neutrino oscillation parameters, although there are also many constraints associated with such a cosmological probe [5]. The theoretical predictions of these neutrino oscillation parameters are in general defined at a very high energy seesaw scale, while the experimental data are defined at a low energy scale of the order of 10² GeV. In order to make a bridge between these two energy scales, we need a set of renormalisation group equations (RGEs) for quantum radiative corrections [6,7]. We can use two different approaches for running the RGEs from high-energy scale to low-energy scale. In the first approach, the running of the RGEs is carried out through the neutrino mass matrix m_LL as a whole, and at every energy scale one can extract neutrino masses and mixing angles through the diagonalisation of the neutrino mass matrix calculated at that particular energy scale [8–12]. In the second approach, the running of the RGEs can be carried out directly in terms of the neutrino mass eigenvalues and the three mixing angles with phases [13–15]. In both cases, the RGEs of all the neutrino parameters and the RGEs of the various coupling constants are solved simultaneously, and both approaches give almost consistent results [6].
For the present analysis, we shall use the second approach, which is more convenient to handle in the numerical analysis of the RGEs of the neutrino oscillation parameters. Various discrete symmetry groups like S₄, A₄, A₅, etc., which are defined at very high energy scale, can lead to various leptonic mixing matrices such as bi-maximal (BM), tri-bimaximal (TBM), and golden ratio (GR) [16]. All these specific leptonic mixing matrices have their own respective leptonic mixing angles, and two of the mixing angles (θ₂₃ and θ₁₂) are in good agreement with the respective non-zero neutrino mixing angles at low energy scale. In all the above three leptonic mixing matrices, the three leptonic neutrino mixing angles are defined at very high energy scale, with the reactor neutrino mixing angle (θ₁₃) equal to zero. The radiative magnification of the reactor neutrino mixing angle (θ₁₃) has been studied for various leptonic mixing matrices such as BM, TBM, and GR [17,18]. The GR neutrino mixing pattern has certain advantages over the other two neutrino mixing patterns, BM and TBM, in the evolution of mixing angles under radiative corrections, as the solar mixing angle (θ₁₂) is always found to increase with decrease in energy scale. The generation of a non-zero value of the right order of the reactor neutrino mixing angle (θ₁₃) at low energy scale, consistent with the latest cosmological upper bound on the sum of the three absolute neutrino mass eigenvalues, Σ|mᵢ| < 0.12 eV, is mainly addressed in the present study. Two cases of neutrino mass hierarchical models, namely normal hierarchy (NH) and inverted hierarchy (IH), are considered when we take input mass eigenvalues at very high energy seesaw scale. A brief description of an exact form of the golden ratio mixing matrix (U_GR) is given in [19], where φ = (1 + √5)/2 is the golden ratio, with the properties φ² = φ + 1 and 1/φ = φ − 1. It predicts sin θ₁₃ = 0, sin θ₂₃ = 1/√2 and tan θ₁₂ = 1/φ, leading to sin²θ₁₂ = 1/(1 + φ²) ≈ 0.276 (equivalently, tan 2θ₁₂ = 2), and the golden ratio prediction is sometimes enforced by A₅ [20]. The U_GR is a special case of the μ–τ symmetric mass matrix. For the case B ± C − D = A, the mixing matrix goes to the tri-bimaximal mixing matrix (U_TBM) with tan 2θ₁₂ = 2√2, and for the case B ± C − D = √2 A, the mixing matrix goes to U_GR with tan 2θ₁₂ = 2 [21,22]. When D = 0, the structure of the mass matrix predicts tan²θ₁₂ = m₁/m₂ ≈ 0.382, where m₁ and m₂ are two neutrino mass eigenvalues. To check the validity of GR neutrino mixings at high energy scale, we consider a large value of tan β > 60 in order to include large effects of radiative corrections in the calculation of neutrino masses and mixing angles and to satisfy the latest cosmological upper bound Σ|mᵢ| < 0.12 eV [23,24] in both normal and inverted hierarchical mass models. The paper is organised as follows. In section 2, we briefly outline the main points of the renormalisation group analysis for neutrino oscillation parameters with phases. In section 3, we present the numerical analysis of the RGEs for the GR neutrino mixing matrix. In section 4, we give results and discussion. In section 5, we give a summary and conclusion. II. RENORMALISATION GROUP ANALYSIS FOR NEUTRINO OSCILLATION PARAMETERS WITH PHASES We briefly present the main formalism for the evolution of the neutrino oscillation parameters [25–27] in the minimal supersymmetric extension of the standard model, the MSSM. Majorana neutrino masses arise from the dimension-five neutrino mass operator with coefficients K_gf, where l^C_L is the charge conjugate of a lepton doublet, ε is the totally antisymmetric tensor in 2 dimensions, and a, b, c, d ∈ {1,2} are SU(2)_L indices. The double-stroke letters denote lepton doublets and the up-type Higgs superfield in the MSSM.
The coefficients K_gf are of mass dimension −1 and are related to the Majorana neutrino mass matrix through m_LL = K ⟨H⟩², where ⟨H⟩ = 174 GeV is the vacuum expectation value of the Higgs field (v₀). The most plausible explanation for neutrino mass is given by the see-saw mechanism [28]. The neutrino mass matrix m_LL(t), which is generally obtained from the see-saw mechanism, is expressible in terms of K(t), the coefficient of the dimension-five neutrino mass operator, in the scale-dependent form m_LL(t) = v_u²(t) K(t), with t = ln(μ/1 GeV), where the vacuum expectation value (VEV) is v_u = v₀ sin β and v₀ = 174 GeV in the minimal supersymmetric standard model (MSSM). After diagonalisation of K(t), this relation can be written in terms of the mass eigenvalues as m_i(t) = v_u²(t) K_i(t) [13]. Now, considering the phases in the neutrino mixing matrix, we parameterise the Pontecorvo–Maki–Nakagawa–Sakata (PMNS) matrix in the standard form $U = \begin{pmatrix} c_{12}c_{13} & s_{12}c_{13} & s_{13}e^{-i\delta} \\ -s_{12}c_{23}-c_{12}s_{23}s_{13}e^{i\delta} & c_{12}c_{23}-s_{12}s_{23}s_{13}e^{i\delta} & s_{23}c_{13} \\ s_{12}s_{23}-c_{12}c_{23}s_{13}e^{i\delta} & -c_{12}s_{23}-s_{12}c_{23}s_{13}e^{i\delta} & c_{23}c_{13} \end{pmatrix}\,\mathrm{diag}\!\left(e^{i\alpha_{1}/2},\, e^{i\alpha_{2}/2},\, 1\right)$, where s_ij = sin θ_ij, c_ij = cos θ_ij, δ is the Dirac phase, and α₁ and α₂ are the first and second Majorana phases. Here, the three mixing angles are defined as tan θ₁₂ = |U_e2|/|U_e1|, tan θ₂₃ = |U_μ3|/|U_τ3| and sin θ₁₃ = |U_e3|. The one-loop RGEs for K_i(t) and v_u(t) in the basis where the charged-lepton mass matrix is diagonal, valid in the MSSM in the energy range from M_R to M_SUSY, are given in [9,29,30]; the corresponding one-loop RGEs for K_i and v₀ in the SM, valid in the energy range from M_SUSY to M_Z, are also given in [9,29,30]. In these equations g₁, g₂ are gauge couplings, and h_t, h_b, h_τ and λ are the top-quark, bottom-quark and tau-lepton Yukawa couplings and the SM quartic Higgs coupling, respectively. As the VEV can affect the mass terms in the RGEs, we have two possible sets of RGEs of neutrino masses, one with a scale-dependent VEV and the other with a scale-independent VEV. The RGEs of the neutrino mass eigenvalues for both conventions can be written in the generic form dmᵢ/dt = Fᵢ mᵢ [15,31], where the coefficient functions Fᵢ differ between the scale-dependent and scale-independent VEV cases and between the MSSM (μ ≥ m_s) and SM regimes. For the present analysis, we adopt the usual sign convention |m₂| > |m₁|. III. NUMERICAL ANALYSIS OF THE RGEs FOR THE GR NEUTRINO MIXING MATRIX For a complete numerical analysis of the RGEs given in the above section, we follow two consecutive steps: (i) bottom-up running [11] in the first step and then (ii) top-down running [12] in the next step. In the first step (i), the running of the RGEs for the third-family Yukawa couplings (h_t, h_b, h_τ) and the three gauge couplings (g₁, g₂, g₃) is carried out upwards from the top-quark mass scale. At the transition point from SM to MSSM, the appropriate matching conditions without threshold corrections are, as usual [33], h_t^MSSM = h_t^SM/sin β and h_{b,τ}^MSSM = h_{b,τ}^SM/cos β, with the gauge couplings continuous across the matching scale, where v₀ = 174 GeV is the VEV of the Higgs field [34]. For a large value of tan β, there should be SUSY threshold corrections, which lead to a modification of the down-type quark and charged-lepton Yukawa coupling constants at the matching condition at the SUSY breaking scale (m_s), of the form h_b^MSSM = h_b^SM/((1 + η̄_b) cos β̄) [17,35,36], where η̄_b is a free parameter that describes the SUSY threshold corrections, cos β̄ = (1 + η_l) cos β in the redefinition β → β̄, and η_l is a leptonic SUSY threshold correction parameter which is typically very small. Neglecting the effect of the leptonic threshold correction parameters in our parametrisation would simply mean that tan β̄ = tan β. The latest experimental input values for the physical fermion masses, gauge couplings and Weinberg mixing angle at the electroweak scale (m_Z) [37] are given in Table I.
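The SM-to-MSSM matching just described can be illustrated with a small helper. This is a sketch under the stated assumptions (the standard matching relations plus the down-type threshold factor, with leptonic threshold corrections neglected); the function name and the placeholder Yukawa inputs are not from the paper.

```python
import numpy as np

def sm_to_mssm_matching(h_t, h_b, h_tau, tan_beta, eta_b_bar=0.0):
    """Match SM Yukawa couplings onto the MSSM at the SUSY breaking scale m_s.

    Without threshold corrections: h_t -> h_t/sin(beta), h_{b,tau} -> h/cos(beta).
    With a down-type threshold correction, the bottom Yukawa picks up a factor
    1/(1 + eta_b_bar); leptonic corrections (eta_l) are neglected here, so
    tan(beta_bar) = tan(beta), mirroring the matching described in the text.
    """
    beta = np.arctan(tan_beta)
    h_t_mssm = h_t / np.sin(beta)
    h_b_mssm = h_b / ((1.0 + eta_b_bar) * np.cos(beta))
    h_tau_mssm = h_tau / np.cos(beta)
    return h_t_mssm, h_b_mssm, h_tau_mssm

# Illustrative call with tan(beta) = 68 and eta_b_bar = 0.01 (the paper's best
# case); the SM Yukawa inputs are placeholder numbers, not fitted values.
print(sm_to_mssm_matching(0.94, 0.016, 0.010, tan_beta=68, eta_b_bar=0.01))
```

Note how, for tan β ≫ 1, the down-type couplings are strongly enhanced by the 1/cos β factor, which is why large radiative effects appear for tan β > 60.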
The three gauge couplings, α₁(m_Z) = 0.016943, α₂(m_Z) = 0.033802 and α₃(m_Z) = 0.1179 at the low energy scale (m_Z), are calculated using the latest PDG data given in Table I and the SM matching relations. In terms of the normalised coupling constants (g_i), the α_i can be expressed as α_i = g_i²/4π, where i = 1, 2, 3 represents the electromagnetic, weak and strong coupling constants, respectively. We adopt the standard procedure to obtain the values of the gauge couplings at the top-quark mass scale from the experimental measurements at m_Z, using one-loop RGEs for simplicity and assuming the existence of one light Higgs doublet and five quark flavours below the m_t scale [11,32]. The one-loop evolution equation of the gauge coupling constants in the energy range m_Z ≤ μ ≤ m_t in the SM is dg_i/dt = (b_i/16π²) g_i³, with the appropriate SM coefficients b_i. Similarly, the Yukawa couplings are also evaluated at the top-quark mass scale using QCD–QED rescaling factors (η_i) in the standard fashion [32]. The values of the QCD–QED rescaling factors (η_i) and the vacuum expectation value (v₀) of the Higgs field are η_b = 1.53, η_τ = 1.015 and v₀ = 174 GeV, respectively [38,39]. The main concern in our work is to satisfy the latest cosmological upper bound on the sum of the absolute neutrino masses, Σ|mᵢ| < 0.12 eV [23,24], together with the generation of the reactor angle |U_e3| at low energy scale. IV. RESULTS AND DISCUSSION For the top-down running of the RGEs from high to low energy scale, the input values are listed in Table II. We have considered both normal and inverted hierarchical mass models for the numerical analysis. In the case of the normal hierarchical mass model, all the low energy neutrino parameters are found to lie within the 3σ range of the NuFIT data [3] with Σ|mᵢ| < 0.12 eV, as shown in Tables III–VI. We also check the case of the inverted hierarchical mass model, which fails to give the low energy neutrino oscillation parameters and Σ|mᵢ| < 0.12 eV within the experimental bounds. We also study the radiative generation of U_e3 with the initial conditions Δm²₂₁ = 0 at high energy scale and a non-zero value of m₃, but this fails to give the low energy experimental values of the neutrino oscillation parameters. These results are not presented in the present work. We observe that in both the normal and inverted hierarchical models, all neutrino mass eigenvalues slightly increase in magnitude with the decrease in energy scale, whereas the atmospheric mixing angle (s₂₃) and solar mixing angle (s₁₂) deviate slightly from the mixing angles at high energy seesaw scale, i.e., θ₂₃ > 45° for NH and θ₂₃ < 45° for IH. Our detailed numerical analysis shows that a larger value of tan β > 60 and a high energy scale (M_R) are preferred in order to satisfy the latest cosmological upper bound on the sum of the three absolute neutrino mass eigenvalues, Σ|mᵢ| < 0.12 eV. This result requires an additional SUSY threshold free parameter η̄_b in the range from −0.6 to +0.6 [17,35], arising from the threshold corrections of heavy SUSY particles [42–45]. It is found that all the neutrino oscillation parameters are consistent with the low energy experimental data for the four parameter choices summarised in Table III and Fig. 1(a), Table IV and Fig. 1(b), Table V and Fig. 1(c), and Table VI and Fig. 1(d). The required values of the coupling constants for the various cases are given in Table II. The main numerical results of our analysis of the neutrino oscillation parameters, with the three phases and SUSY threshold corrections, are given in Tables III–VI; some of the inputs used here are beyond the range of inputs assigned in Ref. [17].
Our numerical analysis is based on the evolution of the RGEs of the neutrino oscillation parameters and three phases, including the effect of a scale-dependent VEV and SUSY threshold corrections. We first found the most suitable value of the SUSY threshold parameter η̄_b in the range from −0.6 to +0.6 [17,35] that is compatible with the low energy neutrino data and the Planck bound Σ|mᵢ| < 0.23 eV [46]. Further analysis for the TBM case [47,48], considering the effect of the CP violating phases and SUSY threshold corrections, will be reported in a future communication. To conclude, the present investigation indicates the sensitivity of the value of Σ|mᵢ| to the origin of neutrino masses and mixing angles. It is relevant in the context of the information related to the absolute neutrino masses, which has been continuously updated with the recent Planck data on the cosmological upper bound on the sum of the three absolute neutrino masses, Σ|mᵢ| < 0.12 eV. Any neutrino mass model is bound to be consistent with these upper bounds on the absolute neutrino masses. While the existence of supersymmetric particles has been progressively ruled out at the LHC, the supersymmetry breaking scale (m_s) still remains an unknown parameter. We assume that the m_s scale may lie somewhere between 1 TeV and 14 TeV, within the scope of the LHC, and the present work is thus confined to the implications of the SUSY breaking scale. It is a continuation of our previous investigations [33,49,50] on neutrino masses and mixings with a varying SUSY breaking scale in the running of the RGEs in both normal and inverted hierarchical neutrino mass models. The focus of the present work is the question of the validity of GR neutrino mixing at high energy scale, with the variation of the m_s scale and the other input parameters tan β, η̄_b and the M_R scale. The method has further applications in other aspects of RGE analysis, such as the low energy magnification of neutrino mixings in the quark–lepton unification hypothesis at high energy scale in the SO(10) model [31,51,52], the radiative generation of the reactor mixing angle and the solar neutrino mass-squared difference at low scale, and the question of the radiative stability of neutrino mass models to discriminate between NH and IH models. These earlier good results may now be readdressed for further analysis at low energy scale, consistent with the latest Planck data on the cosmological upper bound on the sum of the three absolute mass eigenvalues. The two-loop RGEs for the gauge couplings in the range of mass scales m_s ≤ μ ≤ M_R take the standard form dg_i/dt = (g_i³/16π²) b_i + (g_i³/(16π²)²)[Σ_j b_ij g_j² − Σ_f a_if Tr(h_f†h_f)], with the appropriate one- and two-loop SUSY coefficients b_i, b_ij and a_if given in [33].
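The one-loop stage of this gauge-coupling running can be sketched numerically. The fragment below integrates dg_i/dt = b_i g_i³/(16π²) with the standard SM and MSSM one-loop coefficients (GUT normalisation for g₁); the simple Euler integrator, the step counts, and the starting values are illustrative assumptions, not the paper's actual numerics.

```python
import numpy as np

# One-loop beta-function coefficients (GUT normalisation for g1):
# SM: (41/10, -19/6, -7); MSSM: (33/5, 1, -3) -- standard textbook values.
B_SM   = np.array([41/10, -19/6, -7.0])
B_MSSM = np.array([33/5, 1.0, -3.0])

def run_gauge(g, t0, t1, b, steps=2000):
    """Integrate dg_i/dt = b_i g_i^3 / (16 pi^2) from t0 to t1, t = ln(mu/GeV)."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        g = g + dt * b * g**3 / (16 * np.pi**2)   # simple Euler step
    return g

# Run from m_t up to m_s with SM betas, then from m_s up to M_R with MSSM
# betas, for m_s = 1 TeV and M_R = 1e15 GeV as in the paper's best case.
g_mt = np.array([0.46, 0.64, 1.16])               # rough g_i(m_t) values
g_ms = run_gauge(g_mt, np.log(173.0), np.log(1e3), B_SM)
g_MR = run_gauge(g_ms, np.log(1e3), np.log(1e15), B_MSSM)
print(g_MR)
```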
2023-02-14T02:16:13.202Z
2023-02-13T00:00:00.000
{ "year": 2023, "sha1": "cd096bb46ad33697cc1cb289923bac1f6b5b4ea0", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "cd096bb46ad33697cc1cb289923bac1f6b5b4ea0", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
216262077
pes2o/s2orc
v3-fos-license
Direct inversion of circulation from tracer measurements – Part 2: Sensitivity studies and model recovery tests The direct inversion of the 2D continuity equation makes it possible to infer the effective meridional transport of trace gases in the middle stratosphere. This method exploits the information given both by the displacement of patterns in measured trace gas distributions and by the approximate balance between sinks and horizontal as well as vertical advection. Model recovery tests have shown that with the current setup of the algorithm, this method reliably reproduces the circulation patterns in the entire analysis domain from 6 to 66 km altitude. Due to the regularization of the inversion, velocities above about 30 km are more likely under- than overestimated. This is explained by the fact that the measured trace gas distributions at higher altitudes generally contain less information and that the regularization of the inversion pushes results towards zero. Weaker regularization would in some cases allow a more accurate recovery of the velocity fields. However, there is a price to pay in that the risk of convergence failure increases. No instance was found where the algorithm generated artificial patterns not present in the reference fields. Most information on effective velocities above 50 km is included in measurements of CH4, CO, H2O, and N2O, while CFC-11, HCFC-22, and CFC-12 constrain the inversion most efficiently in the middle stratosphere. H2O is a particularly important tracer in the upper troposphere/lower stratosphere. SF6 and CCl4 contain generally less information but still contribute to the reduction of the estimated uncertainties. Introduction Traditionally, the observational analysis of the strength of the Brewer-Dobson circulation relies on the concept of the mean age of stratospheric air (AoA; Waugh and Hall, 2002). The AoA is the average transport time of an air parcel from the stratospheric entry point to the measurement location and is estimated from the mixing ratio of an age tracer such as SF6. An alternative method, suggested by von Clarmann and Grabowski (2016, henceforth abbreviated vCG16), derives meridional circulation fields from two subsequent sets of global zonal mean vertically resolved pressure, temperature, and mixing ratios of multiple long-lived trace gases by direct inversion of the continuity equation. This method is called Analysis of the Circulation of the Stratosphere Using Spectroscopic Measurements (ANCISTRUS). The resulting quantities are effective 2D velocities, that is to say, those 2D velocities which best describe the observed temporal changes in air density and constituent mixing ratio distributions by transport. They thus include all effects caused by longitudinal or temporal correlations between mixing ratios and velocities. The relationship of these effective 2D velocities to 3D velocities is discussed in the appendices of vCG16 and von Clarmann et al. (2019, henceforth vC19). Beyond this, the ANCISTRUS-derived effective velocities currently also include a contribution by physical mixing and thus are not directly comparable to the 2D residual circulation in the transformed Eulerian mean framework. Similar to other applications of inverse modeling, such as the retrieval of atmospheric state variables from radiance measurements (e.g., Rodgers, 2000) or data assimilation (e.g., Ide et al., 1997), each iteration of the inversion scheme in ANCISTRUS consists of two steps: a forward modeling step and the inversion itself.
In the forward modeling step, the current guess of the effective velocity field is applied to an initial field of measured atmospheric state variables (air density and mixing ratios of species) to solve the predictive version of the continuity equation. Sinks of trace gases due to photolysis, OH chemistry and O(1D) chemistry are taken into account following the approach described in vC19. Along with this, the partial derivatives of each atmospheric state variable with respect to each element of the velocity vector are calculated. In the inverse step, the predicted field of the atmospheric state variables is compared with its measured counterpart, and the weighted residual is minimized by inverting the continuity equation. The weights are represented by the inverse covariance matrix, including measurement uncertainties and prediction errors. To keep the inversion stable, a constraint is applied. The natural application of this method is the analysis of the Brewer-Dobson circulation (Brewer, 1949; Dobson, 1956). ANCISTRUS avoids certain drawbacks of the hitherto common method using the mean age of stratospheric air (Waugh and Hall, 2002) as a diagnostic of the circulation. No age spectra (Andrews et al., 1999; Waugh and Hall, 2002) have to be assumed. Intrusion of mesospheric SF6-depleted air does not cause artificial "overaging" of the air (Stiller et al., 2012; Reddmann et al., 2001; Ray et al., 2017) because for gases without a stratospheric sink, ANCISTRUS takes all information from mixing ratio differences within the analysis domain and not from the absolute abundances. Age-of-air-based methods exploit the measured mixing ratio difference between the stratospheric entry point and the measurement location, and the air might have been depleted in SF6 during its potential detour through the mesosphere. The mesospheric loss of SF6 increases the difference and makes the air appear older than it actually is. In contrast, ANCISTRUS exploits the measured difference in the mixing ratios of SF6 between the endpoint and the starting point of a path element of the trajectory only in the domain considered. If the air parcel has re-entered the analysis domain after a possible detour through the mesosphere, any mesospheric loss has affected both the starting point and the endpoint of the path element and thus does not contribute to the difference. Finally, the method does not merely provide the integrated travel time of an air parcel but yields time-resolved results. Applying ANCISTRUS to trace gas mixing ratios measured with the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS; Fischer et al., 2008) results in circulation fields that include the expected features like tropical uplift, polar winter subsidence, stratospheric poleward transport, mesospheric pole-to-pole circulation, and elevated stratopauses (vC19). Furthermore, results proved to be stable in the sense that for each year, within the expected range of variability, similar circulation fields were found for any particular time of the year, although the estimates were independent from each other. The ANCISTRUS version used in this paper includes several updates with respect to the original method by vCG16. In particular, sinks of trace gases are considered and mixing coefficients are constrained to 0. The latter implies that the resulting velocities are effective velocities that also account for the effect of eddy mixing and physical diffusion. Further details are reported in vC19.
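The two-step iteration described above has the generic structure of a regularized Gauss-Newton scheme. The following sketch shows that structure only; the actual ANCISTRUS operators (forward model, Jacobian, covariances, and constraint) appear here as stand-in function arguments, and the update formula is the textbook regularized least-squares step rather than the code's literal implementation.

```python
import numpy as np

def invert_velocities(v0, y_meas, S_inv, forward, jacobian, R, n_iter=10):
    """Schematic regularized Gauss-Newton iteration for the effective velocities.

    v0       : initial guess of the stacked velocity vector (v_phi, v_z)
    y_meas   : measured atmospheric state at the later time (densities, vmrs)
    S_inv    : inverse covariance of measurement + prediction errors
    forward  : function v -> predicted state (solves the continuity equation)
    jacobian : function v -> partial derivatives of the predicted state w.r.t. v
    R        : regularization matrix (pushes the solution towards zero)
    """
    v = v0.copy()
    for _ in range(n_iter):
        y_pred = forward(v)            # forward-modeling step
        J = jacobian(v)                # sensitivities of the state w.r.t. velocities
        A = J.T @ S_inv @ J + R        # regularized normal equations
        b = J.T @ S_inv @ (y_meas - y_pred) - R @ v
        v = v + np.linalg.solve(A, b)  # inverse step: update the velocity field
    return v
```

The regularization term R is also what biases weakly constrained velocities (e.g., at high altitudes) towards zero, consistent with the underestimation noted in the abstract.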
Application to trace gas distributions obtained from other satellite missions, such as the Microwave Limb Sounder (MLS; Waters et al., 2006) or the Atmospheric Chemistry Experiment-Fourier Transform Spectrometer (ACE-FTS; Bernath et al., 2005), is under consideration. Since chemical decomposition has been newly implemented in the most recent ANCISTRUS version, the effect of the consideration of sinks is investigated in Sect. 2. The purpose of this investigation is to find out how much information on the circulation is provided by the sinks and how much is provided by the displacement of mixing ratio patterns. In order to further increase the confidence in the new inversion-based method, in this paper we validate the inverse method by model recovery tests. For these tests, mixing ratio distributions are modeled using known effective velocities. These mixing ratio distributions are then fed into ANCISTRUS to test how well the initial velocity field is recovered (Sect. 3). These tests are complemented by an assessment of the dependence of the results on the regularization strength (Sect. 4). Further, we study the sensitivity of the model to the availability of various trace gas fields (Sect. 5). In the Conclusions (Sect. 6) we discuss the power and the limitations of the method as discovered in this work and make suggestions for further work. Sinks versus transported structures Two mechanisms link mixing ratio distributions with the circulation and thus allow us to retrieve information on the circulation from measured mixing ratio distributions. One mechanism is the interplay between the chemical destruction of trace gases and advection. Without advection, chemical sinks would completely remove those gases which have their sources at Earth's surface from the stratosphere, and the fact that, in the long run and putting weak long-term trends aside, we observe approximately stationary trace gas distributions can only be explained by horizontal and/or vertical advection. Roughly speaking, with the assumption of a chemically stationary atmosphere in force, i.e., when mixing ratio distributions are assumed not to change with time, at each point of the atmosphere the loss by chemical decomposition is compensated for by advection of the related species. That is to say, if a molecule is destroyed, another molecule of this species must be brought to this point by transport if the stationarity condition is to be satisfied. This defines a circulation field corresponding to an equilibrium with respect to atmospheric composition. Mixing ratios changing with time can be understood as a perturbation of this equilibrium assumption, but the task could be conceived of as finding the equilibrium circulation where transport balances decomposition. Needless to say, this requires the modeling of sinks in the forward model that is used to predict the atmospheric state. In the current version of ANCISTRUS, the sinks of CCl4, CFC-11, CFC-12, CH4, CO, HCFC-22, H2O, and N2O are taken into account as described in vC19, while, due to its long stratospheric lifetime, SF6 is regarded as inert in the given analysis range. For CO and H2O source reactions are also considered. For reasons discussed above, ANCISTRUS is sensitive only to the decomposition of gases within the diagnosed latitude and altitude range but not to depletion at higher altitudes. Any depletion of, say, SF6 on its way through the mesosphere before it subsides again into the stratosphere thus does not affect the ANCISTRUS results.
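The equilibrium picture described above can be stated compactly. In a schematic form that neglects density weighting and mixing (simplifications of this sketch, not of the full ANCISTRUS formulation), a zonal mean mixing ratio of a gas with local chemical lifetime τ is stationary when advection balances chemical loss:

$$\frac{\partial \bar{\chi}}{\partial t} \;=\; -\,v_\phi\,\frac{\partial \bar{\chi}}{\partial \phi} \;-\; v_z\,\frac{\partial \bar{\chi}}{\partial z} \;-\; \frac{\bar{\chi}}{\tau} \;=\; 0 \quad\Longrightarrow\quad v_\phi\,\frac{\partial \bar{\chi}}{\partial \phi} \,+\, v_z\,\frac{\partial \bar{\chi}}{\partial z} \;=\; -\,\frac{\bar{\chi}}{\tau}.$$

Wherever the loss term is non-zero, a non-trivial velocity field is required to sustain the observed distribution; this is the equilibrium circulation referred to above.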
The other mechanism by which trace gas distributions convey information on the circulation is the transport of structures. If, say, the maximum of the mixing ratio of a certain gas is at a certain location one day and 5° further south a month later, this is best explained by a southward velocity of 5° per month, assuming that this solution satisfies the continuity equation globally. The amplitude of the structures transported is affected by the sinks discussed above. A widely used method that uses this information pathway is the analysis of the ascent rate in the tropical pipe by means of the water vapor tape recorder (Mote et al., 1996). As opposed to both these simplified views where information pathways are assessed in isolation, both mechanisms contribute to the full picture. ANCISTRUS thus exploits both information pathways. In order to test the sensitivity of ANCISTRUS with respect to each of them, the following tests were performed: as a reference, we use a regular ANCISTRUS result based on zonal mean MIPAS measurements of all nine trace gases from March to April 2005 (Fig. 1a) and for September to October 2010 (Fig. 2a). The choice of these years has no particular reason; the seasonal behavior of these years is well representative of that of the other years available. The months March-April and September-October were chosen because the velocity fields are more structured than at other times of the year and thus more interesting for test purposes. The circulation fields roughly match our expectations of a typical middle atmospheric meridional circulation. We see mesospheric and upper-stratospheric subsidence in local autumn. The mesospheric pole-to-pole circulation is more pronounced in September-October 2010 than in March-April 2005. Poleward transport in the lower and middle stratosphere is associated with the Brewer-Dobson circulation. Northern polar upwelling in March-April 2005 is particularly interesting: this is explained by the displacement of the polar vortex off the pole during the sudden stratospheric warming taking place at this time, which means that at the pole strongly subsided vortex air is replaced by less subsided air, resulting in a local (Eulerian) upwelling in a 2D perspective. Due to symmetry around the pole, in a 2D representation there is no horizontal velocity which could reproduce this phenomenon. This result, seeming counterintuitive at first glance, is not a weakness of the ANCISTRUS method but rather a characteristic of the representation of the 3D atmosphere in 2D in general. While the scientific interpretation of these fields of effective velocity is provided elsewhere (e.g., vC19), we are, within the framework of this technical study, not so much interested in the explanation of the atmospheric features but in the sensitivity of the inversion with respect to changes in the setup. Figures 1b and 2b show the respective ANCISTRUS run without the consideration of chemical sinks. The structures and circulation patterns described before are still present, but the velocities have changed in a quantitative sense. An additional feature of equatorward transport at about 55 km altitude, 30° S, emerged in March-April 2005. As expected, the relevance of sinks is largest at higher altitudes, but in general it is moderate in the sense that minor inaccuracies in sink strengths are not likely to perturb the general picture of the circulation.
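The pattern-displacement mechanism described at the start of this section can be caricatured with a toy estimator: shift one zonal mean snapshot in latitude until it best matches the later snapshot. This is emphatically not what ANCISTRUS does (it inverts the continuity equation instead), but it illustrates how displaced structures encode a velocity; the grid, the time interval, and all function names are assumptions of this sketch.

```python
import numpy as np

def displacement_velocity(chi_t0, chi_t1, lats, dt_months=1.0, max_shift=10):
    """Estimate a meridional effective velocity from the displacement of a
    tracer pattern between two zonal mean snapshots (illustrative only).

    chi_t0, chi_t1 : mixing ratio vs. latitude at the start/end of the interval
    lats           : latitude grid in degrees (uniform spacing assumed)
    Returns the shift (deg/month) that maximizes the match between the fields.
    """
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        shifted = np.roll(chi_t0, s)          # move the pattern by s grid cells
        score = np.corrcoef(shifted, chi_t1)[0, 1]
        if score > best_score:
            best_shift, best_score = s, score
    dlat = lats[1] - lats[0]
    return best_shift * dlat / dt_months      # degrees per month
```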
By feeding ANCISTRUS with identical trace gas fields for the beginning and the end of the time interval under consideration, the equilibrium circulation was inferred, where sinks are completely balanced by advection (Figs. 1c, d and 2c, d). We performed two variants of this test. In the first variant, ANCISTRUS was fed with the actual trace gas measurements for the first month and with the same distribution for the second month. The goal was to emulate steady-state conditions and to remove all information contained in the transport of mixing ratio patterns. Here the general picture changes dramatically. Several features of the reference case are not seen anymore. These include the strong subsidence over the South Pole, the response to the stratospheric warming over the North Pole, and poleward transport below 20 km in both hemispheres, in March-April 2005 (Fig. 1c). The tropical upwelling reaches up into the mesosphere. For September-October 2010 the pole-to-pole circulation is no longer present (Fig. 2c). Two fairly symmetric circulation cells with maximum poleward effective velocities at 50-60 km dominate the velocity field. Again, the tropical upwelling reaches up into the mesosphere. Since, strictly speaking, monthly mean mixing ratios do not represent a genuine steady state but rather a snapshot of a transient state, we have repeated this test using annual mean mixing ratio distributions. Without information on monthly changes in the atmospheric state and no seasonal information on the mixing ratio distribution, the inferred circulation is fairly symmetrical, regardless of sinks being estimated with lifetimes typical for March-April (Fig. 1d) or September-October (Fig. 2d). With this setup, the tropical upwelling again reaches up into the mesosphere, and the remaining patterns are two rather symmetric transport cells in each hemisphere, the stronger one around 50 km covering all hemispheric latitudes and a weaker one around 25 km, located in the subtropics. In summary, it is evident that both sources of information contribute to the resulting circulation field, and it is necessary to exploit both of them to infer a realistic circulation field. (Caption note for Figs. 1 and 2: the color scales refer to √((v_φ / (degree per month))² + (v_z / (km per month))²), with v_φ and v_z in units of degrees per month and kilometers per month; pink arrows refer to velocities higher than representable by the color scale chosen.) Model recovery tests vCG16 have presented two series of tests. In a first step, they tested the implementation of the transport scheme used. Tests were chosen intentionally simple in order to make it possible to judge whether the algorithm does what it is supposed to, without involving the need of a separate model. If a structure, e.g., a mixing ratio maximum, is transported northward by 5° in one month when the assumed uniform velocity field is 5° per month, the success of the test can be directly judged. Diffusive and dispersive characteristics can be tested by analysis of the size of the transported maximum and side wiggles created during the transport. Neither an indication of any malfunction nor otherwise conspicuous features were found in a long series of these forward model tests, of which a small subset was shown in vCG16. This kind of test is regarded as severe in the sense of Mayo (1996) because the probability that a flawed transport scheme would be detected is large. Thus, the likelihood that a model which passes these tests is flawed is small. 
Despite their simplicity, these tests are also general because the operations of the transport scheme are the same everywhere in the analysis space. We thus consider the transport scheme used by ANCISTRUS as valid. vCG16's second series of tests focused on the inversion scheme. Tests fully based on trace gas real measurements suffer from the fact that the corresponding true velocity fields are not known, and it is thus not clear what the resulting effective velocity fields should be compared to. Model recovery tests based on assumed velocity fields used as surrogate truth along with simulated measurements avoid this problem. Such a test is organized as follows. The assumed velocity field is taken as a reference field and is applied to a measured initial atmospheric state. The resulting solution of the forward transport problem renders the simulated state at a later time. Then the measured initial and the simulated later atmospheric state are fed into the inversion scheme as surrogate measurements, and the resulting velocity field, re- covered without using any information on the surrogate truth, is compared to that reference field used to simulate the later atmospheric state. For these tests, a sensible choice of the assumed velocity field is essential. Related tests by vCG16 were based on an ad hoc choice of the velocity field. Again, the broad functionality of the inversion scheme could be demonstrated, but a closer look revealed that these tests were only partially successful. The cause of the problems encountered was that the velocity fields used for testing were not solutions of the continuity equation. An inversion scheme that is based on the hard-wired constraint that the results must comply with continuity cannot reproduce velocity fields which were chosen in an ad hoc manner and are not compliant with continuity. Thus, spurious test results at the boundaries of the analysis field did not come unexpectedly and could not refute the validity of the algorithm. More severe tests must thus use a velocity field that satisfies the continuity equation. On the face of it, tracer and velocity fields from a chemistry-climate model or a chemistrytransport model would serve the purpose. The comparison of ANCISTRUS results with those from such a model, however, suffers from the fact that 2D velocities cannot be unambiguously compared to 3D model results because there is some room for interpretation of the 2D effective velocities. The latter include contributions from eddy transport and eddy mixing (see appendices in vCG16 and vC19). Furthermore, there exist some more technical problems: often the zonal mean mixing ratio fields from the climate model deviate in a sizable way from the MIPAS profile. In this case it is not clear what uncertainties shall be assigned to these mixing ratios from the model. Any rescaling of the assumed error variances would substantially change the weights of the measurements in the inversion, and the results would no longer be representative of the application of ANCISTRUS to MI-PAS zonal means. Beyond this, modeled trace gas fields are often less structured than the measured ones. The absence of prominent structures, however, means the absence of some useful information for ANCISTRUS, again leading to results not directly comparable to the application of ANCISTRUS measurements to MIPAS trace gas fields. T. von Clarmann and U. 
Grabowski: Inversion of circulation: sensitivity studies and model recovery The use of velocities from a model applied to MIPAS volume mixing ratios to generate mixing ratio fields at the second time step does not solve the problem either. The reason is this. As we have learned from the tests in Sect. 2, the velocities and the initial mixing ratio distributions cannot be chosen independently. For species with sinks in the stratosphere, it is not only the mixing ratio differences between the beginning and the end of a time step that depend on the velocities, but also the absolute concentrations and their spatial distributions. Inconsistencies between the velocity field and the mixing ratio distributions would thus lead to artifacts in the result of the test. A test where it is not possible to decide if any discrepancy between the reference velocity field and the retrieved velocity field is due to this type of artifact or to a possible malfunction of ANCISTRUS is not useful for validation purposes. Our way out is to use ANCISTRUS-generated effective velocity fields to simulate trace gas and density fields, apply ANCISTRUS to them, and test the resulting velocity field by comparison to the initial velocity field. The ANCISTRUSgenerated effective velocity fields satisfy the continuity equation. One might argue that this type of model recovery test is circular, but the circularity is related only to the forward transport model, which has already been tested independently. Further, this test of the inversion scheme takes place fully in a two-dimensional world and thus avoids any complication by the interpretation of 2D effective velocities and their relation to 3D model results. Results of our model recovery tests are shown in Fig. 3 for March-April 2005 (a, c, e) and for February-March 2010 (b, d, f) and in Fig. 4 for August-September 2010 (a, c, e) and September-October 2010 (b, d, f). Figures 5 and 6 with their reduced altitude range permit a closer look at the lower stratosphere. Panels a and b of the figures show the reference fields of effective velocity, panels c and d show the recovered fields, and the respective differences are shown in the panels e and f. The usual diagnostics were applied, and in none of the cases were any peculiarities detected. This provides evidence that the system of equations solved has an unambiguous solution. For the March-April 2005 case (Figs. 3a, c, e and 5a, c, e), ANCISTRUS reproduces all the patterns of the reference case: subsidence of mesospheric air into the stratosphere at Antarctic latitudes, stratospheric effective upwelling over the North Pole, the bifurcation of an upwelling circulation segment at 30 • N, 45 km altitude, and poleward transport in the southern hemispheric subtropics at 25 km altitude and in the northern hemispheric subpolar region at 15 km altitude. All these features are recovered at the correct altitudes and latitudes. At Antarctic latitudes around 55 km altitude effective velocities are underestimated by 8 %-9 %, while they are over-estimated at 40 • S, 40 km, by up to 20 %. The center of the circulation structure at tropical latitudes at around 45 km is shifted downward by 3 km. The largest relative deviations are found where the reference case contains circu-lation segments in opposite directions at adjacent altitudes. The Tikhonov regularization chosen is designed to keep velocity differences between adjacent model grid points small. 
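The closed-loop model recovery test described above can be summarized as a short harness. In the sketch below, transport and invert are stand-ins for the ANCISTRUS forward transport scheme and inversion scheme, and the relative deviation returned is only one convenient diagnostic, not a criterion used in this study.

```python
import numpy as np

def model_recovery_test(x_initial, q_reference, transport, invert):
    """Closed-loop model recovery test:
    1) propagate the measured initial state with a reference velocity field,
    2) invert the (initial, simulated later) state pair for effective velocities,
    3) compare the recovered field with the reference field."""
    x_later = transport(x_initial, q_reference)   # forward transport to the later month
    q_recovered = invert(x_initial, x_later)      # inversion, blind to q_reference
    rel_dev = np.abs(q_recovered - q_reference) / (np.abs(q_reference) + 1e-12)
    return q_recovered, rel_dev
```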
Thus, this kind of smoothing error observed where the inversion cannot fully resolve the reference field does not come unexpectedly. Also the structures of the slow circulation patterns in the tropopause region and the lower stratosphere are recovered well (Fig 5a, c, e). Effective poleward velocities at 6 and 9 km altitude and the northward effective velocities in northern midlatitudes at 15 and 18 km are underestimated in some places. For the February-March 2010 test case, the situation is very similar to the one discussed above (Figs. 3b, d, f and 5b, d, f). Again, we see southern polar subsidence and the bifurcation of the upwelling circulation segment at 30 • N, 45 km altitude. Contrary to March-April 2005, we see subsidence also over the North Pole, which is an expected phenomenon in polar winter vortices. All major circulation patterns are recovered at the correct latitudes and altitudes. Peak velocities in the mesospheric branches of the circulation are underestimated by about 25 %, but in large parts of the analysis domain the inversion is successful also in quantitative terms, particularly below 40 km. Again, the largest discrepancies are found where opposite circulation directions are found at adjacent grid points: due to the smoothing regularization, the inversion does not resolve the small circulation feature at 20 • S, 45 km altitude. A more detailed view on the lower altitudes (Fig 5b, d, f) shows that the branches of the Brewer-Dobson circulation are recovered well (20-40 • S at 21-27 km altitude and in northern midlatitudes at altitudes between 18 and 27 km). The latitudes, altitudes, and velocity values of maximum poleward transport agree well. Also the position, altitude, and strength of tropical upwelling are almost perfectly recovered. Tests for August-September 2010 and September-October 2010 (Figs. 4 and 6) confirm the findings of the first two tests. All patterns and structures are recovered at the correct latitudes and altitudes. For August-September 2010, this refers to the bifurcation of upward and downward effective velocities in the southern polar upper stratosphere near 40 km; the huge area of large southward velocities at 33-60 km altitude between equatorial latitudes and about 70 • S; the local maxima of southward effective velocities at 24-27 km at about 10 • S and at 6-9 km at southern midlatitudes; and the position of the upwelling within the tropical pipe around 10 • N, a large area of high northward velocities peaking between 54 and 60 km in northern midlatitudes and feeding into northern polar subsidence. Peak velocities are underestimated by about 20 %. Quantitative deviations between the reconstructed field and the reference field are largest where velocity gradients are largest. For example, the bifurcation of tropical upwelling velocities between 40 and 50 km is not well resolved, due to the smoothing characteristic of the regularization. The September-October 2010 circulation (Figs. 4b,d,f and 6b,d,f) is characterized by a strong northward mesospheric pole-to-pole circulation which is connected to southward transport between 30 and 50 km in the entire Southern Hemisphere. The general structure of this circulation system and the positions of peak velocities are almost perfectly recovered, but peak velocities are underestimated by about 20 %, again due to the smoothing regularization. 
Poleward velocities at 6 and 9 km in southern midlatitudes, 21-27 km between 20 and 60° N, and equatorward velocities at 15 km altitude in midlatitudinal and polar northern latitudes are all recovered. Most importantly, in none of the tests has the inversion scheme created artificial patterns which were not present in the reference case. No major pattern was removed. The small-scale circulation feature at 20° S, 45 km altitude in February-March 2010 (Fig. 3b, d, f) is the only instance of a feature in the reference field which has not been reproduced. The role of the regularization strength In the previous section, the fact that large velocities are not fully recovered is attributed to the regularization of the inversion. ANCISTRUS uses a Tikhonov (1963)-type regularization which leads to the following objective function being minimized:
(x − F(q; x_0))^T S_r^{-1} (x − F(q; x_0)) + q^T L_1^T Γ L_1 q → min. (1)
(x − F(q; x_0)) is the residual between the measured field x of atmospheric state variables and those predicted using the initial field x_0 and an assumed field of velocities q. All these fields are expressed as vectors of length m. S_r is the m × m covariance matrix characterizing the uncertainties of the residual, under consideration of uncertainties of x and x_0. L_1^T L_1 is the n × n regularization term, where L_1 is a first-order difference matrix of dimension (n−1) × n, expressing the vertical and horizontal differences of adjacent values of horizontal and vertical velocities. These velocities are represented by the n-dimensional vector q. Γ is a diagonal (n−1) × (n−1) matrix which controls the strength of the regularization and balances the units. The purpose of the regularization term is to prevent horizontal or vertical gradients of horizontal and vertical velocities from becoming unreasonably large, a typical characteristic of unstable, oscillating solutions of ill-posed inverse problems. It goes without saying that the choice of the entries of Γ directly affects the solution. Thus it is in order to test how sensitive the resulting velocity fields are to the choice of Γ. We use September-October 2010 as a test case because the large velocity contrasts are a particular challenge for a Tikhonov-type smoothing regularization. (Fig. 7d). An even weaker regularization of (c_1 × 3.0 × 10^-4; c_2 × 3.0 × 10^-3) gives room to some instabilities at the boundaries of the domain, particularly at the South Pole between 15 and 30 km altitude (Fig. 7f). Thus, we consider the nominal regularization strengths as adequate for routine processing. The damping of peak velocities is the price to pay for a robust inversion. With rare cases of non-convergence, good data coverage can be achieved, structures and patterns can safely be recovered, and outside the regions of peak velocities the results are robust even in a quantitative sense. The optimal choice of the regularization strength, however, is application-dependent, and for particular case studies, where convergence turns out not to be a problem, a weaker regularization may be more adequate. Sensitivity tests For several reasons, ANCISTRUS results are expected to depend on the selection of species used. First, species with different concentration profiles carry information on the circulation at different altitudes. Thus, omitting, e.g., CO and CH4 and using only species with sizable concentrations in the lower stratosphere, like CCl4 or CFC-11, will lead to heavily degraded results in the mesosphere. 
Second, the more species we have in general, the weaker the effect of regularization will be and thus more information can be retrieved, even if the additional information does not change the result in any appreciable manner. Thus, the sensitivity of results with respect to the omission of single species is worth testing. A low sensitivity to the omission of a single species shows the robustness of the methodology. The corresponding test was set up as follows: first, an ANCISTRUS run was performed for a complete set of species. Then, a series of ANCISTRUS runs was performed, each with one gas omitted, similar to a jackknife method. The difference in velocities caused by the omission of a candidate species is a measure of the sensitivity of the retrieval to this species. These tests were performed for March-April 2005 (left panels of the relevant figures) and September-October 2010 (right panels of the relevant figures) for the omission of CFC-11, CFC-12, and HCFC-22 (Fig. 8), CCl4, SF6, and H2O (Fig. 9), as well as N2O, CH4, and CO (Fig. 10). CFC-11, CFC-12, and HCFC-22 contribute most in the polar spring stratosphere, where gradients between regions of old air depleted in these species and young air rich in these species are large (Fig. 8). Since mixing ratios of these species are low in the upper stratosphere and above, these species contribute most information below about 40 km. Particularly, CFC-11 contains considerable information on meridional effective velocities in tropical and midlatitudinal regions near 30 km in March-April 2005 (Fig. 8a). Its omission changes these velocities by 20 %-30 %. In contrast, in the region of apparent updraft in northern polar regions, its influence is only about 10 %. The sensitivity of horizontal velocities near 30 km to CFC-11 is confirmed by the September-October 2010 test case. The effect of the omission of CFC-12 is generally much smaller than that of CFC-11 (Fig. 8c, d). This does not necessarily mean that this species carries less information but that its information is more consistent with that of the other species. For the analysis of the major warming event in the northern polar region in March-April 2005 (Fig. 8c), CFC-12 is, in contrast to CFC-11, more relevant for the inference of vertical than horizontal effective velocities. The same is true for HCFC-22 (Fig. 8f). In this particular test case, HCFC-22 is particularly important at altitudes of 6-12 km at southern midlatitudes. CCl4 and SF6 broadly contribute in the same regions as the species discussed before, but their contributions are generally smaller because measurement uncertainties are larger for these species and their weight in the inversion is thus lower (Fig. 9a-d). Except for polar winter conditions, the meridional effective velocities seem to be more sensitive to the omission of these species than the vertical effective velocities. Both in March-April 2005 and September-October 2010, CCl4 contributions are largest to horizontal velocities in the altitude regions of 21-30 km and 6-9 km (Fig. 9a, b). The contributions of SF6 are even smaller. CH4 and CO provide the bulk of information on the circulation in the upper stratosphere and mesosphere (Fig. 10c-f). There, contributions exceed 50 % in wide regions. However, similar to N2O, they do also provide a lot of information at lower altitudes, which can hardly be appreciated due to the large range of values represented by the color scales of the figures. 
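A minimal sketch of the jackknife procedure described above is given below; run_ancistrus is a hypothetical stand-in for a complete inversion run, and the relative-difference metric is only one possible way to quantify the impact of omitting a gas.

```python
import numpy as np

GASES = ["CCl4", "CFC-11", "CFC-12", "CH4", "CO", "HCFC-22", "H2O", "N2O", "SF6"]

def species_sensitivity(run_ancistrus, gases=GASES):
    """Leave-one-species-out sensitivity test: rerun the inversion with one gas
    omitted at a time and record the relative change of the velocity field."""
    q_all = run_ancistrus(gases)                     # reference run with all species
    impact = {}
    for gas in gases:
        q_without = run_ancistrus([g for g in gases if g != gas])
        impact[gas] = np.abs(q_without - q_all) / (np.abs(q_all) + 1e-12)
    return impact
```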
Overall, the effects of omission of certain species are generally minor to moderate and confined to specific regions, except for the upper stratosphere and mesosphere, where only a few species carry information, viz., H 2 O, N 2 O, CH 4 , and CO. The robustness of the inversion with respect to the omission of single species up to about 40 km indicates that either the MIPAS mixing ratio fields are not biased or that AN-CISTRUS is not overly sensitive to such biases. Since a major amount of information exploited by ANCISTRUS is not contained in the mixing ratios themselves but in the mixing ratio differences, biases, if existing, tend to cancel out. One might argue that the inclusion of species which contribute only little information, such as SF 6 or CCl 4 , is useless. Admittedly the information provided by these species does not change the results very much. However, the inclusion of these species reduces the estimated uncertainty of the retrieved effective velocities. Figure 11 shows the estimated standard deviations, representing the uncertainty of the retrieved horizontal (panels a, c, e) and vertical (panels b, d, f) velocities due to the propagated uncertainties of the mixing ratio fields, for an ANCISTRUS run with all gases included (panels a, b) and without CCl 4 (panels c, d). 1 The estimated uncertainties are reduced by an appreciable amount, mainly in the lower tropical stratosphere. This is more pronounced for the horizontal than for the vertical velocities. In the tropical middle stratosphere at around 30 km altitude, the inclu- sion of CCl 4 increases the altitude region where the standard deviation of v φ is below 0.06 • per month considerably. For tropical middle-stratospheric vertical velocities the altitude range where standard deviations are below 20 m per month increases similarly. The omission of N 2 O, chosen as an example of a gas which contributes more information, has a larger impact (Fig. 11e, f). Particularly at altitudes between about 30 and 50 km, both at polar and tropical latitudes, the standard deviations are up to a factor of 2 higher when N 2 O is omitted. Conclusions ANCISTRUS is a method to infer stratospheric circulation from measured tracer mixing ratios via the inversion of the 2D continuity equation. The primary area of application of this method is the investigation into the structure and possible changes in the Brewer-Dobson circulation. In order to validate ANCISTRUS, a series of tests have been performed. By comparison of its application to steady-state conditions to application with deactivated chemical sinks, the contributions of two information pathways were isolated. In the steady state, ANCISTRUS recovers a field of effective velocities which just compensates for the chemical sinks by advection. In contrast, the application with the sinks turned off exclusively exploits the information which is contained in the displacement of patterns of mixing ratios. It was shown that both mechanisms are important to retrieve the full picture and that the latter information pathway is particularly important. Model recovery tests were performed to test if AN-CISTRUS is able to retrieve a known assumed field of ef- fective velocities that was used to generate simulated mixing ratio measurements. Up to about 30 km altitude, AN-CISTRUS results have been shown to be fairly accurate in a fully quantitative manner. 
Above, less measurement information is available, and the peak effective velocities deviate from the reference velocities by up to several tens of percent. Still, structure and patterns are perfectly reproduced and can be regarded as robust. Only patterns of very small scales are not resolved. In no case did ANCISTRUS generate artificial structures not present in the reference data. The prevailing underestimation of peak velocities is attributed to the regularization term in the retrieval equation, which pulls values towards 0 in the case of insufficient measurement information. The choice of the regularization strength in the ANCISTRUS version tested here was conservative. A rather strong regularization was chosen to avoid ANCISTRUS producing artificial circulation patterns and to safely achieve convergence of the iteration. According to the terminology of test theory, it had been decided to instead accept type I errors, i.e., to reject a true result, and to safely exclude type II errors, i.e., nonrejection of a false result. The results of this study, however, indicate that there may still be room to fine-tune the regularization in order to achieve less damping of the peak velocities at higher altitudes in a fully quantitative sense. This, however, is deferred to a future paper. Finally, the information content of the various trace gases used so far in ANCISTRUS applications was investigated. It was found that gases whose omission changes the results only marginally still provide information in the sense that their inclusion reduces the estimated uncertainty of the resulting velocity field. Further, ANCISTRUS proved quite robust with respect to the omission of any single gas. In summary, with respect to the scientific analysis of patterns and structures, we regard the ANCISTRUS algorithm in its current setup as fit for purpose. Data availability. The data used and presented in this paper are available via the KITOpen depository under ID 1000127781, https://doi.org/10.5445/IR/1000127781 (Grabowski et al., 2020). Author contributions. TvC initiated the study, suggested the test procedures, provided the concept and software to deal with chemical sinks, and wrote the initial draft of the paper. UG performed and visualized the tests and was responsible for the maintenance of the ANCISTRUS software and its further development since its original publication in 2016. Both authors discussed and analyzed the results and worked on the final text of the paper. Competing interests. Thomas von Clarmann is editor of Atmospheric Chemistry and Physics but has not been involved in the evaluation of this paper. Beyond this, there is no potential conflict of interest.
2020-03-19T10:33:47.356Z
2020-01-01T00:00:00.000
{ "year": 2020, "sha1": "50319c5e15dbebdaa0c3fc17100524bacbc063c9", "oa_license": "CCBY", "oa_url": "https://acp.copernicus.org/articles/21/2509/2021/acp-21-2509-2021.pdf", "oa_status": "GREEN", "pdf_src": "Anansi", "pdf_hash": "99dd58350f2362f209d76c5e1112c64f42ad4baf", "s2fieldsofstudy": [ "Environmental Science" ], "extfieldsofstudy": [] }
116425136
pes2o/s2orc
v3-fos-license
A Highly Efficient Single-Phase Three-Level Neutral Point Clamped ( NPC ) Converter Based on Predictive Control with Reduced Number of Commutations This paper proposes a highly efficient single-phase three-level neutral point clamped (NPC) converter operated by a model predictive control (MPC) method with reduced commutations of switches. The proposed method only allows switching states with none or a single commutation at the next step as candidates for future switching states for the MPC method. Because the proposed method preselects switching states with reduced commutations when selecting an optimal state at a future step, the proposed method can reduce the number of switchings and the corresponding switching losses. Although the proposed method slightly increases the peak-to-peak variations of the two dc capacitor voltages, the developed method does not deteriorate the input current quality and input power factor despite the reduced number of switching numbers and losses. Thus, the proposed method can reduce the number of switching losses and lead to high efficiency, in comparison with the conventional MPC method. Introduction Recently, multilevel converters have become popular in a variety of high-power systems owing to their low voltage stress, improved waveform qualities, and low electromagnetic interference (EMI) compared to two-level converters [1,2].Among several kinds of multilevel converters, three-level neutral point clamped (NPC) converters with relatively simple configurations have been realized for many application areas.In addition to three-phase NPC converters, single-phase three-level NPC converters have been employed for high-speed traction systems as well.In order to control single-phase three-level NPC converters, traditional carrier-based pulse width modulation (CBPWM) methods combined with linear proportional and integral (PI) controllers have been investigated to synthesize ac sinusoidal current waveforms with three-level NPC converters.Aside from their adjustable ac voltage and current synthesis, the NPC converters require balancing of the two dc capacitor voltages because of their structure, which has two split dc capacitors in the dc link.As a result, CBPWM methods with offset voltage injection to remove imbalance of neutral point (NP) voltage in the NPC converters have been often used [3][4][5][6][7]. Recently, model predictive control (MPC) methods have been studied for numerous power converters including three-level NPC converters [8][9][10].There have been several studies on MPC algorithms for single-phase NPC converters as well as three-phase NPC converters [11][12][13].In the MPC methods for single-phase NPC converters, a cost function to determine an optimal switching state generally consists of two terms combined with a weighting factor not only to control both the ac sinusoidal currents but also to balance the two capacitor voltages.The ac sinusoidal current is controlled by changing the converter voltage levels, whereas the NP voltage balance is adjusted by using redundant switching states that yield the same voltage level. 
The conventional MPC method selects an optimal switching state among nine switching states allowed by the single-phase three-level NPC converter on the basis of a cost function considering the ac source current and the NP voltage balance.Consideration of all possible switching states in the conventional method can choose a switching state involved in many commutations as an optimal switching state for the next step, which can lead to an increased number of switchings and corresponding switching losses [14][15][16][17][18].In addition, the conventional MPC method changes the optimal switching state by evaluating the capacitor voltage balance term using the redundant switching states to equal the two capacitor voltages [19,20] owing to a slight voltage difference even when the converter does not require a change in the voltage level.Thus, this operation can increase the number of switchings and switching losses as well [21,22].Several trials to reduce switching losses based on the model predictive control methods have been addressed for a variety of power converters in literatures.In [23,24], approaches to reduce switching losses of matrix converters have been addressed.Ref. [23] proposed a switching loss reduction technique by adding an additional term related with a number of future commutations to a cost function used to control the matrix converter.In [24], a trial to decrease switching losses of the matrix converter has been presented, where a cost function includes an extra term directly representing switching losses at next step by calculating switch currents and switch voltages.In [25], a model predictive control method for modular multilevel converters (MMCs) has been developed with a cost function which is aimed at the elimination of the MMC circulating currents, regulating the arm voltages, and controlling the ac-side currents.In addition, this strategy tried to reduce power losses by decreasing the submodule switching frequency.In [26,27], reduction techniques of switching losses for two-level voltage source inverters have been presented.Ref. [26] proposed a switching strategy based on the model predictive control method to clamp one phase with the largest load current among the three legs in the voltage source inverter every sampling period, which can successfully reduce switching losses of the voltage source inverter.In addition, a model predictive control method for the voltage source inverter has been developed to reduce switching losses by injecting future zero-sequence voltage [27].This approach decreased the switching losses by implementing optimal discontinuous pulse patterns to stop switching operations at vicinity of peak values of load currents.However, there has not been, to the authors' best knowledge, tried to reduce switching losses, using a trade-off between switching losses and capacitor voltage balancing in the three-level NPC converters, although several trials to reduce switching losses based on the model predictive control methods have been addressed for a variety of power converters. 
In this paper, a highly efficient algorithm with a reduced number of switching and low switching losses for single-phase three-level neutral point clamped (NPC) converters is proposed based on a model predictive control (MPC) method with a decreased number of commutations of switches.The proposed method pre-excludes, from the candidates for possible future switching states, the switching states that yield more than two commutations in the next sampling period.As a result, the proposed technique can reduce the number of switchings and switching losses by utilizing switching states involving no commutation or only one commutation during every sampling instant for single-phase three-level NPC converters.In addition, the developed method does not deteriorate the input current quality or input power factor despite the reduced switching numbers and losses.Although the proposed method slightly increases the peak-to-peak variations of the two dc capacitor voltages at the expense of reduced commutation, the increased voltage variation is not high.Thus, the proposed method can obtain high efficiency and low switching losses at the expense of a slightly increased peak-to-peak variation of the NP voltage.The performance of the proposed method with a reduced number of switchings and higher efficiency is evaluated in terms of the total harmonic distortion (THD) and peak-to-peak variations of the capacitor voltages.Simulations and experimental results are presented to verify the effectiveness of the proposed method. Single-Phase Three-Level NPC Converter and Model Predictive Control Method Figure 1 shows a circuit diagram for the single-phase three-level NPC converter.As shown in Figure 1, the single-phase three-level NPC converter has an input inductor L s and resistor R s as an ac side filter, as well as two capacitors C 1 and C 2 at the dc side.In addition, v c1 and v c2 are the dc voltage of each capacitor, and R L is a load resistor.Switches S aj and S bj (j = 1, 2, 3, 4) are Insulated Gate Bipolar Transistors (IGBTs) at the a-phase and b-phase, respectively.The switch states at each phase, produced by the converter, can be defined as a function of the switching status of the two upper devices as: The two switches S x1 and S x3 operate complementarily.Similarly, S x2 and S x4 work in a complementary manner.As a result, the switching status of the two lower devices is automatically determined by the upper switches.Owing to possible combinations of the switching states of (1) in the a and b phases, a total of nine operating states can be generated by the single-phase three-level NPC converter.On the basis of nine operating states, the phase switching state, upper device switching status, and converter input voltage v ab are listed in Table 1.The nine operating states yield five voltage levels for the converter input voltage v ab , which provides the single-phase NPC converter with redundancy. 
As shown in Table 1, the two states (1, 0) and (0, −1) for (S_a, S_b) are redundant switching states that apply the same voltage level to the converter input terminal of the single-phase three-level NPC converter, assuming that the two capacitor voltages are well balanced. Likewise, the states (0, 1) and (−1, 0) for (S_a, S_b) are also redundant because they yield the same converter input voltage v_ab. These redundancies can be utilized to balance the two capacitor voltages v_c1 and v_c2. The currents i_u and i_l shown in Figure 1 can be expressed using the switching status and the source current i_s as in (2) and (3). Thus, they can be obtained without additional measurements [16]. 
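Since the numbered equations are not reproduced in this text, the sketch below writes the corresponding bookkeeping in a common textbook form: the five-level converter input voltage follows from the phase states under the balanced-capacitor assumption, and the expressions for the dc-rail currents i_u and i_l in terms of the switching status and i_s are an assumption for illustration, not a verbatim copy of Equations (1)-(3).

```python
VDC = 150.0          # total dc-link voltage (illustrative value only)
STATES = [(sa, sb) for sa in (1, 0, -1) for sb in (1, 0, -1)]   # nine operating states

def converter_voltage(sa, sb, vdc=VDC):
    """Converter input voltage vab for balanced capacitors:
    five levels {-vdc, -vdc/2, 0, +vdc/2, +vdc}."""
    return (sa - sb) * vdc / 2.0

def rail_currents(sa, sb, i_s):
    """Currents delivered to the positive (i_u) and negative (i_l) dc rails by the
    converter legs, expressed through the switching status and the source current."""
    i_u = ((sa == 1) - (sb == 1)) * i_s
    i_l = ((sa == -1) - (sb == -1)) * i_s
    return i_u, i_l

# Example: the redundant states (1, 0) and (0, -1) give the same vab = +VDC/2,
# but they load the dc rails differently, which is what allows NP balancing.
for state in [(1, 0), (0, -1)]:
    print(state, converter_voltage(*state), rail_currents(*state, i_s=5.0))
```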
The capacitor voltage dynamics of the dc link are calculated by using differential equations: Using a constant sampling period Ts, the capacitor voltage dynamics in the discrete-time domain are described as: Using (6), Equations (4) and (5) can be expressed in the discrete time domain as: The input current of the ac side shown in Figure 1 is expressed in the continuous time domain as: Equation (9) is expressed in the discrete time domain as: The ac source current at the next step, i_s(k + 1), in (10) can have five possible movements owing to the five possible voltage levels for v_ab(k). The single-phase three-level NPC converter needs to balance the two capacitor voltages by manipulating the phase switching states S_a(k) and S_b(k) shown in (7) and (8), as well as control the source current by changing the converter input voltage v_ab(k) in (10). As a result, the cost function with two terms, one for the ac source current control part and one for the neutral point (NP) voltage control part of the two capacitor voltages, is: where λc represents the weighting factor of the capacitor voltage balancing term in the cost function. Moreover, the future ac reference current can be expressed with past and present currents from a Lagrange extrapolation as [28][29][30][31]: where i_s*(k) is the present reference current, and i_s*(k − 1) and i_s*(k − 2) are the reference values of the one-step and two-step past ac source currents, respectively. The ac sinusoidal current is controlled by changing the converter voltage levels, whereas the NP voltage balance is adjusted by using redundant switching states that yield the same voltage level. As a result, the conventional MPC method changes the optimal switching state by evaluating the capacitor voltage balance term using the redundant switching states to equalize the two capacitor voltages owing to a slight voltage difference, even when the converter does not require a change in the voltage level. Thus, this operation can increase the number of switchings and the switching losses as well. 
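Putting the pieces together, one MPC evaluation step can be sketched as follows. The forward-Euler current prediction, the neutral-point current expression, and the equal-capacitance simplification are common simplified forms standing in for the paper's Equations (4)-(12), and the function names are illustrative.

```python
def predict_step(i_s, dv_c, sa, sb, v_s, Ts, Ls, Rs, C, Vdc):
    """One-step prediction for candidate state (sa, sb).
    dv_c = vc1 - vc2 is the neutral-point imbalance; C1 = C2 = C is assumed."""
    v_ab = (sa - sb) * Vdc / 2.0                    # one of the five voltage levels
    i_s_next = i_s + (Ts / Ls) * (v_s - Rs * i_s - v_ab)
    i_np = ((sa == 0) - (sb == 0)) * i_s            # current injected into the neutral point
    dv_c_next = dv_c - (Ts / C) * i_np              # imbalance dynamics for equal capacitors
    return i_s_next, dv_c_next

def cost(i_ref_next, i_s_next, dv_c_next, lam_c=0.5):
    """Cost with a current-tracking term and a capacitor-balancing term weighted by lam_c."""
    return abs(i_ref_next - i_s_next) + lam_c * abs(dv_c_next)

def extrapolate_reference(i_ref_k, i_ref_km1, i_ref_km2):
    """Lagrange extrapolation of the sinusoidal current reference one step ahead."""
    return 3.0 * i_ref_k - 3.0 * i_ref_km1 + i_ref_km2
```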
Proposed MPC Method Based on Voltage Tolerance Band The conventional MPC method selects an optimal switching state among nine switching states allowed by the single-phase three-level NPC converter on the basis of a cost function considering the source current and the NP voltage balance. Consideration of all possible switching states in the conventional method can lead to choosing a switching state involving many commutations as the optimal switching state for the next step. In addition, the conventional MPC method changes the optimal switching state by evaluating the capacitor voltage balance term using the redundant switching states to equalize the two capacitor voltages owing to a slight voltage difference, even when the converter does not require a change in the voltage level. Table 2 illustrates the number of commutations involved in switch transitions from the current step to the next step, which varies from zero to four. A switching operation with a number of commutations equal to one in Table 2, for example, implies that one switch turns off and another switch turns on at a switching instant. Likewise, two switches turn off and two turn on at a switching moment when the switching operation corresponds to a number of commutations equal to two in Table 2. The conventional method, which selects a next-step switching state depending on the cost function, does not consider the number of commutations. 
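The commutation counts summarized in Table 2 can be tabulated directly from the switching-state definitions. In the sketch below, the mapping of each phase state to its two upper gate signals follows the complementary-pair convention described in Section 2 and is assumed here for illustration, so the printed numbers are a reconstruction rather than a verbatim reproduction of Table 2.

```python
# Assumed mapping of phase state Sx to the upper devices (Sx1, Sx2);
# the lower devices switch complementarily and are not counted separately.
UPPER_GATES = {1: (1, 1), 0: (0, 1), -1: (0, 0)}
ALL_STATES = [(sa, sb) for sa in (1, 0, -1) for sb in (1, 0, -1)]  # nine operating states

def commutations(state_from, state_to):
    """Number of upper devices (out of four) that change between two operating states."""
    gates_from = UPPER_GATES[state_from[0]] + UPPER_GATES[state_from[1]]
    gates_to = UPPER_GATES[state_to[0]] + UPPER_GATES[state_to[1]]
    return sum(a != b for a, b in zip(gates_from, gates_to))

# Tabulate the commutation count for every transition (a Table 2-like matrix).
for s_from in ALL_STATES:
    print(s_from, [commutations(s_from, s_to) for s_to in ALL_STATES])  # counts range 0..4
```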
Therefore, it is noted that the two capacitor voltages are tightly balanced by repeatedly using the redundant switching states, at the expense of an increased number of switchings in the conventional MPC method.The proposed method pre-excludes, from the candidates for possible It is seen from Figure 2a that the conventional method, during the period with the converter voltage v ab fixed to V dc /2, repeatedly changes the switching states corresponding to an operating status between (1, 0) and (0, −1).This is the redundant state with respect to each other, although the switch transitions do not be required in terms of the ac source current control.These switching operations involve two commutations at every switching instant, as shown in Table 2.As a result, the number of switching operations substantially increases, whereas the two capacitor voltages perfectly match.Similarly, the simulation waveforms obtained by the conventional MPC method, especially during the period with the converter input voltage v ab fixed to −V dc /2, are depicted in Figure 2b.It is seen Energies 2018, 11, 3524 7 of 28 that the switch transition repeatedly occurs between (0, 1) and (−1, 0) in terms of the operating status, which also involves two commutations at every switching instant, as shown in Table 2. Therefore, it is noted that the two capacitor voltages are tightly balanced by repeatedly using the redundant switching states, at the expense of an increased number of switchings in the conventional MPC method.The proposed method pre-excludes, from the candidates for possible future switching states, the switching states that yield more than two commutations in the next sampling period.As a result, the proposed technique can reduce the number of switchings and the switching losses by utilizing switching states involving no commutation or only one commutation at every sampling instant for single-phase three-level NPC converters.Table 3 shows the switching states allowed in the proposed method, which are states with the number of commutations restricted to zero or one Table 3. Switching states allowed in proposed method. 
Figure 3 shows simulation waveforms obtained by the proposed MPC method for a single-phase three-level NPC converter. It is seen from Figure 3a that the proposed method, during the period with the converter voltage vab fixed to Vdc/2, does not change the switching states. This is because the operating status (0, −1), which is redundant to (1, 0), is not a possible next state when the current operating status is (1, 0). Similarly, simulation waveforms obtained by the proposed MPC method, especially during the period with the converter input voltage vab fixed to −Vdc/2, are depicted in Figure 3b. It is also seen that there is no switch transition because the operating status (−1, 0), which is redundant to (0, 1), is not a possible next state when the current operating status is (0, 1). As a result, the proposed method can reduce the number of switchings and the corresponding switching losses, whereas an NP voltage imbalance between the two capacitor voltages occurs. The NP voltage imbalance that occurs during the periods when the converter voltage is fixed at Vdc/2 or −Vdc/2 is resolved by selecting switching states that eliminate the imbalance afterward. Figure 4 depicts simulation waveforms obtained by the proposed MPC method during the period when the converter voltage vab oscillates between Vdc/2 and Vdc. It is seen that the NP voltage imbalance is solved by the proposed algorithm, where the optimal states are 5, 4, 6, 4, 5, and so on, as shown in Figure 4. Figure 5 shows the capacitor voltage behavior in the switching states. Because switching states 4, 5, and 6 can increase or decrease the upper and lower capacitor voltages, the proposed method can successfully eliminate the NP voltage imbalance quickly. The performance of the MPC methods, owing to their inherent operational principle, is strongly influenced by the sampling frequency. The numbers of switchings by the conventional MPC and proposed methods as functions of the sampling period are shown in Figure 6. It is seen that the number of switchings of the proposed method is lower than that of the conventional MPC method for all considered sampling periods. Increasing the sampling frequency increases the number of switchings, leading to an increasing difference in the number of switchings obtained from the two methods. The conventional MPC and the proposed methods are compared in terms of the THDs of the source current and the peak-to-peak capacitor voltages vs. the sampling periods shown in Figure 6. 
It is observed that the proposed method results in almost the same THDs of the source current as those of the conventional MPC method. In addition, the peak-to-peak capacitor voltages of the proposed method are slightly higher than those of the conventional method, at the expense of a decreased number of switchings. Thus, it can be concluded that, compared to the conventional MPC method, the proposed method can lead to a reduced number of switchings, which can lead to lower switching losses and a nearly equal THD of the source currents. Loss analysis and stress distribution among the switching devices were further conducted under the conditions vs = 730 V, Vdc = 1000 V, and Pin = 10 kW. The losses produced in each switching component by the conventional and the proposed methods are depicted in Figure 7. The proposed method yields reduced losses in all the switching components, including the IGBTs and the clamping diodes, in comparison with the conventional method. Comparing the conduction loss and the switching loss in Figure 7, the conduction losses generated by the two methods are almost the same. On the other hand, the switching losses of the proposed method are lower than those of the conventional method for all the components. The total efficiency of the conventional and the proposed methods was 98% and 98.7%, respectively. Regarding the loss distribution shown in Figure 7, both methods lead to more losses in the inner switches, Sa2 and Sb2, than in the outer switches, which is typical of three-level NPC converters. However, it is seen that the losses produced by the proposed method are less concentrated on the inner switches than those of the conventional method, as shown in Figure 7. 
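The link between the commutation count and the switching-loss reduction reported in Figure 7 can be illustrated with a first-order loss model in which datasheet switching energies are scaled linearly with the commutated voltage and current; this generic approximation is only a sketch and is not the loss model used to produce Figure 7.

```python
def switching_energy(transitions, e_on, e_off, v_ref, i_ref, v_commutation):
    """Rough switching-loss estimate for one device: sum datasheet energies
    (e_on, e_off, given at reference voltage v_ref and current i_ref),
    scaled linearly by the commutation voltage and the device current."""
    energy = 0.0
    for i_device, turned_on in transitions:          # (current at the instant, True if turn-on)
        e_base = e_on if turned_on else e_off
        energy += e_base * (v_commutation / v_ref) * (abs(i_device) / i_ref)
    return energy

# Fewer commutations per sampling period mean fewer entries in `transitions`
# and therefore proportionally lower switching energy per fundamental cycle.
```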
Simulation and Experimental Results In order to demonstrate the proposed method, a single-phase three-level NPC converter with the proposed method was operated at vs = 110 V, Vdc = 150 V, Ts = 50 μs, RL = 100 Ω, Rs = 1 Ω, and Ls = 10 mH. The weighting factor λc = 0.5 in (11) was used for both the conventional and the proposed methods. Figure 8 shows simulation waveforms of the source current (is), source voltage (vs), line-to-line converter input voltage (vab), converter pole voltages, and frequency spectrum of the input current (is) obtained by the conventional and the proposed methods. 
It is seen that the proposed method, operated with only a consideration of the reduced number of commutations, and the conventional method, using all possible switching states, both keep the source voltage and the source current in phase. This yields a unity power factor. It is noted that the source current and the ac line-to-line converter voltage generated by both methods are almost the same. On the other hand, the converter pole voltages of the proposed method are different from those of the conventional method because of the reduced number of switchings. It is seen that the pole voltage of the proposed method has a lower number of commutations than that of the conventional method owing to the reduced switching operations of the proposed method. From the frequency spectrum waveforms, it can be shown that the two methods yield almost the same current THD values. Therefore, the proposed method can reduce the number of switchings and the switching losses without deteriorating the quality of the ac current waveform in comparison with the conventional method. Figure 9 shows simulation waveforms of the upper and the lower capacitor voltages (vc1 and vc2) and switching patterns of the four upper switches (Sa1, Sa2, Sb1, and Sb2) during the steady state as obtained by the conventional and proposed methods. In Figure 9a, obtained by the conventional MPC method using all possible switching states, the two capacitor voltages, with the NP voltage controlled by the redundant switching states, are almost equal, with an unavoidable oscillation within a certain voltage boundary ΔVC. 
Figure 9 shows simulation waveforms of the upper and the lower capacitor voltages (v_c1 and v_c2) and the switching patterns of the four upper switches (S_a1, S_a2, S_b1, and S_b2) during the steady state as obtained by the conventional and the proposed methods. In Figure 9a, obtained by the conventional MPC method using all possible switching states, the two capacitor voltages, with the NP voltage controlled by the redundant switching states, are almost equal, with an unavoidable oscillation within a certain voltage boundary ∆V_C. In the proposed method, shown in Figure 9b, the converter is operated with only the reduced number of commutations under consideration. It is clearly seen from the switching patterns that the proposed method yields a reduced number of switchings compared with the conventional method, which leads to lower switching losses and higher efficiency. In addition, in the proposed method of Figure 9b, the NP voltage balance is well regulated without a continuous increase or decrease in the capacitor voltages, whereas the peak-to-peak ripple voltages of the two capacitors are slightly increased compared with those of the conventional method. The number of switchings and the switching losses of the proposed method were reduced by almost half in comparison with the conventional method.

Figure 10 shows simulation waveforms of the two methods when intentionally generated imbalance conditions of the capacitor voltages occur. Both the conventional and the proposed methods can balance the capacitor voltages, as shown in Figure 10. The proposed method, using a reduced set of possible switching states for a reduced number of commutations, achieves NP voltage balance at almost the same speed as the conventional method.

Figures 11 and 12 show simulation waveforms for step changes of the load resistance and the dc load voltage obtained by the two methods. The proposed method achieves dynamic responses as quickly as the conventional method despite the reduced number of possible switching states used to decrease the switching losses.
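The reduced candidate set used by the proposed method, that is, the switching states reachable with at most one commutation from the present state, can be pre-computed as sketched below. The encoding of the nine operating states and the commutation count are simplified illustrations: the paper's Table 2 counts device-level commutations, which this leg-level proxy only approximates.

from itertools import product

# Nine operating states of the single-phase three-level NPC converter
# (cf. Table 1), encoded as (a-leg, b-leg) levels: 2 = P, 1 = O, 0 = N.
STATES = list(product((2, 1, 0), repeat=2))

def commutations(s_now, s_next):
    # Simplified count: each one-level change of a leg is one commutation;
    # a direct P <-> N jump in one leg counts as two.
    return sum(abs(a - b) for a, b in zip(s_now, s_next))

def candidate_states(s_now, max_comm=1):
    # Pre-exclude states needing more than `max_comm` commutations, keeping
    # only the zero- and one-commutation transitions evaluated each period.
    return [s for s in STATES if commutations(s_now, s) <= max_comm]

# From state (P, O) the candidate set shrinks from nine states to four.
print(candidate_states((2, 1)))  # [(2, 2), (2, 1), (2, 0), (1, 1)]

Restricting the search to this smaller set is what trades a slightly larger NP voltage ripple for the roughly halved switching count reported above.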
The effects of the control parameter λ_c on performance were investigated. Figure 13 shows the average number of switchings, the THD values of the line current, and the peak-to-peak values of the capacitor ripple voltage of the conventional and the proposed methods as a function of the weighting factor λ_c varying from 0.05 to 2. Figure 13 shows that the proposed method results in a much lower average number of switchings and almost the same THD values of the line currents in comparison with the conventional method over the whole range of the weighting factor. Figure 14 depicts simulation results of the ac source current, the frequency spectrum of the source current, and the two capacitor voltages obtained by the conventional and the proposed methods with weighting factors λ_c = 0.05, λ_c = 0.5, and λ_c = 2, respectively. The peak-to-peak value of the two capacitor ripple voltages of the proposed method is slightly increased compared with that of the conventional method, as a trade-off for the reduced number of switchings and the consequently decreased switching losses. The proposed method with the three different weighting factors regulates the sinusoidal input current well and maintains the balance of the two capacitor voltages, even with fewer switching operations than the conventional method.

Performance with larger input resistance and input inductance was also investigated. Figure 15 shows the average number of switchings, the THD values of the line current, and the peak-to-peak values of the capacitor ripple voltage of the conventional and the proposed methods for several values of the input resistance and the input inductance. Figure 15 shows that the proposed method results in a much lower average number of switchings and almost the same THD values of the line currents in comparison with the conventional method for the different input parameters. Figure 16 depicts simulation results of the ac source current, the frequency spectrum of the source current, and the two capacitor voltages obtained by the conventional and the proposed methods with the three different input resistances and input inductances. The peak-to-peak value of the two capacitor ripple voltages of the proposed method is slightly increased compared with that of the conventional method, again as a trade-off for the reduced number of switchings and the decreased switching losses. The proposed method with the three different input parameters regulates the sinusoidal input current well and maintains the balance of the two capacitor voltages, even with fewer switching operations than the conventional method.
The single-phase three-level NPC converter operated with the proposed method was also tested with a nonlinear load, a three-phase voltage source inverter with a fundamental frequency of 80 Hz, as shown in Figure 17. For comparison, the simulation results obtained by the conventional method are also included. Figure 18 shows that the single-phase three-level NPC converter with the proposed method regulates the sinusoidal source current well, in phase with the source voltage and with a low THD value, even with a nonlinear load. In addition, the two capacitor voltages of the proposed method are balanced with the nonlinear load just as with the linear load, as shown in Figure 18.

A prototype of a single-phase three-level NPC converter, shown in Figure 19, was fabricated in the laboratory to verify the proposed method. The conventional and proposed methods were implemented using a DSP board (TMS320F28335). To compare the performance of the two methods, the experiments were conducted under the same conditions as the simulation. Figure 20 shows experimental waveforms of the source voltage/current, the converter input voltage, each pole voltage, and an FFT analysis of the source current for the conventional and the proposed methods during steady-state conditions.
As in the simulation, the proposed method shows almost the same source current and converter input voltage waveforms as the conventional method, and the FFT analysis shows performance very similar to that of the conventional method. On the other hand, as shown in Figure 20, the proposed method has a quite different pole voltage from the conventional method owing to the reduced number of switchings.
Figure 21 shows the upper and lower capacitor voltages, the source current, and the switching states in the steady state. As shown in Figure 21, the proposed method reduces the number of switchings in comparison with the conventional method. In addition, the NP voltage balance of the proposed method is well regulated without a continuous increase or decrease in the capacitor voltages, whereas the peak-to-peak ripple voltages of the two capacitors obtained by the proposed method are slightly increased compared with the conventional method.

In Figure 22, experimental waveforms of the two methods are shown when intentionally generated imbalanced NP voltage conditions of the capacitor voltages occur. Both the conventional and the proposed methods can balance the capacitor voltages, as shown in Figure 22, in agreement with the simulation results of Figure 10. The proposed method, using a reduced set of possible switching states for a reduced number of commutations, achieves NP voltage balance at almost the same speed as the conventional method.

Figures 23 and 24 show experimental waveforms for step changes of the load resistance and the dc load voltage obtained by the two methods. The proposed method achieves dynamic responses as quickly as the conventional method despite the reduced number of possible switching states used to decrease the switching losses.
Conclusions

This paper proposed a highly efficient algorithm with a reduced number of switchings and low switching losses for single-phase three-level NPC converters, based on an MPC method with a decreased number of commutations of the switches. The proposed method pre-excludes, from the candidates for possible future switching states, the switching states that would yield two or more commutations in the next sampling period. As a result, the proposed technique can reduce the number of switchings and the switching losses by utilizing switching states involving no commutation or only one commutation at every sampling instant for single-phase three-level NPC converters. In addition, the developed method does not deteriorate the input current quality or the input power factor despite the reduced switching numbers and losses. Although the proposed method slightly increases the peak-to-peak variations of the two dc capacitor voltages in exchange for the reduced commutations, the increase in voltage variation is not large. Thus, the proposed method obtains high efficiency and low switching losses at the expense of slightly increased peak-to-peak variations of the NP voltage. Simulation and experimental results were presented to verify the effectiveness of the proposed method.
Figure 2. Simulation waveforms of conventional MPC method (a) during period with converter voltage v_ab fixed to V_dc/2 and (b) during period with converter input voltage v_ab fixed to −V_dc/2.

Figure 3. Simulation waveforms of proposed MPC method (a) during period with converter voltage v_ab fixed to V_dc/2 and (b) during period with converter input voltage v_ab fixed to −V_dc/2.

Figure 4. Simulation waveforms of proposed MPC method during period with converter voltage v_ab between V_dc/2 and V_dc.

Figure 5. Number of operating status and capacitor voltage behavior of proposed MPC method during period with converter voltage v_ab between V_dc/2 and V_dc.

Figure 6. Comparison results obtained by conventional MPC method and proposed method vs. sampling frequency: (a) number of switchings; (b) THD values of source currents; (c) current errors; and (d) peak-to-peak values of capacitor ripple voltages.

Figure 7. Loss comparison of (a) conventional method and (b) proposed method.

Figure 8. Simulation results of ac source current (i_s), source voltage (v_s), converter line-to-line voltage (v_ab), pole voltages (v_aN, v_bN), and frequency spectrum of input current (i_s) obtained by (a) conventional and (b) proposed methods.

Figure 9. Simulation results of upper and lower capacitor voltages (v_c1, v_c2) and switching patterns of four upper switches (S_a1, S_a2, S_b1, S_b2) during steady state in (a) conventional MPC method and (b) proposed MPC method.

Figure 10. Simulation results of capacitor voltages (v_c1, v_c2) and source current during imbalanced NP voltage conditions obtained by (a) conventional method and (b) proposed method.
Figure 11. Simulation results of capacitor voltages (v_c1, v_c2) and source current with step change of load resistor from 200 Ω to 100 Ω obtained by (a) conventional method and (b) proposed method.

Figure 12. Simulation results of capacitor voltages (v_c1, v_c2) and source current with step change of dc voltage from 150 V to 120 V obtained by (a) conventional method and (b) proposed method.

Figure 13. Effects of weighting factor λ_c varying from 0.05 to 2 on (a) average number of switchings, (b) THD of line current, and (c) peak-to-peak value of capacitor ripple voltage of the conventional and the proposed methods.

Figure 14. Simulation results of ac source current (i_s), frequency spectrum of input current (i_s), and capacitor voltages (v_c1, v_c2) obtained by the conventional and the proposed methods with weighting factor (a) λ_c = 0.05, (b) λ_c = 0.5, and (c) λ_c = 2.

Figure 15. Effects of input resistance and input inductance on (a) average number of switchings, (b) THD of line current, and (c) peak-to-peak value of capacitor ripple voltage of the conventional and the proposed methods.

Figure 16. Simulation results of ac source current (i_s), frequency spectrum of input current (i_s), and capacitor voltages (v_c1, v_c2) obtained by the conventional and the proposed methods with input parameters (a) R_s = 1 Ω and L_s = 10 mH, (b) R_s = 2 Ω and L_s = 20 mH, and (c) R_s = 3 Ω and L_s = 30 mH.

Figure 17. Schematic with a three-phase voltage source inverter as a nonlinear load.
Figure 18. Simulation results with the three-phase voltage source inverter as a nonlinear load: ac source current (i_s), frequency spectrum of input current (i_s), line-to-line source voltages (v_ab), capacitor voltages (v_c1, v_c2), and three-phase load currents of the voltage source inverter (from top to bottom) obtained by (a) the conventional method and (b) the proposed method.

Figure 19. Photograph of prototype setup for single-phase three-level NPC converter.

Figure 20. Experimental results of ac source current (i_s) and source voltage (v_s), converter input voltage (v_ab), pole voltages (v_aN, v_bN), and FFT analysis of source current (i_s) in (a) conventional and (b) proposed methods.

Figure 21. Experimental results of capacitor voltages (v_c1, v_c2), source current (i_s), and upper switches of a leg (S_a1, S_a2) during steady state in (a) conventional method and (b) proposed method.

Figure 22. Experimental results of capacitor voltages (v_c1, v_c2) and source current during imbalanced NP voltage conditions obtained by (a) conventional method and (b) proposed method.

Figure 23. Experimental results of capacitor voltages (v_c1, v_c2) and source current with step change of load resistor from 200 Ω to 100 Ω obtained by (a) conventional method and (b) proposed method.

Figure 24. Experimental results of capacitor voltages (v_c1, v_c2) and source current with step change of dc voltage from 150 V to 120 V obtained by (a) conventional method and (b) proposed method.

Author Contributions: All authors contributed to this work by collaboration.

Acknowledgments: This research was supported by the National Research Foundation of Korea (NRF) grant funded by the Korean government (MSIP) (2017R1A2B4011444) and by the Human Resources Development program (No. 20174030201810) of the Korea Institute of Energy Technology Evaluation and Planning (KETEP) grant funded by the Korea government Ministry of Trade, Industry and Energy.

Conflicts of Interest: The authors declare no conflict of interest.

Table 1. Nine operating states, phase switching state, upper device switching status, and converter input voltage of single-phase three-level NPC converter.

Table 2. Number of commutations involved in switch transitions from the current operating status to the next possible operating status in conventional MPC method.
2019-04-16T13:29:14.176Z
2018-12-18T00:00:00.000
{ "year": 2018, "sha1": "43fa46e8449053b44872f99bfca89a942f7b3d97", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1996-1073/11/12/3524/pdf?version=1545125506", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "43fa46e8449053b44872f99bfca89a942f7b3d97", "s2fieldsofstudy": [ "Engineering" ], "extfieldsofstudy": [ "Computer Science" ] }
17069944
pes2o/s2orc
v3-fos-license
An unusual origin of the double left testicular artery in a male cadaver: a case report

Introduction: Variations in the number and course of the testicular arteries, often coexisting with variations of the other branches arising from the abdominal aorta, are still reported to be of interest to urology surgeons.

Case presentation: During a routine dissection course, an unusual origin of a double left testicular artery was observed in the cadaver of a 68-year-old Caucasian man who had donated his body to the Institute of Anatomy.

Conclusions: A deeper understanding of the variations of the testicular arteries is necessary for all physicians whose practice is related to the testicles and their vascular stalk.

Introduction

The testicular arteries are known to originate from the ventrolateral aspect of the abdominal aorta and descend obliquely to the pelvic cavity. Variations in the number and course of the testicular arteries, often coexisting with variations of the other branches arising from the abdominal aorta, are reported to be less frequent than the variations of the homologous veins [1,2]. The developmentally based variations of origin mostly involve branching from the renal artery, either the main or an accessory one [3-7]. Although other anatomical peculiarities were found in the same cadaver, such as an early division of the right axillary artery, the presence of the axillary arch on the right side, and an accessory renal artery on the same side as the double testicular artery, our aim was to describe the variation that is, according to the literature, the rarest of those evidenced.

Case presentation

During routine dissection classes, two left testicular arteries were revealed in the cadaver of a 68-year-old Caucasian man who had donated his body through the body donation program: one was medial, originating from an accessory renal artery; the other was lateral, debranching from a common trunk together with the left inferior suprarenal artery. The medial artery was smaller in caliber (0.9 mm) and followed the course of the regular testicular vein. The lateral left testicular artery was slightly larger (1.2 mm) and pierced the tissue of the suprarenal gland immediately after debranching. The lateral artery ran in front of the left renal vein, arched laterally afterwards, ventrally crossed the inferior pole of the left kidney, and followed the lateral border of the psoas major muscle. After a short pathway on the anterior surface of the psoas major, taking a medial course, it joined the medial artery at the entrance of the funiculus spermaticus. The lateral artery was accompanied by two veins, tributaries of the left suprarenal vein. The right testicular artery had the origin and pathway described in most anatomy textbooks (Figure 1).

No sign of any urological disorder was noted in the man's medical chart, and no fertility problem appeared in his personal medical record. This was not the only anatomical variation found: a division of the axillary artery into the superficial and deep brachial arteries was present in the right axilla, and on the left side an axillary arch was revealed.

Discussion

We present a case of a double left testicular artery in which one branch originated from the left accessory renal artery and the other branch arose from a common trunk with the left inferior suprarenal artery. The special feature of this case is that the testicular artery on the left side is double, and neither of the two arterial vessels arose directly from the abdominal aorta.
Cases have nevertheless been reported in which a double left testicular artery originated from a common trunk with the superior adrenal artery and from the inferior mesenteric artery [8]. The prevalence of variations of the gonadal arteries was reported as 16 out of 180 specimens obtained from human fetuses, and variations were more frequent in males than in females [9]. The variations of the testicular artery can be divided into two groups: (a) variations in the branching level relative to the vertebral column or to the origin of the renal arteries (above or below the level at which the renal arteries branch) [6]; and (b) variations in number, origin, and course, which appear to be more frequent on the right side [9,10]. A common origin with the inferior suprarenal artery has also been reported [1,8]. Testicular arteries have been documented to arise from lumbar, renal, accessory renal, middle and, occasionally, superior suprarenal arteries [4,7,11]. In a recent case presentation, Paraskevas et al. [12] (2011) described a high origin (above the expected level of branching) of the left testicular artery from a common trunk with the inferior phrenic artery. Bilateral branching of the ovarian arteries from accessory renal arteries has also been revealed [13].

Most of the variations, including those described in this paper, have their roots in the embryology of the testes and the contemporaneous blood supply of each phase of development: the mesonephros itself was described as being irrigated by nine mesonephric arteries, in a superior or cranial, a middle, and an inferior or caudal group (three in each group). The caudal group gives rise to the testicular arteries, whereas the middle group forms the renal arterial vessels [3]. In our case, both vessels on the left side seemed to originate from the middle group.

Double testicular arteries were observed in two out of 32 examined specimens of human fetuses [14], and both of the testes had an abdominal localization. Whether the arteries supplying undescended testes show variations in number and origin remains unclear, although a different anastomotic network is present in undescended testicles [15]. A deeper understanding of these variations and their spatial relationships to adjacent vessels is especially significant for avoiding sometimes serious complications in clinical operations and other procedures, such as the Fowler-Stephens technique [16], and for recognizing the causes of genital disorders, such as cryptorchidism, and treating them adequately in order to preserve the functionality of the undescended testis.

Conclusions

Although many kinds of variations of the origin, direction, and pattern of the testicular arteries have been described, this is a rare case of a double testicular artery on the left side. Neither of the two branches originated from the abdominal aorta; they arose from the left inferior suprarenal artery and from the left renal artery. Awareness of such variation is important for all surgeons whose interest is related to the testicular blood vessels or, generally, the blood vessels of the retroperitoneum.

Consent

Written informed consent was obtained from the patient's next of kin for publication of this manuscript and accompanying images. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
2017-06-22T20:16:11.930Z
2012-08-31T00:00:00.000
{ "year": 2012, "sha1": "fc9c23d30eebc811ecaca355c9a90aaa76c1abd1", "oa_license": "CCBY", "oa_url": "https://jmedicalcasereports.biomedcentral.com/track/pdf/10.1186/1752-1947-6-267", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "57e622477d7b520bc45883f55349ed03551211c6", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261784611
pes2o/s2orc
v3-fos-license
Increasing knowledge in the health protocol of COVID-19 prevention with health education in boarding schools

ABSTRACT

INTRODUCTION

Coronavirus disease (COVID-19) is an infectious disease caused by the SARS-CoV-2 virus. COVID-19 can spread like other viruses in general, through droplets of a patient's saliva produced when coughing, sneezing, or talking, or by touching the eyes, nose, or mouth after handling items on which an infected person's saliva has landed. 1

The number of COVID-19 cases globally was 109,594,835 on February 18, 2021. The number of deaths was 2,424,060, and around 223 countries or regions were affected by COVID-19. Meanwhile, according to COVID-19 data from the Indonesian Ministry of Health (2021), on February 18, 2021, the number of cases in Indonesia was 1,252,685 positive for COVID-19, with 1,058,222 recovered and 33,969 dead. 2

Standard recommendations to prevent the spread of infection are wearing masks, washing hands with soap and running water or hand sanitizer, and maintaining a minimum distance of 2 meters. Prevention can be done by washing hands regularly, before and after eating, and applying coughing and sneezing etiquette. Close contact with anyone showing symptoms of a respiratory illness, such as coughing and sneezing, should be avoided. The other protocol is to maintain a minimum distance of 2 meters between each patient and other patients, with health workers using personal protective equipment rationally and consistently; hand hygiene will help reduce the spread of infection. 3

The impact of the pandemic has caused activities that involve groups of people, among them going to school, working, and worshiping, to be limited. The government has appealed to people to work, study, and worship from home to reduce the number of patients exposed to COVID-19. The Ministry of Education and Culture issued Circular Letter Number 3 of 2020 on Education Units and Number 36962/MPK.A/HK/2020 concerning the Implementation of Education in the Coronavirus Disease (COVID-19) Emergency Period, so learning activities were carried out online in the context of preventing the spread of COVID-19.

After the new normal was established by the Minister of Education and Culture, face-to-face teaching and learning is permitted for educational institutions in the green zone and only for the upper secondary and junior secondary levels. Schools' health protocols are rules to prevent the spread of the COVID-19 disease caused by the Coronavirus in educational institutions. Face-to-face learning is carried out through two phases: the transition period and the new normal. The transition period lasts for two months from the start of face-to-face learning in education units. 4

Guidelines for implementing learning in the current and the new academic year during the COVID-19 period state that every school that has opened the learning process must prepare handwashing facilities with running water or hand sanitizer and disinfectant. While at school, students and teachers are required to wear masks. Everyone entering the school will also have their temperature checked using a thermal gun. There is a time difference for teaching and learning activities, distancing in class during the transition period, and primary and secondary education must maintain a minimum distance of 2 meters. 4

According to the Joint Decree report, schools can open face-to-face simultaneously or gradually. The decision allows local governments to open schools with due observance of the health protocol. Each school that implements face-to-face learning has to meet several points: the school has to limit the number of students to a maximum of 50% during learning; learning schedules are differentiated by taking turns or by shifts; students' medical conditions are considered (absence of comorbid diseases in students or of COVID-19 symptoms in their families); everyone is required to use masks during teaching and learning activities, to always wash hands with soap and running water or hand sanitizer, to maintain distance, and to avoid crowds; and consent from parents or guardians regarding face-to-face schooling must be obtained. 4,5

Schools in Indonesia include an education system that requires students to live in dormitories; such boarding schools are typically schools under the auspices of religious foundations. Since the pandemic has lasted almost a year, several boarding schools in Indonesia have conducted face-to-face learning. Observations show that the boarding school studied here divides its education into three sessions: general science in the morning, religious knowledge during the day, and the study of special religious books in the evening. The teaching in the afternoon and evening is delivered by teachers from within the school, while in the morning lessons are delivered by teachers from outside the school, which brings exposure from outside and a risk of transmitting the Coronavirus.

The results of observations made by the researchers at one of the boarding schools show that when students leave the boarding school, they still do not use masks, for example, when visiting outside parties such as family. When interviewed by the researcher, 8 out of 10 students did not regularly wash their hands with soap and running water because they did not understand this. One solution that can be given is to provide health education to students regarding health protocols during the COVID-19 pandemic. Health education for boarding school students has previously been investigated for changes in dermatitis knowledge: average knowledge of preventing dermatitis was 8.37 before health education and 11.18 afterwards, showing an effect of health education on students' knowledge about the prevention of dermatitis. Based on the problems that can cause COVID-19 transmission, the researchers felt the need to conduct education to increase knowledge of the COVID-19 prevention health protocol.
METHOD

The methodology of this research is action research. This study aims to determine changes in knowledge of COVID-19 prevention health protocols. The first stage in action research is situational analysis; at this stage, the researchers first visited the boarding school to see how the health protocols already in place were being implemented. The second stage is action planning, in which the researchers prepared an action plan for the problems found in the previous stage; at this stage, an activity plan was made to increase knowledge about the COVID-19 protocol. The third stage is action taking. The researchers conducted two meetings: the first provided education related to health protocols, preceded by a measurement of prior knowledge; the second consisted of demonstrations of the health protocols. The fourth stage is evaluation, carried out by measuring knowledge after the activities.

The sampling technique used was simple random sampling. The sample used in this study was 100 people. All respondents are male, because the boarding school only allowed males to be involved owing to its stricter rules for girls. The instrument in this study was a questionnaire designed by the researchers. The questionnaire had been tested for validity and reliability in other boarding schools with the same characteristics: school accreditation, average number of students, geographical location, and available facilities. The validity results for the three types of questionnaires were declared valid with a significance value of <0.05, and all three questionnaires were reliable.

The research process began by explaining the research information to the target sample; once the research had been made clear, the researchers provided an informed consent form that each respondent had to sign. Questionnaires were given to the respondents who signed the informed consent, and they were asked to complete them. The researchers ensured that all respondents had filled out all questionnaires and continued by providing education regarding health protocols as a direct benefit the respondents could experience. This research was declared to have passed the ethics review, with approval registration number 5968/UN22.9/PG/2021.

Education was provided by researchers who are also nurses. The health education program comprised two meetings. The first meeting provided educational information on three major themes, namely using masks, washing hands, and maintaining distance. The second meeting consisted of demonstrations of how to use masks correctly, cough etiquette, handwashing with soap, and simulations of keeping a distance during activities in the boarding school.

RESULT

The first result of this study is the characteristics of the research respondents: age, gender, and grade.
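The simple random sampling step described above can be illustrated with a short sketch. The roster size of 400 is a hypothetical assumption, since the paper reports only the sample of 100, not the total school population; the seed is fixed purely for the illustration.

import random

random.seed(1)  # reproducibility for this illustration only

# Hypothetical roster: the study drew 100 students from the boarding
# school by simple random sampling; the population size is assumed here.
roster = [f"student_{i:03d}" for i in range(1, 401)]
sample = random.sample(roster, k=100)  # each student equally likely
print(len(sample), sample[:3])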
Based on the analysis of the results in Table 1, all respondents are male; this is because the boarding school only allowed males to be involved, owing to its stricter rules for girls. The largest group of students was aged 16 years (45%), while the smallest was aged 13 years (6%). Various grades are involved in this activity; the largest share is in grade 7 (35%) and the smallest in grade 8 (32%).

Before the education in this activity was given, a questionnaire was administered to measure the level of knowledge. The results for the level of student knowledge related to health protocols before education are as follows (Table 2). Before education, the level of knowledge had an average value of 54.25, with the most frequent score being 50, obtained by 40 people. The lowest score before education was 25 and the highest 75.

After education, the level of knowledge had an average value of 87, an increase over the pre-education level. The most frequent score was 87, obtained by 51 people. The lowest score after education was 62.50 and the highest 100 (Table 3).

The knowledge levels before and after education were then compared with a paired T-test; the results are as follows. Based on the table above, the paired T-test at 95% confidence obtained a significance value of 0.000, where p < 0.05, which means that there is an effect of the provision of COVID-19 prevention health protocol education in boarding schools on the change in knowledge (Table 4). The implementation of the activities is illustrated in Figures 1 and 2.
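A paired t-test of this kind can be reproduced with standard statistical tooling. The sketch below uses SciPy on synthetic pre/post scores, since the study's raw per-student data are not published here; the distributions are made up to roughly match the reported means, so the printed numbers are illustrative only.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical paired scores for 100 students; only the means (54.25 pre,
# 87 post) are reported in the paper, so these arrays are synthetic.
pre = np.clip(rng.normal(54.25, 12, 100), 25, 75)
post = np.clip(pre + rng.normal(32.75, 8, 100), 62.5, 100)

t_stat, p_value = stats.ttest_rel(pre, post)  # paired (related) t-test
print(f"t = {t_stat:.2f}, p = {p_value:.3g}")  # p < 0.05 -> significant change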
DISCUSSION

This study involved 100 students attending a boarding school. All respondents were male students. This largely male population may be why the application of health protocols by men is more visible here than that by women. Research by Riyadi & Larasaty (2020) states that compliance with health protocols is on average higher among young male respondents than among female and older respondents.

The respondents were aged between 13 and 16 years. Research indicates that the high level of adolescent knowledge about COVID-19 is not accompanied by adolescent compliance in implementing the COVID-19 health protocol. 6-8 Several factors, including knowledge, motivation, and support from the surrounding environment, can cause this noncompliance. In addition, considering that adolescents experience relatively rapid physical, mental, and cognitive development, family support is needed for readiness at an early age. Early adolescence covers ages 13 to 17 years, and the period after that until 18 years is late adolescence. At this stage of adolescent development, there are changes in the soul, mind, and emotions; adolescents can integrate with adult society, develop self-awareness, and evaluate their obsessions and ideals. 9

Respondents in this study consisted of students from grade 7 to grade 9 of junior high school, the majority in grade 7. Grade reflects a person's level of education, and the level of education itself can affect a person's knowledge: junior high school students have higher knowledge than elementary school children, and knowledge rises with grade. The average level of COVID-19 knowledge among adolescents is high, and knowledge classified as high will affect a person's actions in complying with existing rules. 4

The results of this study indicate that the level of knowledge before education had an average value of only 54.25, with the most frequent score being 50 (40 people); the lowest score before education was 25 and the highest 75. The research instrument used by the researchers covers knowledge of using masks, washing hands, and keeping a distance, so the level of knowledge can be described for each of these components.

Knowledge of the health protocols is an essential prerequisite for students to apply them. Many students had low knowledge scores on parts of the health protocol, such as masks. Most students still do not understand how to use a mask correctly: many place the mask over the mouth only, touch the table with it, and put it back on. According to research on factors that can lead to noncompliance with the health protocols (using masks, washing hands, and keeping a distance), one reason, in the case of masks, is that some people do not understand the health protocol issues of the COVID-19 pandemic, such as the dangers that can arise from transmission and the benefits of prevention. 10 Research results, in contrast, show that the spread of COVID-19 infection can be prevented by wearing a medical mask. 11 The lack of complete information related to COVID-19 is one of the obstacles to implementing health protocols.

Masks can protect against the spread of COVID-19 infection; there are two types, medical masks and respirator masks. 12 Cloth masks are the masks most widely used by students, so the things that need to be considered are hand hygiene, placing the mask very carefully, not touching parts of the mask other than the straps, removing the mask properly, and cleaning it with soap or antiseptic every time before it is used again. 13

Knowledge is the reason for implementing a behavior; lack of knowledge about the spread of the COVID-19 virus causes a lack of public awareness in using masks. 14 Good implementation of student health protocols is supported by the discipline maintained in boarding schools, which required students to use masks during teaching and learning throughout the COVID-19 pandemic.

The school at the research location provides masks for students who do not have them, so students who lack or lose their masks during teaching and learning can take advantage of the mask facilities provided. According to research, health protocol implementation is also carried out quite well, such as wearing masks according to government recommendations. 15 Instinct has a role in prompting humans to carry out actions or activities. 16 People think that using masks can protect against viruses, so wearing a mask well, covering the nose and mouth, is very important.

The masks used to implement health protocols during teaching and learning in the COVID-19 pandemic vary: some students use cloth masks and some use disposable masks. Cloth masks made with one layer of polyester material and four layers of filters can block the entry of the COVID-19 virus by up to 95%. 13

According to research, the higher the level of education, the better the understanding of controlling and preventing the transmission of COVID-19, especially the 3M measures; thus, the higher a person's education, the more health protocols will be implemented. 17

Washing hands with soap and running water before and after eating, and washing hands regularly, can prevent the spread of COVID-19. 18 Based on the results of this study, students' knowledge regarding handwashing was also low before education. Washing hands with soap and running water keeps hands free of dirt that sticks from the fingertips to the wrists and arms. 19 Soap has the benefit of breaking down hydrophobic compounds such as oil or fat, while 62-71% ethanol can reduce viral activity. 20 Therefore, washing hands with soap and running water is highly recommended during this COVID-19 pandemic.

The COVID-19 pandemic is a new situation for the community, so lifestyle habits in preventing the spread of COVID-19 will affect a person's lifestyle. 16 Besides the habits and awareness of each student, handwashing facilities are an obstacle, because the school has only 2-3 handwashing stations. The lack of adequate facilities is a factor in implementing the behavior of washing hands with soap and running water at the research site.

Health education on the health protocols for preventing and controlling COVID-19 can increase knowledge and understanding of the application of health protocols. 17 The role of schools is very much needed in relation to the application of handwashing during teaching and learning in the pandemic, owing to the need for students' education about the importance of washing hands with soap and running water during the COVID-19 pandemic. Washing hands with soap and running water can make the hands clean and eliminate bacteria that infect the body, because soap contains special ingredients such as emollients, triclocarban, triclosan, alcohol, and others. 19 According to research, handwashing with soap and running water is essential. 21 The hands are among the body parts that most often touch objects contaminated with dirt that can contain bacteria or viruses, so the hands can be an intermediary for these microbes to enter the body and attack the immune system.

Maintaining a minimum distance of 2 meters is one of the health protocols for preventing the spread of COVID-19 transmission. Keeping a distance is a form of anticipation, so that people cannot easily contact others at risk of carrying the COVID-19 virus. Maintaining a minimum distance of 2 meters proved quite difficult to apply at the research location; ingrained habits make it challenging.

The COVID-19 distancing protocol is significant especially in the classroom, a gathering place for education. The fairly good application of student health protocols during teaching and learning in the COVID-19 pandemic can be attributed to the knowledge factor, which makes students behave well in implementing health protocols. Research states that good knowledge affects people's attitudes regarding COVID-19 prevention, placing them in a good category as well. 22

The health protocols of wearing masks, washing hands with soap and running water, and maintaining a minimum distance of 2 meters are among the efforts to prevent the spread of COVID-19. These efforts require the community to be highly disciplined and must be applied consistently at all times. Community compliance can be driven by several factors, namely caring attitudes and public awareness. This was revealed in a study stating that the factors that can affect community compliance in implementing COVID-19 health protocol behavior are perceptions of susceptibility, severity, benefits, and barriers, cues to action, and self-efficacy in implementing health protocols to prevent the spread of COVID-19. 23

After obtaining an overview of knowledge before education, education was given to all students involved in this research, using video media and leaflets. Health workers influence COVID-19 prevention behavior, so providing information through media such as leaflets, posters, and others can affect COVID-19 prevention behavior. 24
After the education was carried out, the level of knowledge rose to an average value of 87, an increase over the pre-education level; the most frequent score was 87 (51 people), the lowest 62.50, and the highest 100. This increase can be attributed to the education provided; this is in line with research stating that health counseling activities to prevent COVID-19 can increase children's knowledge and understanding of the mode of transmission, the danger or impact of COVID-19, and how to avoid it. 25

Educational exposure to handwashing is significant. Research provides education for early childhood, such as playgroup, kindergarten, or elementary school. 18 Children are given examples and taught to wash their hands with soap and running water, or to use hand sanitizer, correctly and at the right time.

The results before and after education were statistically tested using a paired T-test at 95% confidence, giving a significance value of 0.000, where p < 0.05; this means there is an effect of the education given on implementing the COVID-19 prevention health protocol on knowledge in boarding schools.

The results of this study are in line with other studies showing that changes in knowledge can occur as a result of the education provided. Through health education, the necessary information reaches the client and increases his knowledge. One's knowledge can influence the mindset in a positive direction, growing healthy behavior or habits. 24

The effectiveness of education in school settings in this study is also in line with research conducted on school-age children on the prevention of sexual violence, where the results showed an increase in children's knowledge regarding the prevention of sexual violence. 26 Other studies also explain that providing educational programs can change children's knowledge, attitudes, and practices at school. 27,28

CONCLUSION

The knowledge of students in boarding schools about the implementation of the COVID-19 prevention health protocol in the school environment was lower before the health education was given than after. There was an effect of education on the increase in knowledge between before and after education on implementing the COVID-19 prevention health protocol. Future research is expected to involve female students so that the effect of education can also be assessed across genders.

ACKNOWLEDGMENT

The researchers want to thank all participants, especially the head of the school, who permitted the intervention in their school.

Figure 2. Implementation of research activities.
Several factors, including knowledge, motivation, and support from the surrounding environment, can cause this noncompliance. In addition, considering that adolescents experience relatively rapid physical, mental and cognitive development, family support is needed for readiness at an early age. Early adolescence covers children aged 13 to 17 years, and the period after that until 18 years is late adolescence. At this stage of adolescent development, there are changes in the psyche, mind, and emotions. Adolescents can integrate with adult society at this stage, develop self-awareness, and evaluate their obsessions and ideals.9 Respondents in this study consisted of grade 7 to grade 9 junior high school students, the majority of them in grade 7. Grade level reflects a person's level of education, and the level of education itself can affect a person's knowledge. Children at the junior high school level have higher knowledge than children at the elementary school level, and knowledge increases with grade. The average level of COVID-19 knowledge among adolescents is high, and knowledge classified as high will affect a person's actions in complying with existing rules.4 The results of this study indicate that the level of knowledge before education had an average value of only 54.25, with the most frequent score being 50, obtained by 40 people; the lowest score before education was 25.

The Ministry of Education and Culture issued Circular Letter Number 3 of 2020 on Education Units and Number 36962/MPK.A/HK/2020 concerning the Implementation of Education in the Coronavirus Disease (COVID-19) Emergency Period, so learning activities are carried out online in the context of preventing the spread of COVID-19.
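As a concrete illustration of the analysis described above (the paired t-test comparing knowledge scores before and after the education session), a minimal sketch is given below. This is not the authors' code, and the score arrays are hypothetical placeholders rather than the study data.

```python
# Minimal sketch of the pre/post comparison described above: a paired t-test on
# knowledge scores measured before and after the education session.
# The score arrays are hypothetical placeholders, not the study data.
import numpy as np
from scipy import stats

pre_scores = np.array([50.0, 37.5, 62.5, 50.0, 75.0, 50.0, 62.5, 50.0])     # hypothetical
post_scores = np.array([87.5, 75.0, 100.0, 87.5, 100.0, 87.5, 87.5, 75.0])  # hypothetical

t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

print(f"mean before: {pre_scores.mean():.2f}, mean after: {post_scores.mean():.2f}")
print(f"paired t = {t_stat:.3f}, p = {p_value:.4f}")
# At the 95% confidence level, p < 0.05 would indicate a significant change in knowledge.
```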
Effect of maturity stage on the chemical composition of argan fruit pulp – The argan tree, a species endemic to Southern Morocco, is well known for its kernel oil used in cosmetics and health food, but the corresponding pulp attracted less interest from researchers and little is known about its chemical composition and its evolution during maturation. The pulp of argan fruits harvested monthly during the ripening period, based on fruit color (April to July), was analyzed. With progressing ripeness, various changes were observed in the chemical composition, such as (i) a four-fold increase of total soluble sugars content (glucose, fructose and saccharose) and of Fe (75–165 ppm), but also (ii) a drop of many components, such as proteins (10.1–6.4%) and cell wall polymers, lignin (14.9–5.9%), hemicellulose and cellulose. Hexane-soluble compounds, found in substantial amount (10.7% in April), also decreased with time: the pulp oil peak (8.3%) was in April and June, and that of polyisoprene in June (3.6%). Therefore the stage of maturity (harvest date) is to be considered, without affecting the quality of the argan oil.

Introduction
The argan forest was declared a Biosphere Reserve by UNESCO in 1998 due to its peculiar situation, endemic to the barren lands of Southwest Morocco where it covers an area of 800 000 ha (Morton and Voss, 1987). The argan tree (Argania spinosa (L.) Skeels; Sapotaceae) is a slow-growing species, growing as a shrub or up to ten meters high. Its deep root system helps stabilize the ground against rain- or wind-induced erosion, while its shadow maintains soil fertility by preserving moisture and favoring the growth of different kinds of domestic crops (Charrouf et al., 2002). Leaves are also used as hanging forage for cattle that frequently graze freely in the forest, and argan trees bear fruits that are collected by women once ripe and fallen on the ground, between June and September. The argan fruit has the size of a big olive and can be round, ovoid, conical or spindle-shaped depending on genotypic factors (Gharby et al., 2013). Just like the olive, the fruit is mainly made of two parts, the pulp (mesocarp), which is covered by a skin (epicarp), and the kernel (endocarp) (Fig. 1). Argan oil is produced from the kernel in edible and cosmetic grades (Zaanoun et al., 2014). Recently, the production process of argan oil has been modified towards a semi-industrialized method (mechanical cold pressing), applied in newly developed cooperatives to produce and commercialize virgin argan oil of certified quality (Cayuela et al., 2008; Marfil et al., 2008; Matthäus et al., 2010). Since this edible oil is not refined, fruit quality and processing directly impact its quality (Cayuela et al., 2008; Marfil et al., 2008; Matthäus et al., 2010). Non-roasted kernels result in argan oil for cosmetics, whereas slightly and carefully roasted kernels deliver the highly prized edible argan oil. It is copper-colored, with a slight hazelnut taste, it is the basic ingredient of the Amazigh diet, and its potential health benefits are numerous. Both oils are marketed after simple decantation and filtration; they are not refined and therefore they are classified as virgin oils, just like virgin olive oil. Argan oil composition is well documented (Cayuela et al., 2008; Marfil et al., 2008; Matthäus et al., 2010; Gharby et al., 2011, 2014; Zaanoun et al., 2014). It is characterized by high levels of linoleic and oleic acids, and it is rich in polyphenols and tocopherols.
In addition, the presence of minor compounds such as sterols, carotenoids, xanthophylls, and squalene contributes to its nutritional value and health properties, and to its dietetic and organoleptic characteristics (Cabrera-Vique et al., 2012). Previous results from our team show that the composition of argan (kernel) oil prepared from fruits collected from the same location does not vary significantly over the three-month April to July period. But a detailed investigation clearly showed that oil extracted from fruits collected in July (fully ripe fruits) differs at the level of minor lipid components. As the pulp of the argan fruit is palatable to cattle and provides a cheap feed for goats and other farm animals, its agricultural impact is of importance. Chemical analysis of argan fruit pulp has already shown the presence of lipids (Charrouf et al., 1991; Charrouf and Guillaume, 1999), polyisoprene (Pioch et al., 2011), saponins (Charrouf and Guillaume, 1999), and phenolics (Chernane et al., 2000; Charrouf et al., 2007). Pioch et al. (2011) started to set up a processing chain aiming at valorizing the pulp, and they pointed out the influence of water content on the mechanical depulping efficiency of freshly harvested whole fruits. Whereas these fruits are currently harvested after remaining on the ground for several weeks, it would make sense to pick them at an optimum date to ensure proper conservation and processing, and optimum organoleptic and commercial quality (Sowmya et al., 2012). This might not be relevant in the case of fruits used for kernel oil extraction, but it looks wise when aiming at valorizing the pulp beyond its current use as cattle feed. Indeed, overexploitation for pasture and domestic cooking, land clearance for agriculture and climate change have resulted in a decrease in tree density and whole forest area (Alifriqui, 2004; Zugmeyer, 2006), but recent plans to replant this endangered species call for an improved use and valorization of all derived products. Therefore the aim of the present work was to look for encouraging results to valorize the pulp without affecting the quality of the argan oil, knowing that, as currently used, the pulp is dried for at least 3 weeks, during which it loses quality and its chemical composition deteriorates, and to investigate the chemical changes occurring in argan pulp as a function of ripening stage during the last months of the one-year-long maturation. Fruits were collected at different stages of maturity and the pulp was analyzed.

Fruit collection
Round argan fruits (20 kg) were collected from four selected trees in the Ait Melloul area (12 km south of Agadir, Souss-Massa region, Morocco). Harvest dates were April, May, June, and July 2007. Care was taken to collect a representative sample, taking into account the variability of maturity within each tree (based on fruit color). Three hours after collection, fruits were peeled manually (we faced some problems in peeling even when the fruit was dried), weighed, and processed for analysis.

Moisture content in pulp
Pulp moisture content was determined by adapting the Association of Official and Analytical Chemists (AOAC) method 934.06 to 5 g of argan pulp and using a Jouan Quality Systems oven (EU 115 Classe O, France) (Boland, 1998).

Lipid and polyisoprene content in pulp
Determination of oil content in argan pulp was performed following the DIN EN ISO 659:2009 method (ISO 659, 2009). Twenty grams of pulp were placed in a Soxhlet apparatus and extracted with hexane for 8 h.
The organic phase was then concentrated under vacuum and dried for 5 min in an oven at 105°C. The hexane extract was dissolved in 50 mL of hexane and 20 mL of absolute alcohol was added to coagulate the polyisoprene; the latter was washed with hexane and alcohol 3 times, then isolated, dried at 50°C for 4 h and weighed. The content of polyisoprene was computed as the percent ratio of polyisoprene weight against the dry weight of the starting pulp sample. Samples were stored at 4°C under nitrogen until analysis.

Acidity of lipid fraction
Oil acidity was determined by titration of a solution of oil in ethanol with ethanolic KOH and is expressed as percent of oleic acid (ISO 660, 2009).

Fatty acid composition of lipids
Fatty acid composition was determined using method ISO 5508 (1990). In brief, fatty acids (FAs) in the above lipid extract were converted to fatty acid methyl esters (FAMEs) before analysis by shaking a solution of 60 mg of oil in 3 mL of hexane with 0.3 mL of 2 N methanolic potassium hydroxide. FAMEs were analyzed by gas chromatography (Varian CP-3800, Varian Inc., Middelburg, The Netherlands) equipped with a flame ionization detector (FID). The column used was a CP-Wax 52CB (30 × 0.25 mm i.d.; Varian Inc.). The carrier gas was helium, and the total gas flow rate was 1 mL/min. The initial column temperature was 170°C, the final temperature 230°C, and the gradient was 4°C/min. Injector and detector temperatures were 230°C. Data were processed using Varian Star Workstation v 6.30 (Varian Inc., Walnut Creek, CA, USA). The results were expressed as the relative percentage of the area of each individual FA peak.

Lignocellulosic components in pulp
Pulp samples were analyzed for content of neutral detergent fiber (NDF), acid detergent fiber (ADF) and acid detergent lignin (ADL). These parameters were determined according to the methods of Van Soest et al. (1991) using an ANKOM 220 Fiber Analyzer (ANKOM Technology Corporation, NY, USA). Hemicellulose was calculated as NDF − ADF and cellulose as ADF − ADL (Rinne et al., 1997).

Sugars in pulp
A weighed amount (1 g) of argan fruit pulp (not dried) was suspended in water (100 mL), sonicated for 10 min, and the mixture was centrifuged; the aqueous extract was then filtered and diluted 1:100 with water prior to injection. Sugar concentrations were determined using high-performance anion exchange chromatography (HPAEC), on a CarboPac PA-1 analytical column (4 × 250 mm). Detection was performed with a pulsed amperometric ED50 detector (Dionex Corp., Sunnyvale, CA, USA). A volume of 25 mL was injected. The concentration of each carbohydrate was based on peak area (Chromeleon management system; Dionex), using calibration curves obtained with the corresponding sugar standards (Sigma, Saint Louis, USA). The elution was achieved isocratically with an 18 mM NaOH solution at a flow rate of 1 mL/min (Raessler et al., 2010).

Other components
Crude protein (CP) content was determined by the Kjeldahl method (calculated as N × 6.25), and ash content after combustion at 550°C using the AOAC 1990 protocol. Sodium, calcium, copper, iron, phosphorus, magnesium, manganese, potassium and zinc were analyzed by atomic absorption spectrophotometry. One gram of pulp was weighed and dried at 105°C for 24 h in a porcelain cup, before charring it in a muffle oven at 550°C for 4 h. After cooling, 5 mL of a hydrochloric acid solution at 20% (v/v) were added. Then the mixture was boiled and the content was filtered into a 100 mL flask made up to volume with deionized water.
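The derived quantities defined in this methods section reduce to simple arithmetic. The sketch below illustrates them; it is not the authors' code, the function names are ours, and the input values are hypothetical placeholders rather than measured data.

```python
# Illustrative sketch of the derived quantities defined above.
# Inputs are hypothetical placeholders: fiber fractions in % of pulp dry weight,
# nitrogen in % N, and polyisoprene/pulp masses in grams.

def fiber_fractions(ndf_pct: float, adf_pct: float, adl_pct: float) -> dict:
    """Van Soest partition: hemicellulose = NDF - ADF, cellulose = ADF - ADL, lignin = ADL."""
    return {
        "hemicellulose": ndf_pct - adf_pct,
        "cellulose": adf_pct - adl_pct,
        "lignin": adl_pct,
    }

def crude_protein(nitrogen_pct: float) -> float:
    """Kjeldahl crude protein, CP = N x 6.25."""
    return nitrogen_pct * 6.25

def polyisoprene_content(polyisoprene_g: float, pulp_dry_g: float) -> float:
    """Polyisoprene expressed as percent of the dry weight of the starting pulp sample."""
    return 100.0 * polyisoprene_g / pulp_dry_g

# Example run with hypothetical values:
print(fiber_fractions(ndf_pct=31.0, adf_pct=22.0, adl_pct=15.0))
print(f"CP = {crude_protein(1.6):.1f}% dry weight")
print(f"polyisoprene = {polyisoprene_content(0.05, 2.0):.1f}% of dry pulp")
```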
Mineral elements were determined by atomic absorption spectrometry using a 5000 Perkin Elmer (Überlingen, Germany) with graphite furnace system HGA600 using palladium nitrate as matrix modifier. Results and discussion To our knowledge, argan trees were not really selected for fruit production, although some remarkable trees are known . Therefore trees were selected according to fruit shape, as previously described by Gharby et al. (2013). Despite the complexity of argan fruit flowering and maturation cycles, as a broad line, fruit harvested in April, May, June, and July are generally considered by people native of the argan forest as unripe, intermediate, nearly ripe, and fully ripe, respectively. The fruits were collected in a plateau region, near Ait Melloul city, in Morocco, at beginning of every month during the ripening season as described in the previous section and according to Gharby et al. (2013). Fruits were peeled three hours after collection for separating pulp and kernel manually, and the pulp was stored at 4°C until analysis for avoiding degradation (fruits infested by Ceratitis capitata worm were discarded). A portion of each collected sample was analyzed for moisture content. Chemical analysis of the pulp was performed with native or dried pulp upon the case as detailed in the preceding section, to determine the main components. Results are expressed in % of pulp dry weight unless otherwise specified. Water-insoluble components were extracted with hexane, namely neutral lipids (vegetable oil) and polyisoprene, both known as compounds extractable from argan pulp (Pioch et al., 2011); then the polymer was coagulated by adding ethanol to allow separating the lipids, and the corresponding acidity and fatty acid composition were determined. Watersoluble sugars, proteins, cell-wall lignocellulosic components (cellulose, hemicellulose, lignin) and mineral components (ash content and some mineral elements) were also analyzed. Evolution of moisture, lipid and polyisoprene during ripening The moisture content did not vary significantly (p < 0.05) as visible in Table 1, except between June and July, from an average 81% down to 77% at end of survey period. This is the mark of ripening, a reduction in moisture content being generally observed for example for olives (Gracia and León, 2011), almonds (Hawker and Buttrose, 1980) or avocados (Kruger et al., 2000). Accordingly, in the mean-time harvested argan fruits showed a significant change of external color from green to yellow. Considering the hexane-soluble fraction, the content of oil and polyisoprene in pulp of unripe fruits (green colored peel) was respectively 8.0 and 2.6% (Tab. 1, April; % of pulp dry weight). Then it decreased to 6.7 and 2.0% during the next month (May), but reached its highest value in June (8.3 and 3.6%) when fruit are considered almost ripe, before falling to the lowest level at end of survey (July; oil 5.7%; polyisoprene 2.7%). The difference between these monthly values being statistically significant, one can conclude that the stage of maturity does influence the lipid content. This is not surprising, to some extent, because in the case of olives for example another oleaginous pulpthe lipid content increases continuously during ripening (Gracia and León, 2011). However, here the oil content does not follow a simple pattern. 
This cannot be the result of water uptake, owing to the steady water content already commented on; if anything, the noted small loss of water in July should play against any decrease of oil content. Therefore this variation of oil content can reasonably result from biosynthetic features. Now regarding polyisoprene, the other pulp component soluble in hexane, we also note in Table 1 that the percentage follows the same pattern as for the oil. But the computed oil to polyisoprene ratio does not stay constant: the highest value (average 3.2) is found in samples harvested in April and May, while it drops to 2.2 in June and July samples. Once more this looks to be linked to an evolution of biosynthetic pathways, possibly due to the stress imposed by the very dry season in the area, although the reason and governing parameters remain unknown. It also has to be taken into account that the content of oil and polyisoprene in pulp reached the maximum value in June (total 11.9 ± 0.6% dry weight). Therefore it might be that the supposed end of ripening stage (July regarding collected samples) is actually an over-ripe stage, if considering the hexane-extractible fraction. The acidity of extracted pulp oil goes up and down between 8.0 and 16.4% over the whole period (Tab. 1). It is somewhat surprising to note a quite high value in June (14.8%) while it was "only" 8.0% in May. However this pattern follows the one noted for oil content, therefore this might be due to the evolution of biosynthesis of acylglycerols over the harvest period. Although we do not have an explanation for the variation of these parameters vs. harvest date, these results (very high acidity and large fluctuations) call for carefully investigating the influence of harvest procedure, date and conditions on lipid biosynthesis over a longer period of time. We need to recall that in a previous investigation we noted no evolution of the acidity of the corresponding argan kernel oil during the same period, and it was less than 0.2%, indicating that the biosynthesis of triacylglycerols in the kernel was almost complete at this stage. Considering pulp oil composition, the major fatty acids are linoleic and palmitic (close to 29 and 25% respectively); oleic acid comes third (13-15%), with a slightly lower percentage of linolenic (∼12%). In spite of the relatively high percentage of saturated acids, the presence of both C18:2 and C18:3 series makes this oil quite interesting as an edible oil. This composition is different from the one found in the corresponding kernel oil, in which the major fatty acids found were oleic and linoleic, 51.3 and 30.4% respectively. The fatty acid composition of the pulp oil did not change over the surveyed period, as also previously noted for the kernel oil. Therefore, if argan fruits are considered as a double source of edible oils, they could be harvested before being considered fully ripe and fallen on the ground. Available literature data about argan pulp oil show a quite large variation, although all samples contain mainly C18 fatty acids (Tab. 2). Fellat-Zarrouck et al. (1987) found a quite similar composition between pulp and kernel oils, oleic being the major acid (42%), with equal amounts of linoleic and palmitic acids (∼18.8%), contrary to the present case showing linoleic as the major acid. There are also some differences among the minor fatty acids; for example myristic C14:0 found by Fellat-Zarrouck et al.
(1987) (4.3%), not seen in present case, whereas we detected lauric C12:0 (1.5%) and erucic acid C22:1 (0.9%). Variations between minor acids are not surprising, whereas the shift of oil type between myristic C14:0, oleic C18:1 or linoleic C18:2, asks for an increased work on larger series in order to confirm such changes of major fatty acid that could mark quite differing populations among argan trees. Evolution of cell wall components and soluble sugars As already mentioned, the pulp is currently used as feed for cattle. Thus the chemical composition of hexane-extracted argan fruit pulp over the ripening period is given in Table 3. Regarding fibers analyzed according to Van Soest et al. (1991), one notes in Table 3 that NDF (neutral detergent fibers or whole lignocellulosic fibers) does not exceed 31%, meaning that fibers and already presented hexane-extractible amount about 40% or less; thus other components make the main contribution, and some of them will be discussed later on. NDF content drops from 30.9 to 19.1% over the period, with no evolution between May and June samples. Other measured data -ADF (acid detergent fibers) and ADL (acid detergent lignin)follow the same pattern with time. However, when considering the individual components computed from above raw data, worth noting that whereas lignin is by far the main compound in April sample (14.9%), the three components lignin, hemicellulose, cellulosefall into the same range in July (5.9; 6.3; 6.8% respectively). Thus not only the fibers become minor contributors to the pulp, but also their composition changes; in fact the cellulose percentage does not move over the period, resulting in a more prominent cellulosic character of fibers in ripe fruits. This result about cellulose is close to those of Battino (1929) and Dupin (1949) (5.7 and 5.9%), while a higher value (12.9% fresh pulp) was possibly found by Fellat-Zarrouck et al. (1987) because of variable moisture content (20-50%). Lignin is an indigestible component but provides a smooth gastrointestinal tract and greatly reduces the risk of diarrhea. To our knowledge, no work has dealt with the determination of lignin from the pulp of argan fruits. The low content of lignin in July justifies the use of the argan fruit pulp as cattle feed, although the total fiber content indicates that other components are expected to increase the fodder value of argan pulp. NDF and ADF contents of argan fruit pulp are lower than the findings of Alcaide et al. (2003) for olive pulp, who reported that NDF and ADF contents of extracted olive pulp was 64.1 and 51.6%, respectively. Proteins, which are valuable components in fodder, here computed from the nitrogen analysis, are quite low and their content decreases from 10.1 to 6.4% over the survey period (Tab. 3). This falls in the range found by Battino (1929) and Dupin (1949): 6.2 and 5.1% respectively (after correction for moisture content). To our knowledge there is no literature reference about the evolution of proteins with time in argan pulp, but Ajana et al. (1999) reported a drop from 4.6 to 2.3% for Moroccan Picholine olives, thus a similar evolution. Given the relatively low content of proteins in July, sugars another group of nutrients in fodderwere also investigated. The total soluble sugars increased drastically between April and July, from 3.9 up to 15.7% (Tab. 
4), thus becoming the first class of investigated pulp components, before any individual cell wall polymers (cellulose, hemicellulose, lignin), and close to the total fiber content (19.1%). A closer look shows that this class comprises essentially three components; glucose, the main one in all samples, increases from 2.2 up to 5.6%. Saccharose, the second ranked in April experiences the strongest change from 1.0 to 7.5%, becoming the main sugar in July. Fructose stays third ranked but also increases, from 0.6 to 3.6%. Rhamnose was detected only in April sample. The fact that xylose or mannose, galactose, arabinose were not detected does not play in favor of these soluble sugars originating from hemicellulose hydrolysis. Indeed Table 3 shows that hemicellulose stayed rather stable, with a drop of only 1.9% over the period. Same remark applies to cellulose, the drop being even smaller (only 1.0%). Thus saccharose and its two constituents fructose and glucose are likely to result from de novo synthesis during the very last months of argan fruit ripening, similarly to other fruits, as usual in many fruit pulps. Sandret (1956) pointed soluble sugars as main components of total carbohydrates in pulp. Also, Aboughe-Angone et al. (2008) investigated extensively the insoluble sugars fraction, and concluded that the major constituents in pulp cell walls are a galacto-xyloglucan among the polysaccharides, together with arabinogalactan-proteins, galactose and arabinose making 70% of total oligosaccharides in pulp. As a matter of fact Heuzé et al. (2015a) reported average total sugars and gross energy content in pulp close to those of destoned olive cake (15.3% and 18.2 kJ/kg; 14.5% and 20.6 kJ/ kg) and to oil palm press fibers (18.3 kJ/kg), both derived from the pulp of an oleaginous fruit similar to argan fruit. Our results complement scarce information available from literature about pulp composition, and confirm the reported high nutritional value of argan fruit pulp (estimated to be 70-100% of that of barley by Sandret (1956)), among other co-products of argan tree cultivation and processing chain (Heuzé and Tran, 2015b). Also we noted that late harvest maximizes the soluble sugars content but minimizes the content of sugars-containing cell wall polymers. Ash and micronutrients Last investigated pulp fraction deals with phosphorus and metals in the mineral residue or ash which can bring useful nutriments in feeding livestock or as fertilizer. There is a slight decrease between April and July (7.6 to 5.5%; Tab. 3). The same values were also found by Fellat-Zarrouck et al. (1987) and Dupin (1949), (respectively 4.1 and 4.6%), however Battino (1929) showed a remarkable lower content compared to our results (0.2%). These values are higher than those in cakes of peanut, cottonseeds and soybeans (3.7%), and olive fruits (2.7-5.5%) (Ajana et al., 1999). Conclusion While most research efforts concerning the argan fruit for decades were focused on the kernel and its derived high value oil ("argan oil") owing to the increasing uses in food and cosmetics, there was a lack of information about the pulp. This preliminary research work starts to fill the gap: main constituents were monitored during fruits ripening, when color turns from green to yellow, just before falling on the ground. 
Significant variations were found for oil and polyisoprene, the latter being at a maximum in June, whereas the former was as high in April as it was in June; but here again the July sample was not suitable for maximizing pulp oil yield. The same remark applies to proteins and to hemicellulose and cellulose (a source of fermentable sugars), which were highest in April, stayed at a high level until June, but were noticeably lower in July. The sole exception concerns the soluble sugars, whose total content showed a four-fold increase during the surveyed period. Therefore the supposed fully mature stage in July, based on color change, does not appear to be the preferred time for an optimized multiple-product valorization of the pulp (for example pulp oil, latex and fodder for goats), the innovative target that led to this study. An earlier harvest would not impede kernel oil yield, as already experienced, and would still provide the extracted pulp cake as fodder for local farmers. These results call for a detailed investigation (i) of lipid and polyisoprene biosynthesis, in order to optimize the harvest date and to maximize extraction yields, but also (ii) of the factors acting on the observed variability of the monitored pulp components when comparing our results to those in the literature.
Forces and fluctuations in planar, spherical and tubular membranes

Lipid membranes constitute very particular materials: on the one hand, they break very easily under microscopical stretching; on the other hand, they are extremely flexible, presenting deformations even at small scales. Consequently, a piece of membrane has an area excess relative to its optically resolvable area, also called the projected area. From a mechanical point of view, we can thus identify three tensions associated to lipid membranes: the mechanical effective tension $\tau$, associated to an increase in the projected area and to the flattening of the fluctuations; the tension $\sigma$, associated to the microscopical area of the membrane and thus not measurable, but commonly used in theoretical predictions; and its macroscopical counterpart measured through the fluctuation spectrum, $r$. Up to now, the equality between these quantities was taken for granted when analyzing experimental data. In this dissertation, we have studied, using the projected stress tensor, whether and under which conditions it is justified to assume $\tau = \sigma$. We studied three geometries (planar, spherical and cylindrical) and obtained the relation $\tau \approx \sigma - \sigma_0$, where $\sigma_0$ is a constant depending only on the membrane's high frequency cutoff and on the temperature. Accordingly, we conclude that neglecting the difference between $\tau$ and $\sigma$ is justifiable only for membranes under large tensions: in the case of small tensions, corrections must be taken into account. We have studied the implications of this result for the interpretation of experiments involving membrane nanotubes. Regarding $r$, we have questioned a former demonstration concerning its equality with $\tau$. Finally, the force fluctuation for planar membranes and membrane nanotubes was quantified for the first time. With the experimental setup used to pull membrane tubes, we can measure not only the force needed to extract the tube, but also the mean square deviation of this force along the direction of the tube axis. This quantity could provide additional information on the mechanical characteristics of membranes. Using the diagrammatic tools introduced previously, we then evaluate, in the following chapter, the mean square deviation of the force needed to pull a tube. We predict a dependence...

Introduction
In this section we will present membranes, first from a biological and historical point of view (section 1.1) and secondly from a modern physical perspective (section 1.2). In section 1.2.1, we define the quantities that are the focus of this work: the mechanical tension τ, the Lagrange multiplier σ and its measurable counterpart r. The main theoretical models for membranes, as well as their validity, are presented in section 1.3. There we derive the first fundamental results for planar membranes in contact with a lipid reservoir. Section 1.4 summarizes the most current experimental techniques used to characterize the mechanics of membranes. Finally, the stress tensor for planar membranes is introduced in section 1.5.

1.1 Biological membranes
During the last four hundred years, the image of the cell has become more and more complex [8] (see Fig. 1.1). As experimental techniques evolved, many questions were answered, and many others were raised. In particular, we have learned a lot about the cell's boundary.
We will start thus by a brief non-exhaustive historical review (further details can be found on [8], [9], [10], [11]). Up to the 19th century, living beings were believed to have a sponge-like microscopical structure. There would be two continuous substances: a membranous meshwork, as one can see in inset 1.1(a), and a fluid filling the communicating cells. The meshwork was considered the true essential constituent, while the fluid had a mere nourishing function [8]. At that time, the term membrane named already the cell's boundary, although it corresponded more to what we nowadays call the cell wall, the rigid cellulose structure that encapsulates plant cells. This image changed in 1807, when Link showed that cells were in fact separated. He observed that colored fluids did not diffuse through the surrounding cells, as one would expect with the former theory. He concluded thus that the essential component of life was the unitary cell itself. Due to the low numerical aperture of the microscopical objectives available at that time (see inset 1.1(c) for a typical image), animal cells were also believed to have a membrane, i. e., a cell wall. Cells from both animals and plants had then the same features: a nucleus, an aqueous plasma and a cell wall. This apparent universality was an important support to the cellular theory proposed by Schwann and Schleiden, which stated that every living being was constituted by cells. 1665. Inset (b) left shows the first drawing of the nucleus accurately made by Leeuwenhoek in 1719, while he studied the salmon red blood cells (which are nucleated in fishes). The right photo shows a contemporary optical microscope image of the same cells for comparison. The next inset shows a modern photograph of plant tissue taken with roughly the same technology as in 1828, five years before Brown, best known for his observations of the Brownian motion, named the nucleus. Inset (d) depicts the chromosomes inside the nucleus (Flemming). Finally, sub-figure (e) shows a nowadays electron micrography of a plant cell [12]. We can see the complicated internal structure and identify some organelles. Some years later, the histologist and physiologist William Bowman [13], best known for his work in nephrology (the Bowman's capsule is named after him), represented for the first time an actual cell membrane. In his 1840 work, he studied the striated muscle cells [14]. He noticed that by stretching those cells, he could disrupt their internal fibers leaving a transparent sheath called sarcolemma intact (see Fig. 1.2). This sheath, in that time thought a cell wall, is actually what we currently call the cell membrane. Subsequent experiments on the effects of osmotic pressure on plant cells showed that the cell could pull away from the cell wall. This phenomenon is called plasmolysis. It is caused by the selective permittivity of the cell membrane, which is permeable to water, but not to ions, sugars and other water soluble molecules. Although specialist of that time could have interpreted it as an indirect evidence of a membrane encapsulating the cell, they blamed osmotic effects on vacuoles and proposed the naked-cell theory: the cell was defined as a small naked lump of protoplasm with nucleus, in Schultze's words (1860). It could eventually be encapsulated by a non-essential skin (de Bary, 1861) [8]. [14]. The disrupted fibers are enclosed by a sheath called sarcolemma: the cell membrane. 
The first one to realize that there was effectively a skin of different nature around the protoplasm was Ernest Overton in 1895 [15]. He noticed that plant cells under a 8% sugar solution suffered plasmolysis. This indicated that sugar molecules could not easily penetrate the cell even though the protoplasm was composed by water. He repeated the experiment using successive solutions of alcohols, ether, acetone and phenol with the same osmotic pressure as the sugar solution. He remarked that in some of these cases there was no plasmolysis, depending on the substance solubility in water. He concluded that water insoluble substances penetrated easily the external part of the protoplasm (Grenzschicht), whereas water soluble molecules, as sugar, did not. He inferred thus that the boundary region was distinct from the 1. 1. BIOLOGICAL MEMBRANES rest of the protoplasm (see Fig. 1.3) and that it was impregnated by a substance of the same nature of fatty oils. Indeed, today we know that biological membranes are mostly composed by amphiphilic lipids, having a hydrophobic tail and a hydrophilic head. This feature make them self-assembly in aqueous media in large bilayers, even though there are no chemical bonds between them. This explains the relative impermeability of membranes to water soluble substances. The most abundant kind of lipid present on membranes are phospholipids. These molecules have one or two long hydrocarbon chains, which may contain only simple bonds (saturated) or double bonds (unsaturated). As we shall see in section 1.2, the stability of the bilayer is assured by the fact that these molecules have an effective shape close to a cylinder, whose dimensions are typically 0.5 nm for the radius and 1.0 − 1.5 nm for the length [16]. In addition to phospholipids, membranes may also contain cholesterol, an amphiphilic lipid that unlike phospholipids has a ring-like tail. [15]. The cell wall (indicated by c.m.) and the cell membrane (indicated by pl. ext.) are represented as different entities. Note that Overton decided arbitrarily the thickness of the membrane. The first indirect estimation of the membrane thickness was made by Hugo Fricke in 1925. He measured the capacitance of blood cells and, supposing that they were composed by lipids, deduced a thickness of 3.3 nm [17]. This is a remarkable result, since posterior direct measures give a thickness between 5 nm and 10 nm [18]. Meanwhile, Gorter and Grendel gave a fundamental step towards the comprehension of how lipids arrange themselves within the membrane [19]. They extracted the lipids from a known number of red blood cells. Using a Langmuir trough, they measured the area covered by lipids and it corresponded to twice the estimated surface area of the erythrocytes. They deduced that the membrane was constituted by a bilayer of lipids whose polar heads pointed outward. In 1932, a puzzling experiment suggested that there was more than lipids on a membrane. Kenneth Cole studied urchin eggs by compressing them between a plate and a gold fiber with a known force, much in the same way as in modern experiments [20] (see Fig. 1.4). He deduced the tension of the egg's membrane by studying its degree of flattening. His measures yielded a tension of 0.08 dyn/cm, which is a hundred times smaller than the tension of oily films [21]. This was a surprising result, if one believed the membrane to be constituted only by lipids. Two years later, Danielli and Harvey solved the paradox. 
They centrifugated smashed mackerel eggs in order to separate lipids from the aqueous phase. First, they measured the surface tension of the oily phase and obtained ≈ 9 dyn/cm. Then they added the aqueous phase and observed a tension lowering. By studying the time evolution and the influence of temperature on their mixture, they deduced that proteins were responsible for the tension lowering [22]. Danielli and Davson proposed in the following year the first model of the membrane structure: every plasma membrane would have a core of lipids bordered by two monolayers of lipids whose polar head pointed outward and the whole would be coated by a layer of proteins [23] (see Fig. 1.5). It was an important step, since today we know that proteins are responsible for almost every membrane function but enclosing, such as active transport of molecules, binding to cytoskeleton and reception of chemical signals from the outer environment. 1. 1 In the 1950s the new technology of electron micrography allowed to make the first direct images of the cell membrane (see Fig. 1.6). The use of permanganate fixation was also important, since it stained only the hydrophilic head of lipids. Robertson observed a three line pattern of about 7.5 nm corresponding to a simple bilayer. This excluded the possibility of a bulk of lipids in membranes. He observed the three lines pattern not only on the cell boundary, but also encapsulating organelles from different animals and bacteria [18]. Moreover, using innovative staining processes, he showed the asymmetry of some membranes' coating, the external surface containing also carbohydrates. These new features were incorporated in the unit membrane model: every biological membrane shared the same architecture -a lipid bilayer, asymmetrically coated by proteins and carbohydrates. Figure 1.6: Electron micrography of the cell membrane [24]. The hydrophilic heads of lipids are colored, while the hydrophobic tails are not. This results on the characteristic three lines pattern seen above. examined by a transmission electron microscope. In the late 1960s, da Silva and Branton showed that on biological membranes, these fractures tended to pull apart the lipid bilayer [25], [26]. They obtained the micrography shown in Fig. 1.7(a), which suggested that proteins were actually embedded in the lipid bilayer. Besides, another work using fluorescent labeling showed that proteins diffuse, implying that membranes were in fact fluid [27] (see Fig. 1.7(b)). (a) Micrography of a fractured human erythrocyte membrane. The left surface corresponds to the external fracture (EF) and the right corresponds to the protoplasmic fracture (PF). The tiny particles on both surfaces measure between 5 and 10 nm and correspond to proteins. Note that they are more numerous on the PF face due to the presence of peripheral proteins attaching the membrane to the cytoskeleton. (b) Original picture of the fluorescence labeled antigens experiment which showed that proteins diffuse on the cell membrane [27]. Antigens were labeled in red (lower half) and green (upper half). After some minutes, the colors were uniformly distributed over the cell. Finally, bearing in mind these experiments, Singer and Nicholson proposed the mosaic fluid model of membranes (see Fig. 1.8 for a sum-up), which is the basis to the modern vision of biological membranes. They made the distinction between peripheral proteins, i. 
e., those loosely attached to the membrane like those that bind the membrane to the cytoskeleton, and the integral proteins, which are embedded in the lipid bilayer. They postulated that lipids and proteins diffuse freely inside the membrane's surface, as a two-dimensional liquid. Consequently, membranes should have no long-range order. They noted that the membrane leaflets were probably asymmetrical with respect to lipid and protein composition due to the energy barrier of moving the polar head from the aqueous interface into the bilayer interior. Later experiments confirmed that asymmetry was indeed present on biological membranes [28]. 1 The model was elegant, but experiments rapidly showed that it was oversimplified. Biological membranes are not so homogeneous as the fluid mosaic model implies. Already in the 80's, indirect measures showed that polarized epithelial cells present membrane domains, i. e., the cell membrane that faces a cavity has different composition from the other faces [29]. Moreover, biological membranes present also smaller domains, ranging from dozens to hundreds of nanometers [30]. A simple statistical reasoning suggests that heterogeneities should indeed be expected: a random lipid and protein distribution means that the pairwise interactions between lipid-lipid, lipid-protein and protein-protein should be within thermal energies, which is rather improbable, given their wide variety [31]. At this point, model membranes, i.e., lipid bilayers reconstituted in laboratory to mimic biological membranes (see section 1.2), confirmed and gave new insight to the question of lipid phase separation. A mixture of phospholipids in the gel/liquid phases segregates, as shown in Fig. 1.9(a) [32]. At the interface between these phases, a line tension builds up and they tend to separate in order to minimize energy. More interestingly regarding cells, it is possible to have phase separation between two liquid phases: the liquid-disordered and the liquid-ordered [16]. Experimentally, this may be achieved in a mixture of phospholipid and cholesterol [33], [34] (see Fig. 1.9(b)). Besides the segregation of lipids, it was also shown that some proteins cluster in model membranes. In these cases, the aggregation depends on the length of the lipids constituting the bilayer where proteins are embedded [35]. In addition, modern techniques, such as single particle tracking, show that lipids and proteins in living cells can have anomalous movements, such as directed or confined motion and anomalous diffusion, possibly due to the cytoskeleton or to restrictions imposed by lipid domains [36] (see Fig. 1.10). 1. 1. BIOLOGICAL MEMBRANES 9 CHAPTER 1. INTRODUCTION (a) typical trajectories of gold particles attached to certain proteins on a cell surface (followed for 30 s). Trajectory A corresponds to the stationary mode, B, E and F to simple diffusion, C to directed diffusion and D to restricted diffusion [37]. (b) Here, single particle tracking is used to study a model lipid monolayer divided in two phases: the liquid-ordered (dark gray) and liquid-disordered (light gray). The arrow in (a) indicates the polystyrene bead that was tracked. Fig.(b) shows the bead's random walk and (c) shows a detail of this walk. Remark that the bead remains on the liquiddisordered phase [38]. Finally, modern fluorescence techniques allow the direct visualization of domains in living cells [39]. In Fig. 1.11, Fig. 1. 12.A and Fig. 1. 12.B, one can see patches whose overall structure is different. 
The discriminating feature is the GP (generalized polarization): red for liquid-ordered phase, richer in cholesterol, and blue the liquid-disordered phase. The images show clearly that there is coexistence of liquid phases in cells. Moreover, it is possible to detect certain types of proteins. In Fig. 1. 12.C and in Fig. 1. 12.D, transferrin receptor and caveolin-1 are respectively shown by fluorescence. Fig. 1. 12.E and Fig. 1. 12.F show the merge of C and D with B, respectively. We can see that the transferrin receptor is found mostly on the liquid-disordered phase, while caveolin-1 is found mostly on the liquid-ordered phase. Indeed, for a long time liquid structures called rafts composed mostly by gly-10 1. 1. BIOLOGICAL MEMBRANES CHAPTER 1. INTRODUCTION cosphingolipids, cholesterol and some proteins were suspected to exist [16]. These rafts would help protein sorting and be involved in cell signaling. These images present however only some evidence of their existence and the subject is still debated [40], [41]. Figure 1.11: GP images of living macrophages (mouse RAW 264.7 and human T HP − 1, respectively): red stands for the liquid-ordered phase and blue to the liquid-disordered. The circled area indicated in B shows the pixilation of the image (167 pixels inside the circle). Note in A that liquid-ordered phase tend to be observed on the tip of filopodia [39]. Figure A shows the GP image and B shows the corresponding dual-colored image. Figs.C and D show the fluorescence images for transferrin receptor and caveolin-1, respectively. Figs.E and F correspond to the superposition of B with C and D, respectively: light blue patches indicate colocalization with liquid-disordered phases and yellow patches indicates colocalization with liquid-ordered phases [39]. In order to account for some of these results, refinements of the fluid mosaic model were proposed. For instance, to explain protein clustering Mouritsen proposed the Mattress Model [42] (see Fig. 1.13). The model comes from the observation that the bilayer thickness may be smaller or larger than the length of the hydrophobic part of embedded protein. This mismatch would expose hydrophobic parts of the protein or of the lipids, which would in consequence deform. The deformation would give rise to a line tension which would tend to cluster proteins and aggregate some kinds of lipids. Another model was proposed by Erich Sackmann to explain the confinement of proteins observed during single particle tracking. It stresses the interactions between the membrane and the cytoskeleton. To the moment however, there is no model that accommodates all recent results about biological membranes. Model membranes and mechanical probing As we have seen in the last section, biological membranes are very complex. Simpler membranes reconstituted in laboratory called model membranes are thus doubly interesting. First, they give insight to the comprehension of phenomena in living cells. Model membranes are advantageous because they have both chemical composition and environment controlled, allowing reproducible experiments. Moreover, these membranes are generally in thermodynamic equilibrium, which is impossible to achieve in living cells. By consequence, these experiments may be described using conventional statistical mechanics tools. Secondly, model membranes have technological interest in their own: they are used to improve drug delivery, to build micro-chambers for chemical reactions [43], [44] and even to build bio-electronic devices [45] (see Fig. 
1.14). (b) Artistic representation of a bioelectronic device composed by a nanowire 30 nm wide (gray) covered by a lipid membrane (blue/orange). In this membrane, proteins that control ion passage were incorporated (pink) (image by Scott Dougherty, [45] Model membranes are prepared by dissolving phospholipids in an aqueous solution. In order to minimize the exposition of their hydrophobic tails to water, they self-assemble in a large variety of forms, from small micelles to vesicles and bilayers, depending on temperature, on concentration and on the effective shape of phospholipids, which is a measure of their average cross section of as a function of how profoundly buried they are on a membrane. In Fig. 1.15(a), we can see examples of effective shapes. Note that to form a bilayer, lipids must have roughly a cylindrical shape. In Fig. 1.15(b), we can see an asymmetrical bilayer, which naturally tends to bend. We remark that the leaflets' asymmetry is stable over time, since spontaneous passage of lipids from one monolayer to the other, known as flip-flop, is very slow in pure lipid bilayers (of the order of several hours [46]). Indeed, there is a high energetic barrier for the hydrophilic head to traverse the hydrophobic core of the membrane. An essential point is the very weak water solubility of phospholipids. This implies that once a structure such as a vesicle is formed, the number of phospholipids it contains is constant. Besides, phospholipids do not resist to stretching, as we will see in section 1.2.1 and thus the total area of these structures is also constant. 15: Inset (a) shows the effective shape of some phospholipids: in pink, phosphatidylcholine (PC), in blue lysophosphatidylcholine (LPC), with only one hydrophobic tail, and in green arachidonic acid (AA), with an unsaturated tail. The upper surface corresponds to the hydrophilic head. In the center, we can see some self-assembled structures. Inset (b) shows an asymmetrical bilayer whose composition leads to a natural bending tendency. In this work, we are interested in the mechanical properties of liquid membranes. To this aim, three structures are usually studied: planar membranes, vesicles and membrane tubes. Planar membranes are also called black film membranes (BLM). They are used since the 60's and their name come from the destructive interference that a light beam suffers due to the thinness of the lipid membrane. The experimental set is constituted by two aqueous chambers separated by a plate (see Fig. 1.16(a)). This plate, usually made of hydrophobic materials to assure the adherence of lipid molecules, has a hole ranging from micrometers to several millimeters (see Fig. 1.16(b)). A bilayer can be deposed over this aperture through a variety of techniques [47]. BLMs are widely used to characterize the electrical properties of membrane spanning proteins, since one can control the composition of both aqueous solutions. It has also been used in single particle tracking [48]. Sadly, the technique presents many disadvantages for mechanical probing. First, one cannot control the tension of the frame: it depends on the film deposition. If the tension is too small, the membrane fluctuates a lot and is unstable. So, usually the film is relatively tense. If however it is too tense, a minimum osmotic difference between the two cavities leads to the rupture of the membrane [49]. Another problem is the film deposition technique, which may involve solvents that contaminate the membrane [50]. 
The most popular objects used for membrane mechanical probing are uni-lamellar vesicles, which are self-assembled bags of a single bilayer containing fluid. They are obtained from several techniques and range in size from a few tens of nanometers to tens of micrometers. In the last case, they are also called GUV (giant uni-lamellar vesicles) and they are of special interest, since they have roughly the same size of cells, they are easy to manipulate and they are directly visible with light microscopy techniques [52]. Moreover, they are stable and they can be deflated by changing the osmotic pressure. Each vesicle has a constant surface, since phospholipids are weakly soluble in water and their volume is also constant, as long as the osmotic pressure is kept constant. They appear in a variety of fluctuating shapes (see Fig. 1.17), whose average form depends on the enclosed volume, total area and the asymmetry between the leaflets that form the bilayer [16], [53]. Note however that these images are coarse-grained, as the wave length of light, about half a micron, is much bigger than the membrane thickness. So, only low-frequency fluctuations are visible. In the last ten years, another structure used to characterize a membrane are membrane nanotubes, such as those seen of Fig. 1.14(a). These tubes are formed when a point force is applied to a lipid bilayer. Their radii range from a few to hundreds of nanometers. We will discuss them in detail in section 1.4.4. How do you characterize mechanically a membrane? The first way to characterize the mechanical behavior of a material is by studying how it behaves under a reversible deformation, i. e., by studying its elastic deformations. On a mesoscopic scale, i. e., on length scales bigger than the material thickness, but smaller than the persistence length, which we will define in the following, one can imagine three of these deformations: bending, stretching and shearing. For thin interfaces, such as lipid membranes, bending means changing the curvature of a piece of material keeping its area constant (see Fig. 1.18(a)), stretching means increasing the average area per molecule that composes the material by applying a tangential stress (Fig. 1.18(b)) and shearing means changing the shape without changing its area ( Fig. 1.18(c)). As lipid membranes are composed by two leaflets, one should also consider the friction between these layers. In the following, we will deal only with static measures, so friction will not be important. In liquid interfaces, such as membranes in the liquid state, molecules are free to move. Consequently, there is no resistance to shearing and we will not study this kind of deformation. The resistance to stretching is measured by the compression modulus K. It is defined by the amount of energy E K per unit area needed to increase a piece of surface A 0 of ∆A: Similarly, the capacity of bending is measured by the bending rigidity modulus κ and the Gaussian curvature modulus κ G defined by where E curv is the energy per unit area needed to bend, R 1 and R 2 are the two principal curvature radii seen on Fig. 1.19, H is the mean curvature, given by and H 0 is the spontaneous mean curvature. Due to the liquidity, the spontaneous mean radius R 0 is isotropic and H 0 = 1/R 0 . Equivalently, the bending rigidity of a material is reflected by its persistence length ξ, defined as the length beyond which the correlation in the direction of the tangent to the surface is lost. 
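In their standard forms, and assuming the usual Helfrich conventions (the numerical prefactors may be defined slightly differently elsewhere in this dissertation), the stretching energy per unit area, the bending energy per unit area and the mean curvature read:

\[ E_K = \frac{K}{2}\left(\frac{\Delta A}{A_0}\right)^2, \qquad E_{\mathrm{curv}} = 2\kappa\left(H - H_0\right)^2 + \frac{\kappa_G}{R_1 R_2}, \qquad H = \frac{1}{2}\left(\frac{1}{R_1} + \frac{1}{R_2}\right). \]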
For a free membrane, it relates to the bending modulus through where a is a molecular length of the same order of the lipid's length, c is a constant, k B is the Boltzmann constant and T is the temperature. As a consequence of the Gauss-Bonnet theorem, the total contribution of the Gaussian curvature for closed surfaces is constant: where the integral runs over a closed surface S and g is the genus number, which describes the topology of the surface. As we will not consider topology changes in this work, we will not consider this contribution to the energy of closed membranes henceforth. Now, let's see the figures for a typical phospholipid membrane. We will describe in section 1.4 how these quantities are measured. First, membranes are extremely flexible, with κ ≈ 20 k B T [55], which is about a quarter of million smaller than the bending rigidity of a sheet of polystyrene of the same thickness. This implies that they fluctuate a lot even in small scales, as it has indeed been measured through NMR and X-rays techniques on stacks of bilayers [56]. Secondly, they have a high compressibility modulus K ≈ 240 mN/m [55], which means that the stretching due to thermal fluctuation is ∼ 10 −8 % and thus negligible. Measures indicate that membranes rupture under a charge of only τ rup ≈ 10 mN/m [55], which means that it can only stretch about 4% before breaking apart. Indeed, as phospholipids are bonded to each other only by entropic forces and not by chemical bonds, it is relatively easy to break cohesion. Therefore, under a stress smaller than the rupture charge, it is a good approximation to consider the total area of the membrane constant, as we have discussed in last section. Throughout this work, unless explicitly said, we shall place ourselves under this condition and thus only the bending energy will be considered. The great flexibility of membranes has an important experimental consequence: one cannot measure the true surface of a bilayer. Indeed, up to the moment, we are not able to control exactly the number of phospholipids within a membrane. Moreover, they fluctuate on a nanometric scale, not resolvable with optical microscopy techniques. Thus, it is useful to define two macroscopic quantities: the excess area and the effective or entropic mechanical tension. The excess area α is simply a measure of the average membrane crumpling which is not optically resolvable. It is defined by where A is the microscopic membrane area and A p is the optically resolvable area, which we will also call in the following the projected area (see Fig. 1.20). Experimentally, we have access only to variations on the excess area. The membrane area A corresponds to the surface area of the colored membrane, while the macroscopically resolvable projected area A p is a 2 . On the right, we see an illustration of the diminution of the excess area due to a mechanical tension applied on the same membrane patch: the area A remains the same, but the projected area is now a · b. The entropic mechanical tension τ is a measurable macroscopic averaged tension associated to the diminution of the excess area, i.e., to the flattening of fluctuations. It is a pure entropic force, arising from the diminution of accessible configurations. It is defined as where F = −k B T ln(Z) is the free-energy of the membrane, Z is the partition function and the symbol N p indicates that the derivative is taken under the condition 1 .2. MODEL MEMBRANES AND MECHANICAL PROBING that the total number of lipids is constant. 
The great flexibility of membranes has an important experimental consequence: one cannot measure the true surface of a bilayer. Indeed, up to now, we are not able to control exactly the number of phospholipids within a membrane. Moreover, membranes fluctuate on a nanometric scale, which is not resolvable with optical microscopy techniques. Thus, it is useful to define two macroscopic quantities: the excess area and the effective or entropic mechanical tension. The excess area α is simply a measure of the average membrane crumpling which is not optically resolvable. It is defined by

α = (A − A_p)/A_p ,

where A is the microscopic membrane area and A_p is the optically resolvable area, which we will also call in the following the projected area (see Fig. 1.20: the membrane area A is the surface area of the fluctuating membrane itself, while the macroscopically resolvable projected area is A_p = a^2; applying a mechanical tension to the same patch diminishes the excess area, the area A remaining the same while the projected area becomes a · b). Experimentally, we have access only to variations of the excess area. The entropic mechanical tension τ is a measurable, macroscopically averaged tension associated to the diminution of the excess area, i.e., to the flattening of fluctuations. It is a purely entropic force, arising from the diminution of accessible configurations. It is defined as

τ = (∂F/∂A_p)_{N_p} ,                (1.7)

where F = −k_B T ln(Z) is the free energy of the membrane, Z is the partition function and the symbol N_p indicates that the derivative is taken under the condition that the total number of lipids is constant. In the following, as we shall consider tensions much weaker than the rupture tension, it is justified to consider that the total mechanical tension corresponds simply to τ.

How different are membranes from liquid interfaces?

The aim of this section is to highlight the differences between membranes and other macroscopic materials. First, it is easy to understand why membranes behave differently from solid membranes, such as rubber membranes, since the molecules of the latter are not free to move. The difference is much subtler with liquid interfaces. Indeed, both present two-dimensional disorder and high deformability, both form thin films (see Fig. 1.21) and in both cases the term surface tension is commonly used in the literature. We shall see, however, that this expression has a different meaning in each context and that liquid interfaces are fundamentally different from membranes. The surface tension γ of liquids is a constant of the material, depending only on the molecular composition and on temperature. At the microscopic level, molecules of liquids, such as water, are strongly chemically bonded to each other and these bonds are energetically favorable. Molecules at the surface have fewer neighbors and are thus energetically costly: the surface tension tends to make the interface as small as possible, which results in a certain interfacial stiffness [57]. We simply do not observe this phenomenon in everyday life because of gravity. In terms of the free energy F, the surface tension is conjugated to the total area of the interface:

γ = (∂F/∂A)_V ,

where V indicates that the derivative is taken at constant volume. In lipid membranes, however, if there is no stretching, there is no contribution to the free energy coming from the interface area. So γ → 0: the lipids will form a bilayer without a bulk of lipids. A noteworthy confusion in the literature arises from misleadingly calling the mechanical tension τ a surface tension as well. The tension τ is also associated to a surface, but to the surface of the projected area. Moreover, it is not a material constant, since it has an entropic origin. In the following, we will avoid the ambiguous expression surface tension. Finally, another obvious difference is the bending rigidity. Lipids in a membrane are arranged in a particular ordered way due to their amphiphilic nature, which leads to a bending rigidity. In liquids, this is not the case. This difference is visible in Fig. 1.22: while liquid drops present a sharp contact with a substrate, membrane vesicles have a rounded contact region. Besides, one cannot expect to extract tubes by applying point forces to liquid films, as one does with membranes (see section 1.4.4). (Fig. 1.22: (a) metallic liquid drop over a solid substrate; note the sharp edges at the contact between the drop and the solid [58]. (b) Optical micrograph of two vesicles adhering to a pure glass substrate, which reflects the vesicles [59]; the rounded shape of the vesicle near the glass is due to the bending rigidity.)

A model for model membranes

Here we present the three main theoretical models for liquid lipid bilayers. They describe membranes on a length scale much larger than the membrane thickness, so that the membrane can be seen as a mathematical surface. They differ mainly in the description of the membrane's two-leaflet structure [60]. In the following three descriptions, the microscopic area A is kept constant. In the case of vesicles, there is an additional constraint on the volume enclosed by the surface.
Spontaneous curvature (SC) model

This model was introduced by Helfrich in 1973 [61] and is the simplest one. The membrane is seen as an infinitely thin surface and its internal bilayer structure is described by a spontaneous mean curvature H_0. The Hamiltonian of a bilayer is simply given by the bending energy E_curv,

H = ∫_S dA [ 2κ (H − H_0)^2 + κ_G/(R_1 R_2) ] ,                (1.9)

where the integral runs over the membrane surface S.

Bilayer couple (BC) model

In this model, the two leaflets of a bilayer may respond differently to an external perturbation, such as chemical substances, while remaining coupled [62]. It was first introduced in 1974 to explain qualitatively experiments on red blood cells [63], [64]. It had been observed that erythrocytes treated with amphiphilic drugs change shape, becoming more cup-like or, instead, spiked (see Fig. 1.23). The authors proposed that spike-inducing drugs tend to bind to the external leaflet of the bilayer, while cup-inducers bind mainly to the cytoplasmic leaflet. Each monolayer would thus have a different area, which would force a curvature. As flip-flop transitions are very slow, this area difference would be constant over time. Further evidence in favor of this model comes from vesicles: if the SC model were correct, vesicles composed of a single kind of phospholipid and similarly prepared should behave the same way, since the natural bending tendency would come exclusively from the chemical asymmetry of the monolayers. Experiments show the contrary: vesicles prepared the same way have different preferred curvatures, possibly because the two monolayers had different areas when they closed to form a vesicle [65]. The model proposes that the preferred curvature of a membrane depends on two contributions: the spontaneous curvature of each monolayer, which adds up to a local spontaneous curvature of the membrane H_0, and the area difference between the two monolayers, which gives a non-local contribution [66]. The energy in this model is still given by equation (1.9), but there is an additional constraint on the area difference between the neutral surfaces of the outer and inner leaflets, the neutral surface being the imaginary surface within a bent leaflet where there is no compression or extension. This means keeping a hard constraint on

ΔA = 2 D M ,

where D is the distance between neutral surfaces and M is the total mean curvature.

The area-difference elasticity (ADE) model

The ADE model is a generalization of the two preceding models. It was introduced in the 90's to explain the budding transition of some vesicles, i.e., when a vesicle adopts the shape of a parent vesicle attached through a neck to a smaller vesicle (see Fig. 1.24). The BC model predicts that this transition should be continuous, while in some experiments discontinuous transitions were observed [65]. In this case, the pear-like vesicle is unstable, but this is not always the case [53]. The ADE model accounts for the fact that small relative compressions or extensions of the bilayer have an energetic cost comparable to the bending energy. Instead of a hard constraint on the area difference between leaflets, the area difference is regulated through an additional quadratic term in the energy, leading to

H_ADE = ∫_S dA 2κ (H − H_0)^2 + (π κ̄/(2 A D^2)) (ΔA − ΔA_0)^2 ,

where ΔA_0 is the optimal area difference, also defined through

ΔA_0 = (N_out − N_in)/φ_0 ,                (1.12)

where N_out/in is the number of lipids on the outer/inner monolayer and φ_0 is the equilibrium density of lipid molecules.
In the limiting case where κ̄ → 0, we recover the SC model, whereas in the limit κ̄ → ∞, we recover the BC model.

Validity of the models

The question of which model best describes a bilayer was not easy to answer. The main difficulty when studying vesicles comes from the fact that the three models have the same equilibrium shapes [65], [67] (one can see a map of these shapes for the ADE model in Fig. 1.25 [66], which shows the phase diagram of the stationary shapes of vesicles as a function of the volume-to-area ratio v and the effective, dimensionless, area difference Δa_0). To complicate things, in certain cases, such as quasi-spherical vesicles, even the thermal fluctuations of the three models are the same [60]. The three models predict, however, different stabilities for the equilibrium shapes. For instance, the SC model predicts that pear-like vesicles should always be unstable, the BC model predicts these shapes to be always stable and the ADE model predicts stability for large values of κ̄. Another difference is the nature of the shape transitions induced by changes in temperature or osmotic pressure, which are in general continuous in BC and ADE and discontinuous in SC. A careful work by Döbereiner et al. [53] showed that experimental data were indeed compatible with the theoretical phase diagram of the ADE model shown in Fig. 1.25. This result was corroborated by good agreement between experiment and theory in the analysis of the stability and of the trajectories in this phase diagram. Nowadays, the ADE model is accepted as the best description of closed bilayers. There are however some situations where using SC is justifiable. First, as we have said, the models are equivalent for the study of thermal fluctuations of quasi-spherical vesicles; it is justifiable and simpler to use the SC model in this case. Besides, there is no area difference between monolayers when these are both in contact with the same lipid reservoir, and thus SC is suitable also in this case.

From the canonical ensemble to the macrocanonical one

The Hamiltonians presented in the previous sections carry an additional constraint on the number of lipids per membrane, or equivalently, on the total surface. In statistical mechanics, the ensemble of these constrained configurations is known as the canonical ensemble. It is a standard procedure to pass to the macrocanonical ensemble and let the area fluctuate. Physically, it means that the system we are interested in is in contact with a large reservoir of lipids, which may be a justified supposition in some cases. In this ensemble, one can control the average area by adding a term σA to the Hamiltonian. The constant σ is a Lagrange multiplier analogous to a chemical potential, used to impose a certain value on the average area a posteriori, through

⟨A⟩ = ∂F/∂σ ,

where F is the free energy. The Lagrange multiplier σ has the dimension of a tension, so it is sometimes also called the surface tension, a term we will avoid here. It is important to note that it is generally not measurable in experiments. It has however a measurable macroscopic counterpart r, which we will define in the next section.

Fluctuation spectrum for small fluctuations

We suppose here that we study a symmetrical planar bilayer with squared projected area A_p ≡ L^2, in contact with a lipid reservoir and well described by the SC model. The energy is given by

H = ∫_S dA [ σ + 2κ H^2 + κ_G/(R_1 R_2) ] .                (1.15)

This energy will be used many times throughout this work. We will call it simply the Helfrich Hamiltonian.
Consider now that the membrane is reasonably tense, so that fluctuations are small. Its position can be described by its height h(r) with respect to a plane Π parallel to the average plane, where r = (x, y) (see Fig. 1.26). This is known as the Monge gauge. Under these assumptions, eq. (1.15) becomes

H = σ A_p + (1/2) ∫ dx dy  h(r) L h(r) ,                (1.16)

where L ≡ κΔ^2 − σΔ is the operator associated to the quadratic terms of the energy. In order to evaluate averages involving h(r), in field theory one usually adds a term proportional to an imaginary external field m(r) to the Hamiltonian, obtaining

H_m = H − ∫ dx dy  m(r) h(r) .                (1.17)

The corresponding free energy is

F[m] = −k_B T ln(Z[m]) ,

with the partition function given by the functional integral

Z[m] = ∫ D[h] e^{−H_m/(k_B T)} ,                (1.19)

and the average height and connected correlation function follow by functional differentiation,

⟨h(r)⟩ = −δF/δm(r) ,                (1.20)
⟨h(r)h(r′)⟩ − ⟨h(r)⟩⟨h(r′)⟩ = −k_B T δ²F/(δm(r)δm(r′)) ,                (1.21)

where δF/δm(r) represents the functional derivative of the free energy with respect to the field m at the point r. One now has to choose an appropriate measure D[h], which is in general a complex task [2], [68]. Up to first order in the temperature and up to second order in h, it is justified to consider simply a discretization of the projected plane into N^2 squares of area ā^2 and let

D[h] = ∏_{p_x,p_y} dh_{p_x,p_y}/λ ,

where h_{p_x,p_y} is the height at the point r_{p_x,p_y} = p_x ā e_x + p_y ā e_y, λ is a length introduced to render the measure dimensionless and both p_x and p_y ∈ {1, · · · , N} [2]. This measure, which we will call naive as in ref. [2], yields no supplemental term to the Hamiltonian. Evaluating the Gaussian integrals in eq. (1.19), one obtains that the correlation function G(r, r′) ≡ ⟨h(r)h(r′)⟩ satisfies

L G(r, r′) = k_B T δ(r − r′) ,                (1.24)

where δ(r) is Dirac's delta function. Using the Fourier transform

h(r) = Σ_{n,m} h_{n,m} e^{i q·r} ,  with q = (2π/L)(n, m) ,  n, m ∈ ℤ ,  |n|, |m| ≤ N_max ,

where N_max = L/(2ā) corresponds to the smallest possible wavelength, eq. (1.24) yields

Γ_{n,m} = 1/(σ q^2 + κ q^4) ,                (1.27)

Γ_{n,m} being the amplitude of the mode (n, m) in the expansion of G. From eq. (1.20) and eq. (1.21), one obtains respectively ⟨h(r)⟩ = 0 and the correlation function

⟨h(r)h(r′)⟩ = (k_B T/L²) Σ_{n,m} e^{i q·(r−r′)}/(σ q^2 + κ q^4) .                (1.28)

Note that the Gaussian curvature contribution vanishes. Applying the Fourier transform as defined above to h(r) and h(r′) in eq. (1.28), one obtains

⟨|h_{n,m}|²⟩ = k_B T/[L² (σ q^2 + κ q^4)] ,                (1.29)

where the wave vectors range from q_min = 2π/L to an upper cut-off Λ = 2πN_max/L ≈ 1/ā. Throughout this work, we shall consider that numerically ā ≡ a, where a is of the order of the membrane thickness. Remark that since h(r) is real, we have h_{−n,−m} = h*_{n,m}, where the symbol * stands for the complex conjugate, yielding |h_{−n,−m}| = |h_{n,m}|. Similar calculations can be carried out for non-planar membranes, yielding a correlation function similar to eq. (1.28). Consequently, by measuring the fluctuations of a membrane, one could deduce its bending rigidity and the tension σ. The problem is that experimentally we have access only to a coarse-grained vision of the membrane. One has thus to consider that the values deduced from the fluctuation spectrum are in fact renormalized values, which we will call κ_eff for the bending rigidity and r for the effective tension. Renormalization calculations [69] indicate that

κ_eff = κ − (3 k_B T/4π) ln(L/a) ,                (1.30)

where L is the size of the membrane and a is a microscopic cut-off. Experimentally, the dependence of κ_eff as a function of L is very difficult to measure, since the dependence is logarithmic. Numerical simulations however confirmed the logarithmic dependence on L [70]. For a rough numerical estimate, if we consider a vesicle with radius R ≈ 10 µm and a cut-off of the order of the membrane thickness, a ≈ 5 nm, we obtain κ_eff ≈ κ − 2 k_B T. The correction is thus one order of magnitude smaller than typical values of κ. Henceforth, we will assume κ_eff ≡ κ. The distinction between r and σ will however be kept and discussed throughout this work.
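To illustrate how the pair (r, κ) is extracted from a fluctuation spectrum in practice, here is a minimal sketch that fits ⟨|h_q|²⟩ = k_B T/[A_p (r q² + κ q⁴)] to synthetic data. The patch size, mode range and noise level are illustrative assumptions, not values from a real experiment.

```python
import numpy as np

# Minimal sketch: extracting r and kappa from a fluctuation spectrum
# <|h_q|^2> = kT / (Ap (r q^2 + kappa q^4)). Since 1/<|h_q|^2> is linear
# in q^2 and q^4, a linear least-squares fit suffices. Synthetic data.
kT = 4.1e-21                      # J
Ap = (20e-6) ** 2                 # m^2, projected area (assumed)
r_true, kappa_true = 1e-6, 20 * kT

q = 2 * np.pi / np.sqrt(Ap) * np.arange(1, 60)
spectrum = kT / (Ap * (r_true * q ** 2 + kappa_true * q ** 4))
spectrum *= np.random.normal(1.0, 0.05, q.size)   # mock measurement noise

# 1/spectrum = (Ap/kT) * (r q^2 + kappa q^4): linear in (q^2, q^4)
A = np.column_stack([q ** 2, q ** 4]) * Ap / kT
r_fit, kappa_fit = np.linalg.lstsq(A, 1.0 / spectrum, rcond=None)[0]
print(f"r = {r_fit:.2e} N/m, kappa = {kappa_fit / kT:.1f} kBT")
```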
Experiments

Here we present some current experimental setups and techniques used to measure the relevant mechanical parameters κ, K, τ, r and α. We will discuss neither measurements of κ̄, since we will focus on symmetrical membranes, nor measurements of κ_G, since we will work with closed membranes of fixed topology (for non-closed membranes, we will show in section 1.5 that it is not relevant). In section 1.4.5, we sum up these experiments.

Micropipette experiments

In these experiments, a micropipette of a few micrometers in diameter is held in contact with a vesicle. One increases the membrane tension by decreasing the pressure P^1_out in the pipette. A portion of the membrane of length L is then sucked inside the pipette and the optically resolvable surface A_p increases (see Fig. 1.27). Usually, successive measurements with increasing pressure are taken. The first configuration, when the vesicle is just grabbed by the pipette and L is small, is the reference configuration. The projected area of this configuration is A^i_p, R_1 is the radius of the micropipette and R_2 is the radius of the vesicle in the reference configuration. Under the condition of constant volume and R_1 ≪ R_2, the percent difference in projected area of a later measurement, whose projected area is A^f_p, is given by a purely geometrical expression in terms of ΔL, the length variation of the cylinder sucked inside the pipette relative to the reference measurement, and of a corrective factor γ which arises when L is non-zero in the reference configuration [72]. Meanwhile, the average applied tension can be related to the pressure difference ΔP = P^2_out − P^1_out through the Young-Laplace equation [73]. For a very thin interface under tension τ, whose principal curvature radii are R′ and R″, it states

ΔP = τ (1/R′ + 1/R″) .                (1.32)

For the system shown in Fig. 1.27, applying this relation to the spherical cap inside the pipette and to the vesicle yields the standard aspiration relation

τ = ΔP R_1/[2 (1 − R_1/R_2)] .                (1.35)

For very small pipettes or for vesicles under very weak tension, this relation must be corrected [74]. Through this technique, one can apply a wide range of tensions to membranes, from very small ones (∼ 10^−9 N/m) up to rupture tensions (∼ 10^−2 N/m) [55], [75]. Theoretically, calculations in the macrocanonical ensemble predict two regimes: one at low tension, where the projected-area gain comes from the flattening of fluctuations, and thus [52], [73] (see section 2.4 for a detailed derivation)

Δα = (k_B T/8πκ) ln(σ^f/σ^i) ,                (1.36)

where σ^{i/f} is respectively the Lagrange multiplier of the initial/final configuration; and one at high tension, where the area gain arises mainly through stretching, and thus

Δα = (σ^f − σ^i)/K .                (1.37)

Even though these predictions involve the non-measurable Lagrange multiplier σ, in experiments it is commonly assumed that σ ≈ τ [72], [73], [76]. As a consequence, by plotting τ as a function of the variation of the projected area, one can measure κ and K (see the example in Fig. 1.28: in the low-tension region, κ is obtained from a fit of ln τ versus the projected-area variation; the same data plotted on a linear scale give the area compressibility K through a fit in the high-tension region using eq. (1.37), K = 450 ± 85 mN/m in that experiment, under the assumption σ ≈ τ).
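As an illustration of the aspiration relation above, a minimal sketch converting a suction pressure into a membrane tension; the pressure and radii are made-up values.

```python
# Sketch of the aspiration relation tau = dP*R1 / (2*(1 - R1/R2)),
# used to convert a suction pressure into a membrane tension.
def pipette_tension(dP, R1, R2):
    """Membrane tension (N/m) from suction pressure dP (Pa),
    pipette radius R1 (m) and vesicle radius R2 (m)."""
    return dP * R1 / (2.0 * (1.0 - R1 / R2))

dP = 50.0        # Pa, suction pressure (illustrative)
R1 = 2e-6        # m, pipette radius
R2 = 15e-6       # m, vesicle radius
print(f"tau = {pipette_tension(dP, R1, R2):.2e} N/m")  # ~5.8e-5 N/m
```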
Contour analysis experiments

The aim of these experiments is to determine r and κ by studying the mean squared amplitude of the fluctuating modes, as seen in section 1.3.6. Some experiments were made in planar geometry, using BLMs (black lipid membranes). As explained before, these membranes tend to be too tense.

Adhesion of vesicles

The adhesion of membranes is very important for tissue formation. At the cellular level, the adhesion between the membrane and the cytoskeleton helps to regulate the formation of vesicles and lamellipodia [79]. It also plays an important role in exocytosis and endocytosis [80]. In the context of the physics of liquids, the adhesion of liquid drops to solid substrates is traditionally used to study tensions. Inspired by these experiments, one finds in the literature a wide variety of papers on the adhesion of vesicles among themselves [81], [82] and on the adhesion of vesicles to a solid substrate [83], [84], [85], [86]. Here, we will focus on experiments dealing with the interaction of vesicles of radius R_ves with a flat solid substrate. The vesicle, usually filled with an aqueous solution denser than the suspension medium, is attracted towards the bottom surface. At equilibrium, it adopts a deformed shape as shown in Fig. 1.22, with a flat region near the substrate. This region, of radius R_a, is considered adhered to the substrate. Frequently, the bottom surface is made of glass, since the technique of RICM (reflection interference contrast microscopy) is very popular to image the adhesion region. The adhesion is ruled by the interplay of attractive and repulsive interactions, such as:

• the short-range van der Waals attractive potential between the membrane and the substrate, which can be corrected to take into account the screening due to the presence of ions in the suspension medium;
• the attractive gravitational potential;
• the repulsive effective interaction coming from the reduction of the entropy of the membrane: the substrate imposes a spatial restriction that limits the membrane fluctuations;
• the short-range steric repulsion coming from the lipids;
• in some cases, the substrate can be coated with polymers [87]; one must then consider a supplemental steric repulsion coming from the polymer coating of the substrate;
• it is also possible to cover the substrate with a piece of membrane containing proteins that attach to specific proteins embedded in the vesicle's membrane [85], [88]; in this case, there are supplemental attractive interactions.

Two examples of resulting potentials can be seen in Fig. 1.30 (one of them, from [84], includes a gravity contribution V_grav, a van der Waals contribution V_VdW and an entropic repulsive term V_steric, and presents a single shallow minimum). Depending on the resulting potential, one can find two types of adhesion:

1. Weak adhesion: in this case, the adhering patch fluctuates strongly at a distance s(x) well above the surface, as shown in Fig. 1.33(b). It corresponds to a shallow minimum of the adhesion potential (see Fig. 1.30). When it is a local minimum, it is also said that the vesicle is in a pre-nucleation state. In this case, one can measure the fluctuation spectrum of the adhering region. Considering the rest of the vesicle as a lipid reservoir and approximating the adhesion energy as quadratic near ⟨s⟩, which is justified given Fig. 1.30, the Hamiltonian up to order two is given by [84]

H = ∫_{S_adhe} d²r [ (κ/2)(Δh)² + (σ/2)(∇h)² + (V″/2) h² ] ,

where h(x) = s(x) − ⟨s⟩, V″ is the coefficient of the harmonic approximation of the adhesion energy and S_adhe is the projected surface of the adhering portion of the vesicle. Following the same reasoning presented in section 1.3.6, the mean square amplitude of each mode is given by

⟨|h(q)|²⟩ = k_B T/[S_adhe (V″ + r q² + κ q⁴)] .                (1.39)

Note that we have substituted σ by its macroscopic counterpart r, which is experimentally measurable (see further details at the end of section 1.3.6). By measuring the fluctuation spectrum, it is thus possible to determine r, κ and V″.
2. Strong adhesion: the vesicle is very near the substrate (less than 10 nm away). The membrane fluctuations are barely detectable. The adhesion energy is higher, corresponding to a deep minimum of the adhesion potential.

In Fig. 1.31, we show some typical RICM images of an adhering vesicle that has both weakly and strongly adhering patches. From these images, the height profile of the adhesion region s(x) can be reconstructed. One can subsequently study the mechanics of the membrane.

Adhesion mechanics

Let us first recall some results concerning liquid drops. As discussed in section 1.2.2, the contact angle θ_c that liquid drops make with solid substrates is very sharp. It is determined by the solid-liquid tension γ_SL, the solid-gas tension γ_SG and the liquid-gas tension γ ≡ γ_LG. The mechanical equilibrium, illustrated in Fig. 1.32, gives the Young relation

γ_SG = γ_SL + γ cos θ_c .                (1.40)

The energy variation per unit of contact area between the liquid and the solid, also known as the adhesion energy per unit area, is given by the Young-Dupré relation:

W_A = γ (1 + cos θ_c) .                (1.41)

Adhering vesicles were first theoretically studied in detail by Seifert and Lipowsky in 1990 [80]. They considered a free energy containing a contribution from curvature, a term −W_A (πR_a²) corresponding to the adhesion energy, plus the area and volume constraints. By minimizing the free energy, they derived the equilibrium shapes, which shared two features:

• a contact angle θ_c = π due to the bending rigidity;
• a curvature at the contact given by C_contact = (2 W_A/κ)^{1/2} .

They argued that, in general, one could not expect to use the Young-Dupré relation to link W_A and the lateral tension τ of the membrane, due to the effects of the bending rigidity. In the limit of small bending rigidity, however, the vesicle becomes a spherical cap for an internal pressure bigger than the outer one, with a rounded contact region of size R_c < R_ves. One can thus define an effective contact angle θ_eff (see Fig. 1.33(a)) that obeys an analogue of the Young-Dupré equation,

W_A = τ (1 + cos θ_eff) .                (1.43)

In principle, for a vesicle under these conditions, one could measure R_c and θ_eff and thus deduce W_A and τ. In practice, however, as we will see later, measuring these quantities can be very tricky and imprecise. An alternative was thus proposed by Bruinsma [89]. He studied the equilibrium of the forces due to the bending rigidity and the tension near the rim of the contact region and obtained the height profile h(x) of the membrane, with x = 0 at the contact point: a profile that crosses over, on the characteristic length

λ = (κ/τ)^{1/2} ,                (1.45)

from a bending-dominated region to a tension-dominated one. The length λ separates two regions: for x < λ, the bending rigidity dominates and the membrane is thus curved; for x > λ, the tension dominates and the membrane approaches a straight line (see Fig. 1.34). Experimentally, from the height profile of the membrane, one can obtain θ_eff and λ. Using eq. (1.45), one is thus able to deduce τ and subsequently W_A through the Young-Dupré relation [88]. This method, however, presents a serious limitation: if the tension is very large, the length λ is undetectable [86]. In the next chapter, we will discuss some experiments using these techniques.
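A minimal sketch of the corresponding analysis, assuming that λ = (κ/τ)^{1/2} has been read off a measured height profile and that the Young-Dupré analogue eq. (1.43) applies; κ, λ and θ_eff below are illustrative inputs.

```python
import math

# Sketch: deducing tau and W_A from an adhering-vesicle height profile,
# using lambda = sqrt(kappa/tau) and W_A = tau*(1 + cos(theta_eff)).
kT = 4.1e-21
kappa = 20 * kT                 # J, bending rigidity (assumed)
lam = 0.5e-6                    # m, crossover length read off the profile
theta_eff = math.radians(140)   # effective contact angle (illustrative)

tau = kappa / lam ** 2                   # N/m, from eq. (1.45)
W_A = tau * (1 + math.cos(theta_eff))    # J/m^2, from eq. (1.43)
print(f"tau = {tau:.2e} N/m, W_A = {W_A:.2e} J/m^2")
```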
Nanotube extraction

Membrane nanotubes, also called tethers, are cylindrical structures whose radii range from a few up to hundreds of nanometers, while their length can reach tens of micrometers. In living cells, they are formed by localized forces generated by molecular motors or by polymerizing cytoskeleton filaments, such as microtubules. These tethers are suspected to play a major role in the intracellular transport of vesicles [91] and in the communication between cells, since they also form between different cells, and proteins were shown to pass through these tunneling nanotubes (TNT) [92] (see Fig. 1.35). Recently, some evidence has been found that TNT may even be crucial in the spreading of the HIV virus [93]. In vitro tube extraction was used to evaluate the adhesion energy between the cell cytoskeleton and the cell membrane. In these experiments, it was also shown that these tethers do not contain cytoskeleton [79], [95]. Here we will restrict ourselves to experiments of tube extraction from model membranes (see Fig. 1.36(b), which illustrates some methods used to extract tubes from GUVs: vesicles under hydrodynamic flow [99], vesicles held by a micropipette and attached to a mobile glass or magnetic bead [96], [100], and nanotube extraction by molecular motors [101]). In typical experiments with GUVs, one cannot optically resolve the tube, even though its length is readily measurable (see Fig. 1.36(a), a typical extraction sequence imaged by differential interference contrast microscopy to enhance contrast [98]). The force f needed to extract the tube can be directly measured from the force applied on the glass bead (for an optical tweezer) or on the magnetic bead. Moreover, if the vesicle is held by a micropipette, as in the experimental apparatus shown in Fig. 1.36(b), the tension τ is measured through the applied pressure via eq. (1.35) [96]. Another technique consists in extracting nanotubes of controlled length from BLMs. With this configuration, it is possible to apply a difference of electrical potential between the interior and the exterior of the tube. One can thus deduce the radius of the tube and the tension τ of the membrane [97]. From the theoretical point of view, as these tubes are so thin, it is reasonable to consider the adjacent GUV or BLM as a lipid reservoir. For a symmetrical membrane, the tube free energy is thus given by eq. (1.15) plus a contribution coming from the force f that holds the tube. For a cylindrical tube of radius R and length L, one has

F = 2πRL [ σ + κ/(2R²) ] − f L .                (1.46)

The equilibrium radius R_0 and force f_0 are given by the minimization of eq. (1.46) with respect to R and L respectively, yielding

R_0 = (κ/(2σ))^{1/2}                (1.47)

and

f_0 = 2π (2κσ)^{1/2} .                (1.48)

Eq. (1.47) shows clearly that the radius is determined by a competition between the tension, which tends to create a very thin tube, and the bending rigidity, which opposes high curvatures [102]. Interestingly, the result given in eq. (1.48) highlights the difference between a membrane and a liquid interface: if we had a tube constituted by a film of liquid, we should expect f = 2πR_0 γ. The factor 2 in f_0 (note that f_0 = 2 × 2πR_0 σ) comes from the curvature energy, present only in membranes. One must keep in mind that these results hold only if thermal fluctuations are neglected and the tube is a perfect cylinder. We will discuss this point in chapter 5. (Fig. 1.37 shows such a measurement: the vertical axis gives the force needed to extract the tube, the horizontal axis the square root of the tension τ measured through the pressure difference in the micropipette. The data show a good linear fit, indicating that in this experiment it was apparently justified to neglect thermal undulations and to suppose σ ≈ τ. The bending rigidity obtained through the fit is approximately 25 k_B T [96].)
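The competition expressed by eqs. (1.47) and (1.48) is easy to evaluate; the sketch below uses κ = 25 k_B T, the value obtained in the experiment of Fig. 1.37, and a few illustrative tensions spanning the micropipette range.

```python
import math

# Tube equilibrium radius R0 = sqrt(kappa/(2*sigma)) and extraction force
# f0 = 2*pi*sqrt(2*kappa*sigma), eqs. (1.47)-(1.48). Illustrative tensions.
kT = 4.1e-21
kappa = 25 * kT                      # J, as in the experiment of Fig. 1.37
for sigma in (1e-6, 1e-5, 1e-4):     # N/m
    R0 = math.sqrt(kappa / (2 * sigma))
    f0 = 2 * math.pi * math.sqrt(2 * kappa * sigma)
    print(f"sigma = {sigma:.0e} N/m -> R0 = {R0 * 1e9:5.0f} nm, "
          f"f0 = {f0 * 1e12:5.2f} pN")
```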
By using eq. (1.47), we can estimate that the tube radii in this particular experiment are rather large, of the order of tens up to a couple of hundred nanometers. From eq. (1.48), we see that one could obtain κ by measuring f_0 and σ. In GUV experiments, it is usually assumed that the measured extraction force f is well approximated by f_0, which means neglecting thermal undulations. Besides, the tension τ measured through the micropipette aspiration is assumed equivalent to σ. One then plots f as a function of √τ, as in Fig. 1.37, and deduces κ from a linear fit. The other way of obtaining κ, using eq. (1.47), is explored in the BLM experiments, by once more assuming that τ ≈ σ. A recent experiment by Bashkirov [97] found very thin tubes, about 10 nm in radius, indicating a very tense membrane, and deduced values of κ compatible with previous results.

Sum-up

The table below sums up the techniques presented in this section and used to mechanically probe membranes. In the second column, the directly measurable quantities are listed; in the third column, the quantities deduced from a fit or from the use of theoretical equations; in the last column, the main assumptions made in order to obtain the results of the third column. These assumptions generally involve the three tensions τ, σ and r. In this work, we will try to examine these assumptions in detail. In particular, we will quantify the difference between τ and σ for the three mainly studied geometries and discuss under which conditions these suppositions are justifiable. In chapter 5, we will also examine the role of thermal fluctuations in the force needed to extract a tube.

Technique               Direct measure           Used to deduce     Usual assumption
micropipette            ΔP, A_p                  τ, Δα, κ, K        σ ≈ τ
contour analysis        ⟨|h(q)|²⟩                r, κ               -
adhesion                θ_eff, ⟨|h(q)|²⟩, ⟨s⟩    r, κ, W_adhes      τ ≈ r
tube extraction (GUV)   ΔP, L, f                 τ, κ               τ ≈ σ and f ≈ f_0
tube extraction (BLM)   ΔV, L                    R_0, κ             τ ≈ σ and R ≈ R_0

Stress tensor for a planar membrane

Here we introduce the stress tensor for planar membranes. In particular, we will derive the projected stress tensor. This tool is very useful, since it allows not only the direct calculation of the average mechanical tension τ, but also the evaluation of the fluctuations of this tension due to thermal fluctuations, which has never been done. The derivation presented here will inspire our derivation of the projected stress tensor for other geometries in the following chapters. Note that even though one has no reason not to consider the Gaussian curvature in the energy of open membranes, we will show that the stress tensor does not depend on it.

Stress tensor in the local frame, Σ̂

Consider a local frame (X, Y, Z) on a membrane, whose first two axes are parallel to the principal curvature directions and whose third one is parallel to the normal of the membrane. Consider now an imaginary infinitesimal cut of length dℓ′ and normal ν = ν_X e_X + ν_Y e_Y that separates the membrane into regions 1 and 2 (see Fig. 1.38(a)). Region 1 exerts a force dφ_{1→2} over region 2 given by

dφ_{1→2} = Σ̂ · ν dℓ′ .                (1.49)

This relation defines the local stress tensor Σ̂, a tensor with 3 × 2 = 6 components, since the vector ν has only two components. For the Helfrich Hamiltonian, one has [103], [4]

Σ̂_XX = σ + (κ/2) C² − κ C C_X ,   Σ̂_YX = 0 ,   Σ̂_ZX = −κ ∂_X C ,

and similarly for the components along Y, where C_X and C_Y are the principal curvatures parallel to e_X and e_Y respectively, C = C_X + C_Y and ∂_i stands for the derivative with respect to i.
(Fig. 1.38: (a) the imaginary infinitesimal cut of length dℓ′, whose normal ν is contained in the (X, Y) plane, separates regions 1 and 2; region 1 exerts a three-dimensional force dφ_{1→2} over region 2. (b) Components of the stress tensor in the local frame.)

Projected stress tensor Σ

Due to thermal fluctuations, both the tangent frame and dℓ′ are not constant. It is thus convenient to introduce the projected stress tensor Σ, which relates the force dφ_{1→2} to an imaginary infinitesimal projected cut through

dφ_{1→2} = Σ · m dℓ ,

where dℓ is the length of the cut's projection on a reference fixed plane Π, (x, y, z) is an orthonormal basis and m = m_x e_x + m_y e_y is the normal to the cut's projection on the plane, pointing towards region 1 (see Fig. 1.39). As before, Σ is a 6-component tensor. The advantage of this definition is that one can evaluate the average force exerted between two regions by simply evaluating ⟨Σ⟩. It thus gives a straightforward tool to evaluate τ. We derive Σ by studying the work needed to produce a deformation [4]. An alternative derivation is given in appendix A. First, we consider a piece of membrane weakly departing from a plane, described in the Monge gauge by its height h(r) = h(x, y), so that we can neglect derivatives of order higher than two in h. The general energy is thus given, up to order two, by

H = ∫_Ω dx dy  e(h_i, h_ij) ,                (1.52)

where Ω is the domain of the projected plane over which the membrane is defined, ∂Ω(x, y) being the curve that delimits Ω (see Fig. 1.40). In this section, Latin indices ∈ {x, y}, h_i ≡ ∂h/∂i, h_ij ≡ ∂²h/(∂i∂j) and summation over repeated indices is implicit. Imagine now that we impose an arbitrary small displacement δa = δa_x e_x + δa_y e_y + δa_z e_z on every point of the membrane, so that h(r) → h′(r) = h(r) + δh(r). Besides, imagine that this displacement keeps the normal along the boundaries of the membrane constant, so that torques perform no work. On one hand, at equilibrium, the bulk terms vanish and the energy variation reduces to an integral over the boundary,

δH = ∮_{∂Ω} ds (· · ·) ,                (1.53)

where ds is the length of an infinitesimal element of the curve ∂Ω(x, y). On the other hand, in terms of the stress tensor, we have

δH = ∮_{∂Ω} δa · Σ · m ds .                (1.54)

One can then obtain Σ by comparing eq. (1.53) and eq. (1.54). In order to do so, one must express δh and δh_j on the boundary in terms of δa. As shown in Fig. 1.40, we have h′(r + δa_i(r) e_i) = h(r) + δa_z(r). Up to first order in δh, it is easy to deduce δh = δa_z − δa_j h_j. Finally, one has to impose that the normal n at the boundary is kept constant, which yields h′_k(r + δa_i(r) e_i) = h_k(r). Again, up to first order, one has ∂_k δh = −δa_j h_jk. Put together, these results lead to

Σ_ij = e δ_ij − h_i (∂e/∂h_j) + h_i ∂_k(∂e/∂h_jk) − (∂e/∂h_jk) h_ik ,                (1.55)
Σ_zj = ∂e/∂h_j − ∂_k(∂e/∂h_jk) .                (1.56)

In the case of the Helfrich Hamiltonian, one has

Σ_xx = σ + (σ/2)(h_y² − h_x²) + (κ/2)(Δh)² − κ h_xx Δh + κ h_x ∂_x(Δh) ,                (1.58)
Σ_yx = −σ h_x h_y − κ h_xy Δh + κ h_y ∂_x(Δh) ,                (1.59)
Σ_zx = σ h_x − κ ∂_x(Δh) .                (1.60)

The other components of the tensor can be obtained by permutation of x and y. Remark that these expressions are valid up to order two in h and that the Gaussian curvature gives no contribution to the stress tensor.

As we have seen in the last chapter, membranes are very particular systems from the mechanical point of view: they are liquid, but rigid; they rupture very easily under stretching and they fluctuate a lot, even at sub-optical scales. Accordingly, the term surface tension has always been rather confusing. First, it refers to the energy needed to bring a bunch of phospholipids into contact with the aqueous medium, which we denote γ. As these molecules are amphiphilic, there is almost no energetic cost for creating an interface. Consequently, one can find in the literature statements like "a membrane has vanishing surface tension" [60].
Secondly, the expression stands for the tension τ that one can mechanically apply to a membrane, for instance by aspirating it with a micropipette or by extracting a nanotube. As discussed in section 1.2.1, except in extreme situations, this tension has an entropic origin, coming from the flattening of thermal fluctuations. Thus, it is also called the effective mechanical tension. At last, surface tension also denotes the Lagrange multiplier σ one adds to the Hamiltonian in order to fix the total membrane area, as we have done in section 1.3.6. In this case, the tension is more like a chemical potential associated to the total membrane area. The tension σ is not experimentally measurable, but its large-scale counterpart r, renormalized by fluctuations, is measurable through the q² dependence of the fluctuation spectrum. From an experimental point of view, it is fundamental to determine the relation between r, σ and τ. In particular, it is very important to determine under which conditions these quantities can be assumed identical. Indeed, experimentally one usually measures r or τ, whereas the theoretical predictions frequently involve σ, which is not measurable. The equality between τ, σ and r is currently taken for granted when interpreting data, as one can see in the sum-up table of section 1.4.5, even though there is no support for this premise. Many theoretical articles were written in order to clarify this question [2], [68], [104], [3]. In most cases, the authors tried to derive r and τ from the free energy F. This route is however very tricky, since one needs to consider terms up to O(h⁴) in order to evaluate r. In this case, the naive measure presented in section 1.3.6 must be subtly corrected [2]. Besides, the definition of the effective mechanical tension τ from the free energy is not so clear, and slightly different alternatives to the definition presented in eq. (1.7) were proposed. In this chapter, we address the question of the relation between τ and σ for symmetrical planar membranes in contact with a lipid reservoir. We evaluate τ using the projected stress tensor introduced at the end of the last chapter, in section 1.5.2. This calculation is much more straightforward, since one avoids the problems related to the choice of the measure. Besides, the definition of τ in terms of the projected stress tensor is unique: τ is simply given by the average of the latter. In the first section, we show that in general we can assume τ = σ − σ_0, where σ_0 is a constant, non-negligible for small tensions. In section 2.2, we compare our result to the ones derived by Cai et al. [2] and by Imparato [3] by differentiating the free energy with respect to the projected area A_p. In his derivation, Imparato used the definition presented in eq. (1.7), while Cai et al. used a slightly different definition, obtaining consequently a different result. There, we show that our result coincides with the one of Imparato, which gives support to the definition presented in the last chapter, where the derivative is taken with the total number of lipids constant. Besides, we question the previous demonstration by Cai et al. that τ = r, since their definition of τ seems less suitable. We propose then that in general we should have three different values for τ, σ and r. In order to check this prediction, we present a simple numerical experiment in section 2.3. In sections 2.4 and 2.5 we discuss some consequences for experiments, namely those involving micropipettes, introduced in section 1.4.1.
As τ is indeed different from σ, we propose corrections to eq. (1.36), presented in the last chapter. We conclude in section 2.5 with the description of the first recent numerical and experimental evidences that τ ≠ σ. All results presented in this chapter were obtained under the direction of Jean-Baptiste Fournier and published in [1].

Evaluation of τ from the stress tensor

Consider a planar membrane, well described by the Helfrich Hamiltonian given in eq. (1.15), whose projected area on a plane Π parallel to the average plane of the membrane is A_p. This membrane is not stretched and departs very weakly from a plane. Therefore, we use the Monge gauge and develop H up to order two, obtaining the Hamiltonian given in eq. (1.16), the average ⟨|h(q)|²⟩ given in eq. (1.29) and the projected stress tensor given in eqs. (1.58)-(1.60). Consider now a cut of length L on Π parallel to e_y, so that the normal to the projected cut is simply m = e_x, as shown in Fig. 2.1. The force exchanged through the cut is

f = ∫ dy Σ · e_x .                (2.2)

The thermal average of f, denoted by the brackets ⟨·⟩, is evaluated using the Fourier transform

h(r) = Σ_{n,m} h_{n,m} e^{i q·r} ,  with q = (2π/√A_p)(n, m) ,  n, m ∈ ℤ .

Note that as h(r) is real, one has h_{−n,−m} = h*_{n,m}, where the symbol * indicates the complex conjugate. The mode n = 0 and m = 0 corresponds to a simple translation and gives no contribution to the energy; it will therefore be omitted throughout this section. Using this definition of the Fourier transform, the Hamiltonian for a weakly fluctuating membrane in contact with a lipid reservoir, introduced in section 1.3.6, becomes

H = σ A_p + (A_p/2) Σ_q (σq² + κq⁴) |h_q|² ,                (2.6)

where q varies between q_min = 2π/√A_p and Λ = 2πN_max/√A_p ≈ 1/a, where a is a microscopic cut-off comparable to the membrane thickness. The correlation function is given by

⟨h_q h_{q′}⟩ = δ_{q,−q′} k_B T/[A_p (σq² + κq⁴)] ,                (2.7)

where we have used the result displayed in eq. (1.29) to obtain the last equality. It is a straightforward calculation to evaluate averages using the correlation function. As an example, we do a step-by-step evaluation of the average of Σ_xx:

⟨Σ_xx⟩ = σ + (σ/2)(⟨h_y²⟩ − ⟨h_x²⟩) + (κ/2)⟨(Δh)²⟩ − κ⟨h_xx Δh⟩ + κ⟨h_x ∂_x(Δh)⟩
       = σ + κ⟨h_x ∂_x(Δh)⟩
       = σ − κ Σ_q q_x² q² ⟨|h_q|²⟩ ,

where we used, in the first and in the last steps, the fact that by symmetry ⟨h_x²⟩ = ⟨h_y²⟩ and ⟨h_xx²⟩ = ⟨h_yy²⟩, so that the purely bending terms cancel on average. By the same reasoning, one can show that ⟨Σ_yx⟩ = 0 and ⟨Σ_zx⟩ = 0, as expected given the symmetry of the system. We have thus the effective tension τ = ⟨Σ_xx⟩, which relates to the average force through ⟨f⟩ = τ L e_x, with

τ = σ − (k_B T/A_p) Σ_q κ q_x² q²/(σq² + κq⁴) .                (2.9)

In the thermodynamic limit, A_p is very large and the sum over q becomes an integral, whose calculation leads to

τ = σ − (k_B T Λ²/8π) [ 1 − (σ/σ_r) ln(1 + σ_r/σ) ] ,                (2.10)

where σ_r = κΛ². Numerically, for typical values a ≈ 5 nm and κ ≈ 10^−19 J, one obtains σ_r ≈ 5 × 10^−3 N/m, which is of the same order of magnitude as the rupture tension of membranes [55]. Fig. 2.2 shows the difference σ − τ, normalized by

σ_0 ≡ k_B T Λ²/(8π) ,

as a function of σ/σ_r. For tensions smaller than 10^−2 σ_r, we see that τ ≃ σ − σ_0. Beyond this limit (the shaded area in Fig. 2.2), the tension is relatively high and we expect corrections coming from the stretching of the membrane. For the previous values of a and κ, and taking k_B T ≈ 4 × 10^−21 J, we obtain σ_0 ≈ 5 × 10^−6 N/m. As tensions as small as τ ≈ 10^−8 N/m are measured in micropipette experiments, this correction may be non-negligible (see Fig. 2.3). We will discuss the consequences of this prediction for experiments in section 2.4.
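Eq. (2.10) is straightforward to evaluate numerically. The sketch below uses the typical parameter values quoted above and shows that, at low σ, τ approaches σ − σ_0 (and can even become slightly negative).

```python
import numpy as np

# Numerical evaluation of eq. (2.10): tau = sigma - (kT*Lam^2/(8*pi)) *
# [1 - (sigma/sigma_r)*ln(1 + sigma_r/sigma)], with sigma_r = kappa*Lam^2.
kT = 4e-21          # J
kappa = 1e-19       # J
a = 5e-9            # m, microscopic cut-off
Lam = 1.0 / a       # m^-1
sigma_r = kappa * Lam ** 2             # ~5e-3 N/m
sigma_0 = kT * Lam ** 2 / (8 * np.pi)  # ~6e-6 N/m

for sigma in np.logspace(-8, -3, 6):
    tau = sigma - sigma_0 * (1 - (sigma / sigma_r) * np.log1p(sigma_r / sigma))
    print(f"sigma = {sigma:.1e} N/m -> tau = {tau:+.2e} N/m")
# For sigma << sigma_r the output approaches tau = sigma - sigma_0.
```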
Derivation from the free energy

Here we re-derive the result given in eq. (2.10) by differentiating the free energy. To begin, we evaluate the free energy as in references [2] and [3]. By definition, we have F = −k_B T ln(Z), where Z is the partition function, given by

Z = ∫ D[h] e^{−H/(k_B T)} .                (2.13)

As discussed in section 1.3.6, one needs a measure to evaluate this integral. We will consider here the naive measure, which is justified up to first order in the temperature and up to second order in h (further discussion on the subtleties of the measure can be found in [2] and [105]). We discretize the projected plane Π into N² squares of area ā², so that the height of each of these squares is denoted h_{p_x,p_y} ≡ h(p_x ā e_x + p_y ā e_y). The naive measure reads

D[h] = ∏_{p_x,p_y} dh_{p_x,p_y}/λ .                (2.14)

The factor λ is a vertical quantum introduced to keep Z dimensionless.

Naive measure in Fourier space

In section 1.3.6, we used this measure to evaluate averages, but we did not derive F explicitly. To do so, we prefer here to work in Fourier space, and we thus need the equivalent of the measure above in this space. As in the last section, let us consider the Fourier transform

h_{p_x,p_y} = Σ_{n,m} h_{n,m} e^{i q·r_{p_x,p_y}} .                (2.15)

Remark that only half of the total number of modes are independent, since h_{−n,−m} = h*_{n,m}. In terms of these independent modes, the measure is thus

D[h] = J ∏′_{n,m} (dh^R_{n,m} dh^I_{n,m}/λ²) ,

where the superscripts R and I stand, respectively, for the real and the imaginary part of h_{n,m}, the primed product runs over the independent modes and J is the Jacobian of the transformation. To determine J, we evaluate the partition function of a simple Gaussian Hamiltonian in both representations: with the measure given in eq. (2.14), the Gaussian integrals are performed directly in real space (eq. (2.19)); using the definition presented in eq. (2.15), the same quadratic Hamiltonian is rewritten in terms of the Fourier modes and the partition function is evaluated again (eq. (2.21)). The partition function must be the same in both cases, which implies J = 1/ā^{N²}. Summing up, in Fourier space, for a weakly fluctuating membrane, the naive measure is equivalent to

D[h] = ā^{−N²} ∏′_{n,m} (dh^R_{n,m} dh^I_{n,m}/λ²) .                (2.22)

Evaluation of F and discussion

Using the Hamiltonian given in eq. (2.6) and the partition function given in eq. (2.13) with the measure (2.22), the Gaussian integrals can be evaluated. Accordingly, the free energy is given, to lowest order in T, by

F = σ A_p + (k_B T/2) Σ_q ln[ (σq² + κq⁴) ā²λ²/(2π k_B T) ] ,

where we remind that λ is a quantum discretizing the membrane vertical displacements. Equivalently, highlighting the dependence of F on A_p, one obtains

F = σ A_p + (k_B T/2) Σ_{q̂} ln[ (σ q̂²/A_p + κ q̂⁴/A_p²) (A_p/N) λ²/(2π k_B T) ] ,                (2.25)

where we have used ā² = A_p/N = 4π/Λ², N being the total number of modes or degrees of freedom, and q̂ = 2π(n, m). With the definition presented in eq. (1.7), one obtains

τ = ∂F/∂A_p |_N = σ − (k_B T/(2A_p)) Σ_q κq⁴/(σq² + κq⁴) ,                (2.28)

where the derivative is taken keeping the number of modes constant: indeed, once the cut-off Λ is fixed, having a fixed total number of particles is equivalent to having a fixed total number of modes. This result coincides with our previous derivation (eq. (2.10)) and gives some evidence of the correctness of the derivation presented in [3]. In their work, Cai et al. instead used a slightly different definition for the effective tension,

τ_Cai = ∂F_lim/∂A_p ,                (2.31)

where F_lim is the free energy in the limit of very large membranes. In this case, the sum in eq. (2.25) becomes an integral before differentiation and one obtains a different result. It follows that one must differentiate with respect to A_p keeping the number of modes constant, and only afterwards take the thermodynamic limit, if needed. Note that with the projected stress tensor these subtleties are avoided, since one deals only with the straightforward evaluation of averages.
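The equivalence between the two routes can be checked numerically: the sketch below differentiates the mode-sum free energy of eq. (2.25) with respect to A_p, at fixed number of modes, and compares the result with the stress-tensor expression of eq. (2.28). The discretization and parameter values are illustrative, and constant terms in F are dropped since they do not affect the derivative.

```python
import numpy as np

# Check that tau = dF/dAp at fixed number of modes reproduces eq. (2.28).
kT, kappa, sigma = 4e-21, 1e-19, 1e-5   # J, J, N/m (illustrative)
N = 100                                  # modes per direction (kept fixed)

def q2_modes(Ap):
    n = np.arange(-N, N + 1)
    nx, ny = np.meshgrid(n, n)
    q2 = (2 * np.pi) ** 2 * (nx ** 2 + ny ** 2) / Ap
    return q2[q2 > 0]                    # drop the zero mode

def F(Ap):
    """Ap-dependent part of the free energy (naive measure), eq. (2.25)."""
    q2 = q2_modes(Ap)
    # ln[(sigma q^2 + kappa q^4) * abar^2], abar^2 = Ap / (number of modes)
    return sigma * Ap + 0.5 * kT * np.sum(
        np.log((sigma * q2 + kappa * q2 ** 2) * Ap / q2.size))

Ap, dAp = (10e-6) ** 2, 1e-14
tau_free_energy = (F(Ap + dAp) - F(Ap - dAp)) / (2 * dAp)

q2 = q2_modes(Ap)
tau_stress = sigma - kT / (2 * Ap) * np.sum(
    kappa * q2 ** 2 / (sigma * q2 + kappa * q2 ** 2))
print(tau_free_energy, tau_stress)       # the two values agree
```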
What about r? In their work, Cai et al. also showed that one should have r = τ. Here we present their reasoning in detail and argue that their conclusion follows from the fact that their definition of τ is slightly different from ours (compare eq. (2.31) with the thermodynamic limit of eq. (2.28)). Thus, in general, one should have r ≠ τ. First of all, as in section 1.3.6, they introduced a field m(r) conjugated to the height in order to fix a general average shape ⟨h(r)⟩ = h̄(r) ≡ h̄, obtaining

H_m = H − ∫ d²r m(r) h(r) ,

where H is the physical Hamiltonian given in eq. (1.15). The corresponding partition function is

Z[m] = ∫ D[h] e^{−H_m/(k_B T)} ,

and the effective action, i.e., the Legendre transform of the free energy, is given by

F_eff[h̄] = F[m] + ∫ d²r m(r) h̄(r) .

The average height of the membrane is given by

h̄(r) = (k_B T/Z) δZ/δm(r) ,                (2.35)

where δZ/δm(r) stands for the functional derivative of the partition function with respect to the field m at the point r. For m = 0, we have a simple Gaussian integral and thus h̄(r)|_{m=0} = 0, which corresponds to the case of a planar membrane. Differentiating the free energy with respect to h̄(r) and using eq. (2.35), one obtains

δF_eff/δh̄(r) = m(r) .

For the case m = 0, the correlation function is given by

⟨h(r)h(r′)⟩ = k_B T [ δ²F_eff,lim/(δh̄(r)δh̄(r′)) ]^{−1} ,                (2.37)

where F_eff,lim is the effective action in the limit of large membranes. Suppose now that the average shape h̄(r) is tilted. The free energy should remain the same, since the physical area of the membrane has not changed. The dependence of the free energy on h̄ to lowest order should thus be [105]

F_eff = (τ_Cai/2) ∫ d²r (∇h̄)² + · · · + · · · ,

where the first ellipsis involves terms O(h̄⁴) and the second involves higher-order derivatives of h̄. Note that the dependence remains the same if one takes the thermodynamic limit. One has thus

δ²F_eff/(δh̄(r)δh̄(r′)) = −τ_Cai Δ_r δ(r − r′) + · · · ,

where δ(x) is the Dirac delta function and Δ_r is the Laplacian calculated at the point r. By definition, the inverse of an operator M is given by

∫ d²r″ M(r, r″) M^{−1}(r″, r′) = δ(r − r′) .                (2.42)

Let us look again at eq. (2.37): the term on the left is the correlation function of a planar membrane, given in general by

⟨h(r)h(r′)⟩ = Σ_q (k_B T/[A_p (r q² + κq⁴)]) e^{i q·(r−r′)} .                (2.43)

Note that here the coefficient r of the quadratic term is in general different from σ. Indeed, to obtain the correlation function given in section 1.3.6, we used the naive measure, while the discussion presented here remains general and valid for any measure. Eq. (2.37) combined with eq. (2.42) thus implies that one should always have r = τ_Cai. Indeed, after a careful study taking into account the measure subtleties, Cai et al. succeeded in proving this assertion. We do not question their proof, but rather their definition of τ, which seems less appropriate, since it does not yield the same results as the stress tensor. With our definition of τ, we have τ ≠ τ_Cai and thus, in general, we should expect r ≠ τ ≠ σ. We will show that this is indeed the case in a simple numerical experiment in section 2.3.

1-D numerical experiment

Here we present a simple numerical experiment proposed to check the results of the two last sections. We have chosen, for simplicity, to simulate the 1-d equivalent of a membrane, i.e., a 1-d filament fluctuating in a 2-d space. Despite the plainness of our numerical system, described in section 2.3.2, we have access to the three tensions r, σ and τ. In section 2.3.3 we present and discuss the compatibility of the numerical data with the theoretical predictions for a filament, derived in section 2.3.1.

The tension τ for a 1-d filament

Let us call e_∥ the average direction of the filament and e_⊥ the perpendicular direction. The filament's length L is fixed by adjusting the conjugated variable σ, and the projected length on e_∥ is denoted L_p. In the Monge gauge, its shape is described by the height h(x) e_⊥, where x is the coordinate along the direction e_∥.
For a weakly fluctuating filament, the energy is given by the 1-d counterpart of eq. (1.15),

H_1D = σ L_p + ∫_0^{L_p} dx [ (σ/2) h_x² + (κ/2) h_xx² ] .

Accordingly, with the Fourier transform

h(x) = Σ_n h_n e^{iqx} ,  q = 2πn/L_p ,  n ∈ ℤ* ,

one has

H_1D = σ L_p + (L_p/2) Σ_q (σq² + κq⁴) |h_q|² ,

where 2πN_max/L_p = Λ ≈ 1/a, a being a microscopic cut-off. It follows that ⟨|h(q)|²⟩ is given by the equivalent of eq. (1.29). In appendix B, we derive the projected stress tensor for a 1-d filament. There, we show that it has just two components, one tangent to the filament direction, Σ^1D_∥, developed up to order two in h, and one perpendicular to it, Σ^1D_⊥, developed up to first order in h:

Σ^1D_∥ = σ − (σ/2) h_x² − (κ/2) h_xx² + κ h_x h_xxx ,
Σ^1D_⊥ = σ h_x − κ h_xxx .

Note that these equations are equivalent to Σ_xx and Σ_zx (eqs. (1.58) and (1.60), respectively) for h_y = 0 and h_yy = 0. In order to evaluate τ_1D ≡ ⟨Σ^1D_∥⟩, we introduce the correlation function for H_1D:

⟨h_q h_{q′}⟩ = δ_{q,−q′} k_B T/[L_p (σq² + κq⁴)] .

We have thus

τ_1D = σ − (σ/2)⟨h_x²⟩ − (κ/2)⟨h_xx²⟩ + κ⟨h_x h_xxx⟩ ,                (2.51)

which, in the thermodynamic limit, becomes

τ_1D = σ − (k_B T/2π) [ 3Λ − (2/ξ) arctan(Λξ) ] ,  ξ = (κ/σ)^{1/2} .                (2.52)

For a ≪ ξ, i.e., for non-extreme tensions, we have simply

τ_1D ≃ σ − 3 k_B T Λ/(2π) .                (2.53)

As for a two-dimensional membrane, the effective tension is smaller than the tension σ and the difference is well approximated by a constant. Numerically, to evaluate τ_1D = ⟨Σ^1D_∥⟩, one should evaluate each of the three averages of eq. (2.51) (⟨h_x²⟩, ⟨h_xx²⟩ and ⟨h_x h_xxx⟩) at the point x = L_p. If however one imposes the filament to remain horizontal at its end, i.e. h_x|_{L_p} = 0, eq. (2.51) becomes simply

τ_1D = σ − (κ/2) ⟨C²_{L_p}⟩ ≡ σ_t ,                (2.55)

where C_{L_p} = h_xx|_{L_p} is the curvature at the filament's end and σ_t stands for the tangential tension. Eq. (2.55) will allow a direct numerical check of τ_1D, as we discuss below.

Numerical system and dynamics

We considered a discretized version of a 1-d filament, constituted of a chain of N rod-like segments of natural length a, each rod representing a coarse-graining of several lipids. We assumed, as an approximation, that all segments have the same length a(1 + ε), so that the total length of the system is L = N a(1 + ε). We wanted a filament with L = N a. Imposing ε = 0 would not allow us, however, to measure σ, which is a fundamental point of this simulation. We have thus considered that the chain is connected to a lipid reservoir, so that ε is free to vary. In order to fix ⟨ε⟩ = 0, the conjugated variable σ had to be properly adjusted, as in eq. (2.44). Each configuration Ω_i of the chain is described by the set {θ_1, ..., θ_{N−1}, ε}, where θ_i stands for the angle that segment i makes with the horizontal axis (see Fig. 2.4; the projected length, indicated in red and denoted L_p, is the length of the chain projected on the end-to-end direction e_R). The last segment is imposed to be always parallel to the end-to-end vector R, defined by

R = a(1 + ε) Σ_i u_i ,                (2.56)

where u_i is the unit vector in the direction of the i-th segment; this constraint guarantees h_x|_{L_p} = 0, so that τ_1D = ⟨Σ^1D_∥⟩ can be easily checked through eq. (2.55). Associated to the vector R, we define the average direction e_R ≡ R/R ≡ e_∥, where R ≡ |R|. Thus we impose u_N = e_R and the projected end-to-end length is given by L_p = R · e_R = R. During the simulation, we considered two kinds of moves:

1. Move A: changing one segment's angle θ_i, which corresponds to the effects of thermal fluctuations on the chain's shape (see Fig. 2.5(a)). In this case, the direction e_R changes and, consequently, the last segment must have its direction corrected so that it remains parallel to e_R;

2. Move E: changing the extension of the segments through ε, which represents the exchange of lipids with the reservoir (see Fig. 2.5(b)).
In addition, an external force f = τ e_R, always parallel to the last segment, is exerted on the chain. The chain's free energy is given by a bending contribution, a contribution coming from the Lagrange multiplier σ, plus a contribution from the external force:

H = (κ/2) Σ_i a(1+ε) C_i² + σ L − f · R ,  with C_i = (θ_{i+1} − θ_i)/[a(1+ε)] ,

where C_i is an approximation of the curvature between two successive segments. Note that the problem is invariant under rotations around the origin O, so that at any moment we can describe the configuration in the Monge gauge. The imposed parameters of the simulation were N, τ (in units of (βa)^{−1}) and κ (in units of β^{−1}). For each τ, σ was adjusted in order to fix ⟨ε⟩ = 0, as discussed above; we detail below how this was done. At the end of section 2.3.1, we argued that in general one should expect r ≠ τ ≠ σ, where r is the coefficient of q² in the fluctuation spectrum

⟨|h_n|²⟩ = k_B T/[L_p (r q² + κ_eff q⁴)] .                (2.60)

In order to measure ⟨|h_n|²⟩ (and thus r), we first performed a rotation by the average angle θ̄, so as to be in the same situation as in section 2.3.1, where x is the coordinate along the axis e_x ≡ e_R. The resulting height function Θ(x) is a series of steps of length cos(θ_i − θ̄), as shown in Fig. 2.6. Note that this function is well defined only when there are no overhangs (|θ_i − θ̄| < π/2 for all segments). With q = 2πn/⟨L_p⟩, the strategy to obtain r was to average |Θ_n|² over a large set of configurations and fit the data with eq. (2.62). Fig. 2.7 shows a representative fit. A sum-up of the variables can be seen in table 2.1.

Numerical dynamics

We used a Monte Carlo method to generate a sufficiently large sample of chain configurations, so that we could evaluate averages with good precision [106]. The configurations were generated through a Markov chain algorithm: from a certain Ω_i, a random move of kind A or E is proposed, generating a new state Ω_{i+1}. In order to respect detailed balance, the configuration Ω_{i+1} is accepted with a probability P(Ω_i → Ω_{i+1}) given by the Metropolis algorithm. At thermodynamic equilibrium, the probability of each configuration is given by the Boltzmann distribution

P(Ω) = e^{−βH(Ω)}/Z ,

where Z is the partition function. The transition probability is then simply given by

P(Ω_i → Ω_{i+1}) = min( 1, e^{−β[H(Ω_{i+1}) − H(Ω_i)]} ) .

In our case, we have:

1. Move A: an angle θ_i of the set {θ_1, ..., θ_{N−1}} is randomly chosen. We propose a new angle θ′_i = θ_i + Δθ, where Δθ = δ_θ × rand(−1, 1), with rand(a, b) a random number with uniform probability distribution between a and b. The new average direction θ̄′ is evaluated and consequently θ′_N = θ̄′. The corresponding variation of the free energy is then computed (when the chosen segment is the first one, the terms involving a preceding segment do not appear). The value of δ_θ is chosen in order to have ≈ 50% acceptance for this kind of move.

2. Move E: a new extension ε′ = ε + Δε, with Δε = δ_ε × rand(−1, 1), is proposed, and the free-energy variation is computed accordingly. Again, δ_ε was chosen in order to have ≈ 50% acceptance for moves of kind E.

For each move of kind E, N − 1 moves of kind A are attempted, in order to ensure that on average every degree of freedom is equally modified. In the following, we will call this sequence a Monte Carlo step.
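For concreteness, here is a minimal sketch of the Metropolis step for moves A and E. It simplifies several details of the actual simulation (in particular, the constraint that the last segment follows e_R and the rotation to the average frame are omitted), and all parameter values are illustrative.

```python
import math, random

# Minimal Metropolis sketch for the chain moves, in the reduced units of
# the text; sigma must still be tuned so that <eps> ~ 0 (see below).
beta, a, N = 1.0, 1.0, 50
kappa, tau, sigma = 125.0, 1.0, 1.2

def energy(theta, eps):
    """Discretized bending energy + sigma*L - f.R (simplified: the force
    is taken along e_x instead of the instantaneous end-to-end direction)."""
    seg = a * (1.0 + eps)
    bend = (kappa / (2 * seg)) * sum(
        (theta[i + 1] - theta[i]) ** 2 for i in range(len(theta) - 1))
    Rx = seg * sum(math.cos(t) for t in theta)
    return bend + sigma * seg * len(theta) - tau * Rx

def mc_move(theta, eps, d_theta=0.1, d_eps=1e-3):
    old = energy(theta, eps)
    if random.random() < (N - 1) / N:    # move A: one angle
        i = random.randrange(len(theta))
        new_theta = theta[:]
        new_theta[i] += random.uniform(-d_theta, d_theta)
        new_eps = eps
    else:                                # move E: global extension
        new_theta, new_eps = theta, eps + random.uniform(-d_eps, d_eps)
    dE = energy(new_theta, new_eps) - old
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        return new_theta, new_eps        # accept (Metropolis rule)
    return theta, eps                    # reject
```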
Equilibration criterion

In order to obtain meaningful averages, we had to be sure that our numerical experiment had reached equilibrium. Usually, it is enough to examine the number of Monte Carlo steps needed to decorrelate the longest modes in Fourier space, which are the slowest to relax, and then choose a much larger number of steps for the simulation [106]. As our system is really simple (in the sense that the energy does not have several local minima) and as we have not chosen too long chains, we adopted two criteria that, together, are stronger than the relaxation of the longest modes. First, for the equilibration of the angles, we required the average of R, given in eq. (2.56), to be ∼ 0. One could imagine the case of a rotating frozen configuration, which would also yield ⟨R⟩ ∼ 0; to exclude this improbable situation, we visually checked a set of configurations. Secondly, to study the equilibration of the extension ε, we examined the evolution of ε over time: when it reached a plateau, we considered the system at equilibrium. In our experiments, we took N = 50 and a large βκ = 125, to ensure that the chain departs weakly from a straight line. For typical values ranging from τ = −0.2 up to τ = 5 (in units of (βa)^{−1}), equilibration was attained after 5 × 10⁵ steps. In practice, we performed 8 × 10⁶ steps, to be sure that the sampled configurations had an equilibrium distribution. At the end of each step, we calculated Θ(x) (when there were no overhangs).

Adjusting σ

As we applied τ to the filament, we had to adjust σ in order to have ⟨ε⟩ ∼ 0. To also estimate the uncertainty on σ, for each pair (κ, τ) we determined σ_min, corresponding to ⟨ε⟩_max = 10⁻³, and σ_max, corresponding to ⟨ε⟩_min = −10⁻³.
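A minimal sketch of the false-position adjustment of σ described above; here mean_eps is a stand-in for a full Monte Carlo run returning ⟨ε⟩ at a given σ.

```python
# False-position (regula falsi) search for the sigma that yields <eps> = 0.
# mean_eps(sigma) should run the Monte Carlo simulation and return <eps>.
def false_position(mean_eps, s_lo, s_hi, tol=1e-3, max_iter=20):
    f_lo, f_hi = mean_eps(s_lo), mean_eps(s_hi)
    assert f_lo * f_hi < 0, "the bracket must straddle <eps> = 0"
    for _ in range(max_iter):
        s_new = s_hi - f_hi * (s_hi - s_lo) / (f_hi - f_lo)
        f_new = mean_eps(s_new)
        if abs(f_new) < tol:
            return s_new
        if f_new * f_lo < 0:
            s_hi, f_hi = s_new, f_new
        else:
            s_lo, f_lo = s_new, f_new
    raise RuntimeError("no convergence within max_iter iterations")

# Example with a smooth stand-in response:
print(false_position(lambda s: 0.1 * (1.2 - s), 0.0, 3.0))  # ~1.2
```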
As predicted, we have indeed σ_t = τ. Quantitatively, the fit of the difference σ − τ with the theoretical equation given in eq.(2.53) can be seen in Fig. 2.12 (solid line). The agreement is excellent for Λ = 1.1 a⁻¹ (one-parameter fit), which is a very reasonable value for the cutoff. In the same figure, we have also plotted the percent difference between σ and r. The behavior is non-trivial: the sign of σ − r changes at low tensions and we have σ ≠ r even at high tensions. To conclude, with this simple numerical experiment we could access the tension σ needed to fix the length of the filament — which one cannot usually measure in true experiments — simultaneously with the tension τ and with r. Our data corroborates the prediction that τ ≠ σ and verifies eq.(2.55). In addition, the difference between τ and σ was well fitted by eq.(2.51), giving some support to our theory. Regarding r, as discussed in section 2.2.3, we expected in general τ ≠ σ ≠ r, which our data seems to confirm. The way in which r depends on σ seems to be non-trivial (see Fig. 2.12) and further studies have to be done in order to understand it.

Some experimental implications

In this section, we discuss the implications of eq.(2.10) for micropipette experiments. Indeed, in these experiments it is usually assumed that σ ≈ τ, which, as we have seen, is not justified in the limit of low tensions. We will assume that we are dealing with very large GUVs and that the difference of pressure between the inside and the outside of the vesicle is very small, so that the membrane is locally equivalent to a flat membrane. A more detailed derivation for quasi-spherical vesicles of any size, taking into account the pressure difference, will be done in chapter 4. In the limit of small fluctuations, the excess area is given by a sum over the modes of the correlation function given in eq.(2.7). In the thermodynamic limit, we obtain an expression whose last approximation is valid in the limit κ/A_p ≪ σ ≲ 10⁻² σ_r. Using the fact that in this limit σ ≃ τ + σ_0, we finally obtain the excess area as a function of τ. In micropipette experiments, one measures the percent difference of projected area between the initial configuration A_p^i and the final configuration A_p^f, i.e. (A_p^f − A_p^i)/A_p^i. First of all, note that the percent increase of the projected area is very small (less than 0.5%), which corresponds to the validity range of our results (no stretching). For τ > σ_0, we have a linear relation between the logarithm of τ and the percent change of projected area, with roughly the same slope whether one takes into account the difference between τ and σ (eq.(2.75)) or not (eq.(2.74)). Thus, it is justified to deduce κ by fitting a straight line to data in this region, as is usually done experimentally (see section 1.4, Fig. 1.28(a)). We predict however a different behavior for small tensions (τ < σ_0): as we can see in Fig. 2.13, in the shaded area we do not have a linear relation between the logarithm of τ and the area excess. Sadly, we cannot identify this behavior in the data of Fig. 1.28, but our prediction can be tested by further experiments using vesicles under small tension.

Natural excess area

Another related consequence concerns the natural excess area, i.e., the measure of the fluctuations of a membrane under no external force (τ = 0). Using eq.(2.73) at τ = 0, i.e. σ = σ_0, we obtain an expression for α_eq, which yields α_eq ≃ 0.03, 0.01, 0.005 for βκ = 5, 25, 50, respectively. A numerical check is sketched below.
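As a quick numerical check of these numbers, the sketch below (in Python, assuming the Gaussian-level mode sum behind eq.(2.73), with σ set to σ_0 at τ = 0 and an infrared cutoff 2π/√A_p) reproduces the quoted values of α_eq up to the precise choice of cutoff:

    import numpy as np

    kB_T = 4.1e-21                          # J, room temperature
    Lam = 1.0 / 5e-9                        # m^-1, upper cutoff Lambda = 1/a
    sigma0 = kB_T * Lam**2 / (8 * np.pi)    # N/m, the constant sigma_0

    def alpha_eq(beta_kappa, Ap=(10e-6) ** 2):
        """Natural excess area at tau = 0 (sigma = sigma_0), Gaussian level:
        alpha = ln[(sigma_0 + kappa Lambda^2) / (sigma_0 + kappa q_min^2)]
                / (8 pi beta kappa),  with q_min = 2 pi / sqrt(Ap)."""
        kappa = beta_kappa * kB_T
        q_min = 2 * np.pi / np.sqrt(Ap)
        return np.log((sigma0 + kappa * Lam**2)
                      / (sigma0 + kappa * q_min**2)) / (8 * np.pi * beta_kappa)

    for bk in (5, 25, 50):
        print(bk, round(float(alpha_eq(bk)), 3))   # ~0.04, 0.010, 0.006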
Traditionally, however, one sets σ = 0 in eq.(2.70), which leads to a different expression for the natural excess area. The main difference between these two equations is the dependence on the projected area A_p, since for the latter one expects an explicit logarithmic dependence.

First evidences that τ ≠ σ

Here we present the first strong numerical and experimental evidence of the correctness of our results. In the first part, we present the results of recent numerical experiments far more complex than the one presented in section 2.3. In the second part, we discuss experiments on the adhesion of vesicles to solid substrates. We begin by mentioning a previous puzzling result by Rädler et al. [84], already introduced in section 1.4.3. We then report the attempts made by Seifert [108] to understand this result. Finally, we describe a recent experiment that seems to corroborate our predictions [87].

Numerical experiments

In the same ref. [3], discussed in section 2.2, a 2-d numerical experiment was proposed to check the author's predictions. The numerical system consisted of coarse-grained amphiphilic lipids represented by chains of beads (see Fig. 2.14). The black beads represent the hydrophobic tail, while the white one stands for the hydrophilic head. In addition, single beads stood for water molecules. The total energy was composed of four terms:

1. the hydrophobic interaction between the beads of the tail and the water/hydrophilic heads;
2. an attractive interaction between two molecules, given by a Lennard-Jones potential;
3. a harmonic potential between beads along a single molecule;
4. a three-body bending potential that models the effects of hydrocarbon chain stiffness.

Both the lipid molecules and the water were free to move inside a fixed cuboidal box with periodic boundary conditions. At each realization of the simulation, the size of the box could be changed, implying a change in the membrane's tension τ. The dynamics alternated sequences of Monte Carlo steps with sequences of molecular dynamics steps. In order to measure τ, the forces exchanged through imaginary cuts perpendicular to the membrane plane were averaged for different box sizes. As in real experiments, the tension r was measured through the fluctuation spectrum. Similarly to our simple 1-d simulation, the buckling transition was observed for high compressive tensions. For the non-buckled regime, the results for a simulation involving 1152 amphiphilic molecules and 7200 water molecules can be seen in Fig. 2.15. In agreement with the discussion of section 2.2, the author obtained indeed τ ≠ r. Moreover, negative tensions are observed for non-buckled membranes, as in our case. In this work, the relation given in eq.(2.9) was obtained by differentiating the free-energy with respect to the projected area. Assuming that r ≈ σ, as usually done in laboratory experiments, the author fitted eq.(2.9) to τ by adjusting one parameter related to the upper wavelength cutoff. As we can see in Fig. 2.16, the agreement is very good, supporting the predicted relation between τ and σ given in eq.(2.9).

Very recently, a similar simulation was performed by Neder et al. [109]. They also used coarse-grained amphiphilic molecules similar to the one shown in Fig. 2.14, and the energy contributions were roughly the same as in [3], supplemented by a term −τ A_p. Thus, the main difference in this simulation is the fact that τ is imposed (and not measured) and that the box size was free to change. In other words, the simulation was performed at fixed τ and fixed N_p, the number of lipids.
The advantage of this method is the possibility of controlling τ directly, while in the method used in [3] the tension was imposed by the size of the box. The configurations were generated through a Monte Carlo algorithm, since only static measurements were done. Different phases of the membrane were observed, depending on the temperature of the system. In particular, we can see some snapshots for the liquid phase in Fig. 2.17 [109]. The dark gray molecules point upward from tail to head, while the light gray ones point downward. In the first snapshot at left, the membrane is tensionless. In the following two snapshots, the tension is increased (0.01 J/m² and 0.02 J/m², respectively). Note the interdigitations in the configuration of highest tension, due to the stretching of the membrane. As before, r was measured through the fluctuation spectrum for the tensions mentioned above (results presented in table III of [109]). For the tensionless state, they obtained r = (0.11 ± 0.19) × 10⁻⁴ J/m². This result seems to agree with our prediction that r should be bigger than τ, even though one should be cautious given the large error bars. For the systems under higher tension, however, the trend was inverted. This fact does not contradict our predictions, since stretching was not taken into account in our theory. Indeed, by measuring an overlap parameter, as well as the nematic order parameter for the liquid phase, the authors confirmed that stretching takes place for τ ≳ 0.01 J/m². Further simulations in the regime of low tension should be useful for comparison with our predictions.

Adhesion experiments: a puzzling result

Here we will comment on some experiments involving the adhesion of vesicles to flat solid substrates, discussed in section 1.4.3. In 1995, Rädler et al. [84] studied the adhesion of GUVs to solid substrates. They prepared GUVs of stearoyl-oleoyl-phosphatidylcholine (SOPC) in a 100 mM sucrose solution, so that the vesicles were denser than the buffer solution and sank to the bottom of the chamber, where a glass cover slip coated with a thin film of MgF₂ and bovine serum albumin had been deposited. The vesicle then floated above the glass slip with a height s(r), as shown in Fig. 1.33(b), in a weakly adhered state. Using reflection interference contrast microscopy (RICM) and phase contrast microscopy (see Fig. 2.18), the group could measure the radius of the vesicle R_ves and the radius of the contact region R_a, and reconstruct the height profile of the adhered patch. Under the supposition that the energy of the contact region was well described by eq.(1.38), and defining h(r) = s(r) − ⟨s⟩, they could infer:

1. the fluctuation spectrum ⟨|h(q)|²⟩, which once fitted with eq.(1.39) allowed them to obtain r and V″ (see Fig. 2.19). The bending rigidity for SOPC, obtained in previous experiments, was assumed to be 35 k_BT;

2. the correlation function ⟨h(x)h(0)⟩, which can be approximated by an exponential asymptote, G(x) ≃ ξ⊥² e^{−x/ξ}, where ξ is the distance beyond which two pieces of membrane are uncorrelated and ξ⊥ is a measure of the membrane roughness. From the experimental data, the authors deduced ξ through a fit and ξ⊥ from the value of G(0) (see Fig. 2.20);

Figure 2.20: Correlation G(x) [84]. The solid curve shows the fit from which ξ is deduced, while ξ⊥ is deduced from G(0).
3. as we have discussed in section 1.4.3, under some conditions the vesicle behaves as a spherical cap and we can define an effective contact angle that obeys an analogue of the Young–Dupré relation. After a reconstruction of the average height profile of the adhesion patch from the RICM images, Rädler et al. tried to obtain the effective contact angle θ_eff by a linear fit near the edge of the contact region (see Fig. 2.21). They also tried to fit a circle in the contact region in order to obtain the curvature radius R_c, which relates to the adhesion energy per unit area W_A through eq.(1.42) in the case R_c < R_ves. As the vesicle was extremely flat and rounded near the contact point, one could not obtain θ_eff precisely from the height profile. Indeed, measuring the contact angle at larger scales would lead to larger values of θ_eff. Concerning R_c, the fit was made difficult by the thermal fluctuations that remain even after averaging. Sadly, the values of R_c obtained were comparable to the vesicle's radius R_ves. Accordingly, eq.(1.42) could not be used to obtain the value of the adhesion energy W_A;

4. the average height ⟨s⟩.

To sum up, Rädler et al. were able to obtain r from the fluctuation spectrum and to make a rough estimate of the effective contact angle. They could not, however, measure directly τ nor the adhesion energy W_A. So, they made a theoretical estimate of the adhesion energy to check the self-consistency of their results.

Theoretical estimate of the adhesion energy per unit area W_A^theo

Here we explain just in broad lines how the value of the adhesion energy per unit area was theoretically estimated. The details are given in appendix C. Rädler et al. considered that the contact region of the vesicle was submitted to three potentials: two attractive, coming from the van der Waals interaction and from gravity, and one repulsive, of steric origin. They considered the screened van der Waals potential, since some part of the MgF₂ coating of the glass cover slip is expected to be present in small concentration in the buffer solution. To obtain the repulsive contribution coming from the reduction of the available configurations due to the substrate, they had to determine whether the adhesion was dominated by the bending rigidity or by the tension, which was done by studying ξ⊥ (see appendix C for details). They concluded that the behavior of the membrane was dominated by tension. Furthermore, they could also conclude that it was reasonable to assume σ ≈ r in this experiment. A plot of the total potential can be seen in Fig. 1.30(a). The potential presents a minimum whose depth can be considered as a first estimate of the adhesion energy per unit area, W_A^theo ≈ 10⁻⁹ N/m.

Coherence test: estimate of the adhesion energy through the Young–Dupré relation and discussion

The second strategy of the authors was to estimate the adhesion energy through the Young–Dupré relation, W_A = τ[1 − cos(θ_eff)], where θ_eff is the effective contact angle obtained through the fit shown in Fig. 2.21. Assuming that τ ≈ r, their results are summarized in table 2.3. Despite the imprecision in the measurements of the effective contact angle, there seems to be an inconsistency between the theoretical estimate and the value of the adhesion energy obtained from the Young–Dupré relation, which was initially blamed on the simplified theoretical framework, which did not account for the constraints on area and volume.
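To make the inconsistency concrete, here is a one-line numerical illustration of the Young–Dupré estimate (in Python; the input values are assumed, merely of the order of those in table 2.3, not the actual measured data):

    import math

    def young_dupre(tau, theta_eff):
        """Young-Dupre relation: adhesion energy per unit area
        W_A = tau * (1 - cos(theta_eff))."""
        return tau * (1.0 - math.cos(theta_eff))

    # tau ~ r ~ 1e-5 N/m and theta_eff ~ 0.1 rad give W_A ~ 5e-8 N/m,
    # i.e. tens of times larger than the theoretical estimate ~ 1e-9 N/m.
    print(young_dupre(1e-5, 0.1))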
Shortly after, Udo Seifert proposed a refined theory that also considered the constraints on area and volume [108]. His calculations yielded a different repulsive potential V^Seifert_steric. Considering only V^Seifert_steric and V_vdW, he concluded that the vesicle should present tension-induced adhesion, i.e., the potential should present a local quadratic minimum, as in the experiment of Rädler, when a condition involving ϕ_eff ≈ R_a/R_ves and a constant b is satisfied. Taking b = 1/2π and κ = 35 k_BT, this condition is satisfied for a ≃ 150[1 − cos(ϕ_eff)], and thus ϕ_eff^max ≃ 0.07 rad. In Rädler's experiment, ϕ_eff ≳ 1/4 (see table 2.3), and thus the refined theory could still not explain the data. Finally, Seifert examined the possibility that gravity could reconcile his theory with Rädler's experiment. He compared the total contribution to the potential energy coming from the bending rigidity with the one coming from gravity, where V_D is the vesicle's volume, g is the gravitational acceleration, Δρ is the difference of density between the liquid contained in the vesicle and the suspension medium, and h_CM is the height of the vesicle's center of mass. As a rough estimate, he assumed the vesicle to be a sphere and h_CM ≈ R_ves. For typical experimental values, the ratio is of approximately one hundred: gravity is thus very important in determining the shape of a vesicle. Neglecting the adhesion energy, which is justified in the case of weak adhesion, and neglecting the bending energy, the contact angle of the vesicle should be zero and the tension should simply be given by the balance between gravity and the mechanical tension.

The solution and a subsequent confirmation

In 1995, the fact that r ≈ σ differed very significantly from τ was totally unexpected. Let us now re-examine the experimental data of that time under our theoretical framework. Our theory predicts that τ should indeed be different from σ. As a first approximation, let us assume that we are in the conditions where τ = σ − σ_0. If we look at the values of table 2.3, we have r ≈ σ ≃ 10⁻⁵ N/m. If we suppose that the adhesion energy per unit area is indeed ∼ 10⁻⁹ N/m and we invert eq.(2.79), we obtain τ ≃ 10⁻⁶ N/m, yielding σ_0 ≃ 10⁻⁵ N/m. Recalling the definition of σ_0 presented in eq.(2.11), this result implies that the microscopic cutoff is a ≈ 4 nm, which is very reasonable. Therefore, our theory can explain the results of Rädler et al.

Recently, Sengupta and Limozin made a careful study of the adhesion of vesicles [87]. They examined the adhesion of stiffer GUVs, composed of phosphatidylcholine and cholesterol and filled with a 200 mM sucrose solution, on a substrate coated with polymers in three different concentrations: without polymer (no-polymer coating), with c_pol = 0.75 µm⁻² (sparse polymer coating) and with c_pol = 1 µm⁻² (dense polymer coating). They systematically observed the pre-nucleation state (weak adhesion), the nucleation, i.e., the formation of the first patch of strongly adhered membrane, the growth of these patches, and the mechanics in the final state of strong adhesion (see Fig. 2.22). From a theoretical point of view, the predicted adhesion potential for the three coatings is shown in Fig. 1.30(b) and reproduced in Fig. 2.23. For the no-polymer coating, there is just a deep minimum and thus strong adhesion, while for the sparse polymer coating there is also a shallow minimum at ≈ 100 nm, corresponding to weak adhesion. For the dense polymer coating, only the shallow minimum remains and only weak adhesion is predicted.
The nucleation thus represents the passage from the shallow minimum to the deeper one. Experimentally, as in former adhesion studies, RICM images using two different wavelengths were used to obtain an intensity map of the adhered region. This time, however, a major improvement was introduced in the reconstruction of the height profile from these images: the case of profiles with high and variable curvature was addressed for the first time. Indeed, until then, only the deviations caused by pure tilts and by profiles of constant curvature were accounted for. With this new method, the membrane profile was described by a succession of small curved segments, and the reconstruction was made fringe by fringe (see a description of the method in [87]). The advantage of this method is that it allows a more reliable profile reconstruction even for steeper profiles, thus allowing a more precise determination of the contact angle. The results concerning the membrane mechanics can be summarized as follows:

1. Pre-nucleation state: in this state, the membrane presents strong undulations in the adhesion region. The vertical roughness ξ⊥ ≃ 15 nm was measured, from which σ ∼ 10⁻⁵ N/m could be deduced (the relation between these quantities is given in appendix C). As Seifert had shown in his work, gravity dominates in the case of weak adhesion, so that the tension τ can be deduced from eq.(2.82), leading to τ = 10⁻⁷ − 10⁻⁶ N/m. Sengupta and Limozin verified that this discrepancy is compatible with τ = σ − σ_0 for a ∼ 5 nm.

2. Saturation of the growth of the strongly adhered patch: the vesicle gains energy by increasing the contact area. In the process, its excess area decreases down to the equilibrium represented in C of Fig. 2.22. As in micropipette experiments, the excess area before and after strong adhesion should verify eq.(2.75). This relation was verified for all vesicles studied in this work within a factor between five and ten, for σ_0 ≈ 10⁻⁶ N/m.

3. Final state of strong adhesion: instead of measuring the effective contact angle θ_eff and the curvature radius R_c as in Rädler's work, the authors used the second method proposed in section 1.4.3 to obtain the adhesion energy W_A and the membrane tension τ, by measuring the contact angle θ_eff and the length λ (see Fig. 2.22). This time, the results obtained were more reliable, due to the new reconstruction method and to the fact that the effective angle is more easily defined in the case of strong adhesion. From λ, the tension τ could be directly derived (eq.(1.45)). Using the Young–Dupré relation and the measured values of θ_eff, the adhesion energy W_A for each polymer coating could be obtained. The values obtained for W_A for the different coatings were compatible with the theoretical values, corresponding to the deep minima of the adhesion potential (see Fig. 2.23). Sadly, in this case one can measure neither r nor σ through the fluctuations of the membrane, since the membrane is too close to the substrate.

The results described in points 1 and 2 are the first strong pieces of evidence in agreement with our predictions.

In a nutshell

In this chapter, we have discussed the difference between the mechanical tension τ one applies through micropipettes, for instance, and the tension σ usually added to the Hamiltonian in theoretical calculations.
Quantitatively, for large membranes, we have found the relation between τ and σ given in eq.(2.10), whose last approximation, τ ≃ σ − σ_0, is valid for small tensions (σ < 10⁻² σ_r, where σ_r = κΛ² is of the order of the rupture tension). The constant σ_0 depends only on the temperature and on the upper wave-vector cutoff Λ, through σ_0 = k_BT Λ²/(8π). The cutoff Λ is related to a microscopical length of the same order as the membrane thickness. Numerically, at room temperature and assuming Λ = 1/(5 nm), we obtain σ_0 ≈ 5 × 10⁻⁶ N/m, which is not so small. Indeed, we predict non-negligible corrections for experiments involving small tensions. We have also questioned a former demonstration asserting that the coefficient of the q² term of the fluctuation spectrum is the mechanical tension τ.

Fluctuation of forces in planar membranes

In the last chapter, we have examined the average force exerted through a cut of projected length L on a fluctuating planar membrane, ⟨f⟩ = τ L e_x, where e_x is the direction perpendicular to the cut. Using the projected stress tensor, we have obtained τ as a function of the tension σ, introduced in the Hamiltonian in order to fix the average area of the membrane. In this chapter, we would like to study the mean square deviation of this force, defined as Δf² ≡ ⟨f²⟩ − ⟨f⟩². In the following, we will simply call Δf the fluctuation of the force. The results exposed here were obtained in collaboration with Jean-Baptiste Fournier and remain unpublished. Our motivation is three-fold; in particular, membrane tubes present highly fluctuating Goldstone modes [7]. Moreover, from an experimental point of view, in order to extract or hold a tube, one applies a point force directly, using optical tweezers or magnetic beads, as we have seen in section 1.4.4. These techniques are very precise, and one should thus be able to measure Δf, even if it is of the order of some pN.

In the first section, we will define Δf precisely and recall some results obtained in the last chapter. Then, in section 3.2, we shall introduce some diagrammatic tools, which are very useful since they make calculations visual. Using diagrams, one can easily identify terms whose contribution is zero and rapidly group other terms. This will prove especially useful in the calculation of the fluctuation of the force. To gain familiarity with these diagrams, we recover the result given in eq.(2.9) in section 3.3. In section 3.4, the most technical one, we shall evaluate the correlation of each term of the stress tensor. These results are finally used in section 3.5 to obtain Δf.

Definitions and former results

Let us consider the same weakly fluctuating planar membrane described in chapter 2, whose projected area on a plane Π parallel to the average plane of the membrane is A_p (see Fig. 2.1, which we reproduce in Fig. 3.1). The Hamiltonian is given by eq.(2.6) and we recall that the corresponding correlation function reads as in eq.(3.3), where q = (2π/√A_p)(m, n) and

Σ_q ≡ Σ_{|m| ≤ N_max} Σ_{|n| ≤ N_max},   (3.4)

with N_max = √A_p/(2πa), corresponding to a maximum wave-vector q_max = 1/a, with a a microscopical length of the order of the membrane thickness. As shown in Fig. 3.1, we consider a cut of projected length L parallel to e_y. The average force exchanged through the cut is obtained by integrating ⟨Σ_ix⟩ along the cut, where the Σ_ij are the components of the projected stress tensor for planar membranes introduced in section 1.5.2. In chapter 2, we have obtained the corresponding average force (3.6). In section 3.3, we will recover this result using diagrammatic tools. The squared fluctuation of the force is given by Δf² = Δf_x² + Δf_y² + Δf_z², where Δf_x² (3.8), Δf_y² (3.9) and Δf_z² (3.10) are the squared fluctuations of the force perpendicular to the cut, parallel to the cut, and normal to the average membrane plane, respectively.
Note that we have omitted ⟨Σ_yx⟩ and ⟨Σ_zx⟩ in eq.(3.8) and eq.(3.10), respectively, as these averages vanish (see section 2.1). The evaluation of the force fluctuation is made in two steps: first, we will evaluate the correlations in section 3.4; then, in section 3.5, we will integrate these correlations twice over the cut's length. First of all, let us introduce some diagrammatic tools.

Diagrammatic tools

In physics, the word field is used to denote any physical quantity that varies in space. Accordingly, the height of the membrane h(r) is a field. Inspired by the Feynman diagrams used in statistical field theory, we associate graphical representations to the fields in order to make calculations visual, allowing quicker simplifications. Each field is represented by a simple straight line with a point appended to it. This point, called a vertex, represents the point r at which the field is evaluated. When two or more fields are evaluated at the same point, we represent them connected by the same vertex. Besides, we represent differentiation with respect to x or y by a slash or a dot over the lines. We present a basic diagrammatic vocabulary in table 3.1.

The thermal averages of fields are performed using Wick's theorem, which states that the average of an even number of fields is given by the sum of all possible complete contractions. By a complete contraction, we mean linking the free ends of a set of fields, two by two, in such a way that no single field remains. The continuous line formed after the contraction between two fields represents the correlation function G(r) (in this context also called the propagator), suitably differentiated. If the number of fields is odd, the theorem states that the average vanishes. Let us see an example of the simplest case, involving only two fields, ⟨∂_x h(r) ∂_y h(r′)⟩ = ∂_x|_r ∂_y|_{r′} [G(r − r′)], where ∂_x|_r stands for the derivative with respect to x at the point r. The arrow indicates that the propagator leaves at the vertex r and enters at the vertex r′. Its direction is arbitrary: by inverting it, we would obtain the same result, written in terms of G(r′ − r). In other words, every slash (resp. dot) contributes to the sum a factor i q_x (resp. i q_y) if the propagator enters the vertex to which it is attached, and −i q_x (resp. −i q_y) otherwise. From this result, it is easy to show that whenever we have a correlation function of the same kind as the one given in eq.(3.3), we can group slashes and dots using the following rule: in any propagator branch, one can shift a slash or a dot from one vertex side to the other if one multiplies the diagram's coefficient by −1; once all derivatives are on the same side, the side matters no more. All the derivatives can be taken at the same point, contributing (i q_x) for a slash or (i q_y) for a dot, and we represent them in the center of the propagator. As a second example, let us consider a typical case of the average of two fields evaluated at the same point. Finally, in section 3.4, we will deal with averages involving four fields. A representative example follows, where we have numbered each field to highlight all the possible complete contractions.

Getting familiar: evaluating f with diagrammatic tools

From eq.(3.5), we see that to evaluate f, one needs to evaluate the average of some components of the projected stress tensor. These components, introduced in section 1.5.2,
can be written in terms of diagrams. The average of eq.(3.22) is the simplest one to evaluate: since each of its terms contains an odd number of fields, Wick's theorem directly implies a vanishing average. We shall evaluate in detail the average of Σ_xx as an example, the vertex indicating the point r at which the average is calculated. Note that the average does not depend on it, given the isotropy of the system. Grouping the differentiations, and noting that in the particular case of these diagrams, with only one vertex, the correlation function is calculated at r = r′, it follows that more generally we can globally exchange slashes and dots in a diagram. Sadly, this nice property does not hold for the evaluation of the kind of diagram shown in eq.(3.19), which is particularly important to evaluate the force fluctuation in the next section. For the present case, the sum of diagrams reads as eq.(3.28) and, as expected, we have recovered the expression of τ given in eq.(2.9).

Taking into account the rules introduced in the last section, the average of Σ_yx is very simple to evaluate. Grouping the derivatives and evaluating the first diagram — recalling that q_x = 2πm/√A_p, that q_y = 2πn/√A_p, and that the sum over q stands for two sums, one over m and the other over n, both running from −N_max up to N_max — one can readily show that the contribution of this diagram vanishes. More generally, for this kind of diagram, an odd number of slashes or dots implies a vanishing contribution. So we conclude that ⟨Σ_yx⟩ = 0 and we re-obtain the result given in eq.(3.6).

Evaluation of the projected stress tensor correlation

In this section, we will evaluate the correlation of the stress tensor at two general points over the projected cut, r = x e_x + y e_y and r′ = x e_x + y′ e_y. As we discussed in section 3.2, these calculations will involve mostly diagrams of the general family (3.31). We thus begin by recalling two properties of these diagrams:

1. they may be separated into components of the form

G_{n,m}(y − y′) ∝ Σ_q q_x^n q_y^m e^{i(y−y′)q_y} / (σq² + κq⁴)   (3.32)
≈ ∫₀^Λ dq ∫₀^{2π} dθ q^{n+m+1} cosⁿθ sinᵐθ e^{i q(y−y′) sinθ} / (σq² + κq⁴),   (3.33)

where the two last passages (from the discrete sum to an integral, then to polar coordinates) are good approximations for very large membranes. We recall that Λ is the upper wave-vector cutoff, given by 1/a, where a is a microscopical length of the order of the membrane thickness;

2. the contribution of propagators with an odd number of slashes — and only slashes — vanishes. Indeed, one can easily prove this by remarking that the sum over q is symmetric. Note that this property would not hold if we did not have x = x′.

Evaluation of C_xx

Let us begin by calculating C_xx (3.35). In terms of the G_{n,m} defined in eq.(3.32), we obtain eq.(3.36). Considering very large membranes and performing the angular integral in eq.(3.33) for each G_{n,m}, eq.(3.36) becomes an expression in terms of Y = |y − y′| and of Bessel functions, with J_i standing for the Bessel function of the first kind of order i. Note that, as expected given the isotropy of the system, C_xx depends only on the distance between the points. At this point, it is useful to rewrite eq.(3.38) in terms of dimensionless coefficients B̃_ij. To simplify notations, we have omitted the dependence of the B̃_ij on the reduced tension r, defined in eq.(3.41) in terms of σ_r, which is of the order of the membrane rupture tension. Eq.(3.37) can then be rewritten in terms of

σ_0 = κΛ²/(8πβκ),   (3.42)

which was already introduced in the last chapter. Note that the terms inside the brackets are dimensionless and that C_xx actually depends only on σ_0, on r and on ΛY ≡ |y − y′|/a.
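For readers wishing to reproduce such curves, the sketch below (in Python) evaluates a G_{n,m}(Y)-type integral of eq.(3.33) by direct quadrature. It is a sketch under assumed overall prefactors and illustrative parameter values; for odd m, the cosine should be replaced by a sine, and large ΛY requires a finer angular grid.

    import numpy as np
    from scipy.integrate import quad

    kB_T = 4.1e-21                 # J
    kappa = 25.0 * kB_T            # J, illustrative bending rigidity
    sigma = 1.0e-6                 # N/m, illustrative tension
    Lam = 1.0 / 5e-9               # m^-1, upper cutoff 1/a
    q_min = 2.0 * np.pi / 10e-6    # m^-1, infrared cutoff ~ 2 pi / sqrt(Ap)

    def G_nm(n, m, Y, n_theta=2001):
        """Quadrature of eq.(3.33): integral over q and theta of
        q^{n+m+1} cos^n(th) sin^m(th) cos(q Y sin th) / (sigma q^2 + kappa q^4);
        for even m only the real part of e^{i q Y sin th} survives."""
        th = np.linspace(0.0, 2.0 * np.pi, n_theta)
        def angular(q):
            return np.trapz(np.cos(th)**n * np.sin(th)**m
                            * np.cos(q * Y * np.sin(th)), th)
        val, _ = quad(lambda q: q**(n + m + 1) * angular(q)
                      / (sigma * q**2 + kappa * q**4),
                      q_min, Lam, limit=300)
        return kB_T * val / (2.0 * np.pi)**2

    print(G_nm(2, 0, 5e-9))        # e.g. n = 2, m = 0 at Y = a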
In Fig. 3.2 we can see C_xx as a function of Y, in units of a, for different tensions. First of all, we notice that the curves hardly depend on r — and consequently on the tension. Accordingly, the decay of the correlation is dominated by the bending rigidity, which is not evident a priori, since Σ_xx depends strongly on the tension. Secondly, C_xx decays relatively fast: over about five times the microscopical length a, for any tension. At last, in the following we will need to integrate C_xx; it will thus be useful to remark that, for any tension, it is very well approximated by a Gaussian (see eq.(3.55) below).

Evaluation of C_yx

Following the same route as in the last section, C_yx can be written in terms of diagrams, and in the thermodynamic limit we obtain an expression in terms of Y = |y − y′| and of the B̃_ij defined above. The correlation C_yx is very similar to C_xx, sharing with it three features:

1. as before, C_yx normalized by the auto-correlation depends only weakly on the tension, especially in the regime of low tensions; the shape of the correlation is dominated by the bending rigidity;
2. C_yx relaxes over approximately five times the microscopical length a;
3. the same Gaussian approximation holds, although it is less accurate.

Evaluation of C_zx

The correlation of the normal component is the simplest one to evaluate. Writing it diagrammatically, in the thermodynamic limit one can integrate C_zx not only over the angular coordinate but also over q, obtaining a closed-form expression. As we can see in Fig. 3.4, C_zx normalized by C_zx(0) has roughly the same features as the former correlations: it hardly depends on the tension and it becomes negligible for distances bigger than ≈ 5a. This time, however, C_zx is very simple, being directly given by an analytical function.

Summing-up

Here we sum up some important results obtained in this section. First, the three correlations, normalized by their value at y = y′, share the following features:

1. the normalized correlation depends only weakly on the tension;
2. they present roughly a Gaussian behavior; moreover, C_xx and C_yx are well approximated by a Gaussian of width of a few Λ⁻¹ (eq.(3.55)), where Λ⁻¹ = a is the smallest wavelength cutoff;
3. the correlation is negligible for distances larger than 5a, which is really small, considering a ≈ 5 nm.

Finally, as the dependence on the tension happens mainly through the correlation at y = y′, it is interesting to plot C_xx(0), C_yx(0) and C_zx(0), given in eq.(3.43), eq.(3.49) and eq.(3.54), respectively, as functions of the tension (see Fig. 3.5). Two important features of these curves will be reflected in the force fluctuation:

1. first, in the three cases, the dependence on the tension is not pronounced, implying that one could actually have neglected from the start every diagram proportional to σ² and σκ in the last sections;
2. secondly, the correlation of the component of the stress tensor normal to the membrane, C_zx, is far bigger than the two other contributions, which are comparable between them.

Fluctuation of the force

To obtain the square of the force fluctuation in each direction, defined in eqs.(3.8)–(3.10), we must integrate the correlation functions twice over the cut's length:

Δf_i² = ∫₀^L dy ∫₀^L dy′ C_ix(y − y′),   i ∈ {x, y, z}.   (3.56)–(3.58)

In the last section, we have seen that the correlations decrease very quickly, with a characteristic length of about ℓ = 5a ≈ 25 nm. Recalling that L is the length of the projected cut, it is thus reasonable to assume L ≫ ℓ. In Fig. 3.6, we can see a graphical representation of the integrals of eqs.(3.56)–(3.58).
There we see that, for L ≫ ℓ, eqs.(3.56)–(3.58) can be well approximated by Δf_i² ≈ L ∫ dY C_ix(Y). For the first two cases, it is not possible to obtain an analytical expression from the exact form of the correlation (eq.(3.37) and eq.(3.48)). We will thus use the Gaussian approximation given in eq.(3.55), yielding the scaling laws quoted below.

In a nutshell

In this chapter, we have evaluated for the first time the fluctuation of the force exchanged through a cut of projected length L in a planar membrane. To do so, we have introduced some diagrammatic tools that will also be useful in the following chapters. The calculation was done in two steps: first, we evaluated the correlations of some elements of the projected stress tensor, and then we integrated them over the cut. These correlations present some interesting features: their shape hardly depends on the tension and they decrease very quickly, becoming negligible for distances larger than 5a ≈ 25 nm, with a of the order of the membrane thickness. For the fluctuation of the force component transverse to the cut, Δf_x, and parallel to it, Δf_y, we have obtained the same scaling behavior, whereas for the component perpendicular to the membrane, Δf_z, we have obtained a different one. These scaling laws hold up to a numerical factor of order unity that depends very weakly on the tension. Interestingly, the scaling law for Δf_x and Δf_y depends neither on the bending rigidity nor on the tension.

Quasi-spherical vesicles

In chapter 1, we have seen that vesicles are widely used in experiments, since they are easy to assemble and to manipulate. Vesicles are used both in micropipette and adhesion experiments, in which one increases the mechanical tension τ by flattening the membrane's fluctuations, and in contour analysis experiments, in which one measures r, the large-scale counterpart of the tension σ, through the fluctuation spectrum. In chapter 2, we have derived τ as a function of σ for planar membranes, obtaining eq.(4.1) in the limit of large membranes. In this equation, σ_r = κΛ² is a tension of the order of the rupture tension, Λ = 1/a, where a is a microscopical cutoff of the order of the membrane thickness, and σ_0 = σ_r/(8πβκ). This relation reduces simply to τ ≃ σ − σ_0 for membranes under small tensions (σ < 10⁻² σ_r). We do not know, however, whether eq.(4.1) still holds for vesicles, since they have a different geometry and present a supplementary volume constraint. In this chapter, we shall thus calculate τ from the projected stress tensor for the case of quasi-spherical vesicles. We shall examine both the usual case of a closed vesicle, whose volume is constrained, and the case of poked vesicles. We call poked vesicles those vesicles that are free to exchange liquid with the outer medium. Experimentally, this can be achieved by embedding special proteins in the membrane or by making holes in it with a micropipette. They can however keep a pressure difference with the outer medium if the inner/outer fluid contains molecules bigger than the holes, so that they cannot transit across the membrane. In particular, we will address the following interesting questions, the first three having experimental implications while the last one deals with a more theoretical issue:

1. What is the difference between τ for a quasi-spherical vesicle (closed or poked) and τ for a planar membrane? Is there a characteristic radius above which they coincide, in which case one can simply use the relation given in eq.(4.1)?

2. How does the volume constraint affect the expression for τ?
3. Can τ be negative, in which case the inner pressure of the vesicle would be smaller than the outer one?

4. Can τ be obtained by differentiating the free-energy with respect to the projected area? If so, what does projected area mean in the case of a vesicle?

Usually, as discussed in chapter 1, one should use the ADE-model Hamiltonian. We will however use the simpler Helfrich Hamiltonian, introduced in section 4.1, where this choice will be justified. Following the same reasoning as in section 1.5.2, we derive the projected stress tensor for a quasi-spherical geometry in section 4.2. Finally, we discuss the first three questions in section 4.5. We show that it is justified to use the relation given in eq.(4.1) for quasi-spherical vesicles, closed or poked, with a radius bigger than 1 µm. Besides, the volume constraint seems to be unimportant for the dependence of τ on σ. Experimentally, however, σ is not measurable. With vesicles, the true control parameter is the area excess, which depends considerably more on the volume constraint. We thus expect some difference between closed and poked vesicles, especially in the case of small vesicles. Lastly, we show that negative values of τ are expected well before the transition to oblate shapes in both cases, implying that vesicles may support an internal pressure smaller than the outer one. At last, in section 4.6, we shall address the more theoretical question of recovering τ for closed and poked vesicles by differentiating the free-energy. Differently from the case of planar membranes, the meaning of the term projected area is not clear for a vesicle: it can refer to the area of a sphere with the average radius or to the area of a sphere of volume V, for instance. In this section, we shall see that the term is indeed not well-defined, since it corresponds to a different area depending on whether the vesicle is closed or not. All calculations and discussions presented here were done in collaboration with Jean-Baptiste Fournier and Alberto Imparato. The main results can be found in ref. [5].

Parametrization and effective Hamiltonian

We consider a quasi-spherical vesicle whose area A and volume V are fixed (for closed vesicles). Its shape is parametrized by r(θ, φ) = R [1 + u(θ, φ)] e_r, where u ≪ 1 (see Fig. 4.1). For closed vesicles, as in Seifert's work [60], we choose the sphere of volume V as the reference sphere, so that R = [3V/(4π)]^{1/3}. Indeed, in experiments, one can control V by lowering the ion concentration of the medium outside the vesicle, so that it inflates to its maximum. Equivalently, using a micropipette, one can apply a large pressure difference across the membrane. In both cases, the excess area is negligible and thus, from the optically resolvable shape, one deduces V. In the case of poked vesicles, as there is no volume constraint, one cannot control V. Instead, one can measure the average radius of the vesicle and deduce its average volume. In this case, we thus simply choose the average vesicle shape as the reference sphere, so that ⟨u(θ, φ)⟩ = 0. The area constraint reads A = ∫ |∂_θ r × ∂_φ r| dθ dφ, where ∂_θ r ≡ ∂r/∂θ and ∂_φ r ≡ ∂r/∂φ, yielding an expression in terms of u. Here and throughout this section, u_i ≡ ∂u/∂i and u_ij ≡ ∂²u/∂i∂j, where i, j ∈ {θ, φ}; Latin indices will denote either θ or φ, not r. The volume constraint, important for closed vesicles, reads similarly in terms of u. As we have seen in section 1.3, the energy of a vesicle is best described by the area-difference elasticity (ADE) model. Seifert [60] has however shown that, in the
quasi-spherical limit, the ADE Hamiltonian is equivalent to the minimal Helfrich model, i.e., the spontaneous curvature (SC) model with vanishing spontaneous curvature. Hence, we adopt the latter, which corresponds to an effective Hamiltonian supplemented by the area and (if necessary) volume constraints given in eqs.(4.5)–(4.6). While the volume constraint is quite easy to implement, it is difficult to handle the surface constraint exactly [60]. We shall therefore use the traditional approach, namely introducing a Lagrange multiplier σ playing the role of a tension in order to take the area constraint into account. Again, as discussed by Seifert [60], this approach gives correct results in the small excess area limit, in which we shall place ourselves in the following. The effective Hamiltonian thus reads as the Helfrich bending energy plus the term σA, with the additional constraint given by eq.(4.6) for closed vesicles. From differential geometry, dA is given by eq.(4.4), with n = (∂_θ r × ∂_φ r)/|∂_θ r × ∂_φ r| the normal to the surface. Up to second order in u(θ, φ), we thus have an effective Hamiltonian density [110], [111],

h = (2κ + R²σ) sin θ + 2 sin θ [R²σ u − κ (u_φφ csc²θ + u_θ cot θ + u_θθ)] + ⋯

Derivation of the stress tensor for a quasi-spherical geometry

Let us consider an infinitesimal cut at constant longitude (φ constant) separating a region 1 from a region 2 (see Fig. 4.1). The normal to the projection of this cut onto the reference sphere is m = e_φ. Analogously to the case of planar membranes presented in section 1.5.2, the projected stress tensor Σ in spherical geometry relates the force transmitted through the cut to the projected length of the cut. For an oblique cut, dF is obtained by decomposing m along e_θ and e_φ. The derivation of the projected stress tensor in spherical geometry follows the same route as for the planar geometry (section 1.5.2). We consider a patch of membrane delimited by a closed curve, corresponding to a domain Ω on the reference sphere enclosed by the curve ∂Ω. The membrane within the patch is assumed to be deformed, at equilibrium, by means of a distribution of surface and boundary forces (and a distribution of boundary torques). To each point of this patch, we impose an arbitrary displacement δa = δa_r e_r + δa_θ e_θ + δa_φ e_φ that keeps the orientation of the membrane's normal n constant along the boundary, so that the torques produce no work (see Fig. 4.2).

Figure 4.2: On the left, a patch of quasi-spherical membrane before (shaded shape) and after (dashed red shape) the displacement δa. On the right, the same displacement in the (θ, φ, r) space, from which it is easier to see the relation between δa, δu, δθ and δφ.

Closed vesicles

In this section we shall derive the effective tension for closed vesicles, τ_closed, using both the stress tensor and the free-energy. As these results are readily transposable to the case of poked vesicles, we shall present here a more detailed account of our derivations.

Thermal averages and correlations for closed vesicles

In order to calculate the effective tension, we will see in section 4.3.2 that we need to evaluate the thermal average of Σ_θθ. To this aim, we perform the standard decomposition of u(θ, φ) in spherical harmonics [60], [110], [111],

u(θ, φ) = Σ_{l=0}^{L} Σ_{m=−l}^{l} u_{l,m} Y_{l,m}(θ, φ),

where L is a high wave-vector cutoff (see the discussion in the following). Note that the modes l = 1, which correspond to simple translations, are discarded.
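For concreteness, a quasi-spherical shape of this kind can be evaluated numerically from a given set of coefficients; the short Python sketch below (using scipy's spherical harmonics, with hypothetical coefficients and the real part kept for simplicity) merely illustrates the parametrization:

    import numpy as np
    from scipy.special import sph_harm

    def quasi_spherical_shape(theta, phi, u_lm, R=1.0):
        """r(theta, phi) = R * (1 + u), with u expanded on spherical harmonics
        Y_{l,m} for l >= 2 (l = 1 translations discarded, l = 0 fixed by the
        constraints). u_lm maps (l, m) to a coefficient; keeping the real part
        amounts to assuming coefficients compatible with a real shape.
        theta (polar) and phi (azimuthal) must be arrays of the same shape."""
        u = np.zeros_like(theta, dtype=complex)
        for (l, m), c in u_lm.items():
            # scipy's convention: sph_harm(m, l, azimuthal angle, polar angle)
            u += c * sph_harm(m, l, phi, theta)
        return R * (1.0 + u.real)

    # a single weak l = 2 deformation evaluated on a grid:
    th, ph = np.meshgrid(np.linspace(0, np.pi, 91),
                         np.linspace(0, 2 * np.pi, 181), indexing="ij")
    r = quasi_spherical_shape(th, ph, {(2, 0): 0.05})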
In terms of the u_{l,m} and up to order u², eq.(4.5) and eq.(4.6) take corresponding quadratic forms. The volume constraint V = (4/3)πR³ (recall the definition of R for closed vesicles) therefore fixes u_{0,0} in terms of the other modes [60]; using eq.(4.37) and integrating over θ and φ, the Hamiltonian for closed vesicles in terms of the u_{l,m} is given by [60]

H_closed = 4πR²σ + (1/2) Σ_{l,m} H̃_l |u_{l,m}|²,

where

H̃_l = κ (l − 1)(l + 2)(l² + l + σ̃),

and σ̃ = σR²/κ is the reduced tension. Note that we have discarded in H_closed a constant energy term, 8πκ. We emphasize that negative values of σ̃ are allowed [60]. Indeed, the minimum of the Hamiltonian H_closed given in eq.(4.39) corresponds, for σ̃ > −6, to a perfectly spherical vesicle (u_{l,m} = 0, ∀ l ≥ 2). The mean-field transition to an oblate shape thus occurs at σ̃ = −6 (non-harmonic terms being then needed to stabilize the system). Standard statistical mechanics yields ⟨u_{l,m}⟩ = 0, ∀ l ≠ 0, and ⟨|u_{l,m}|²⟩ = k_BT/H̃_l, where k_BT is the temperature in energy units. We may now calculate the fluctuation amplitudes. Using eq.(4.42) and the addition theorem for spherical harmonics, we obtain ⟨u²⟩; the correlations of the other derivatives of u are given in appendix F. Note that ⟨u⟩ is negative and that ⟨u⟩ = −⟨u²⟩, which shows how the temperature-dependent fluctuations affect the mean shape.

Cutoff

The large wavenumber cutoff L should be related to the largest wave vector allowed, Λ ≈ a⁻¹, where a is a length comparable to the membrane thickness (i.e., π/Λ of the order of a few times a). With spherical harmonics, however, this is not easy to implement. The requirement that we should recover the planar limit for large values of R will guide us. For a square patch of fluctuating flat membrane with reference area A_p and periodic boundary conditions, the wave vectors are quantized according to q = (2π/√A_p)(n_x, n_y), where n_x and n_y are integers and |q| < Λ. The number of modes is then approximately πΛ²/(2π/√A_p)², and the number of modes per unit area is Λ²/(4π). For a vesicle, the number of modes is Σ_{l=2}^{L}(2l + 1) = (L + 1)² − 4, to be divided by the area 4πR². Asking that the number of degrees of freedom per unit area (per lipid, in some sense) be the same in both cases, we require these two quantities to be equal. Hence we get (L + 1)² − 4 = R²Λ², which gives L = ⌊√(4 + R²Λ²) − 1⌋ (⌊x⌋ is the integer part of x). In the limit R ≫ Λ⁻¹, this gives simply L ≃ ΛR.

Validity of the Gaussian approximation

Since our calculations are limited to O(u²), we should check, in principle, that higher-order terms are negligible. In practice this is not feasible. To check the smallness of u (which is especially critical in the case σ ≤ 0), we propose a necessary, but not sufficient, condition, requiring √⟨u²⟩ ≤ U_max. In the following, we shall take U_max = 5%. The average of eq.(4.35) for closed vesicles up to order two yields the average area; consequently, taking A_p = 4πR² (the area of the sphere of volume V), one obtains the excess area α_closed. Our validity condition (4.53) implies α_closed ≤ α_max (α_max corresponding to α for σ̃ = −4), with α_max shown in Fig. 4.3. One can see that α_max ≈ c₁ + c₂ ln R, where c₁ and c₂ are constants. Indeed, the sum in eq.(4.57) is dominated by the modes l = 2 and l = 3, the rest of the sum being well approximated, for σ̃ = O(1), by an integral proportional to ln(R). Note that if one takes A_p = 4πR²(1 + ⟨u⟩)² (the area associated with the average radius), one obtains an α_max just slightly bigger (see Fig. 4.3).

Evaluation of τ_closed from the projected stress tensor

Imagine replacing the fluctuating vesicle by a shell coinciding with its average shape (see Fig. 4.4(a)).
The effective tension τ is the average force per unit length exchanged tangentially to the shell's surface. Because of the spherical symmetry, τ depends neither on the point (θ, φ) nor on the direction in which it is calculated. Let us thus consider an infinitesimal cut at constant θ, of extension dφ. The component along e_θ of the force exchanged through the cut is on average df = ⟨Σ_θθ⟩ dφ. The length of the cut is on average R(1 + ⟨u⟩) sin θ dφ. Hence,

τ_closed = ⟨Σ_θθ⟩ / [R (1 + ⟨u⟩) sin θ].   (4.58)

Since ⟨Σ_θθ⟩ = σR sin θ + O(u), we obtain an equivalent expression at this order. Note that the terms in u vanish and that, as expected, τ_closed is independent of the point (θ, φ) at which it is calculated. It is interesting to examine τ_closed in the limit of large vesicles. In this case, the sum in eq.(4.61) may be substituted by an integral. The dominant term in eq.(4.62) correctly matches the difference τ − σ for flat membranes given in chapter 2, eq.(2.10). We have also calculated the normal and orthogonal components of the tension. Both vanish: ⟨Σ_rθ⟩ = 0 and ⟨Σ_φθ⟩ = 0. While the latter result is obvious on symmetry grounds, the former one is interesting, implying that the shell mentioned above can indeed be considered as a purely tense surface. This would probably not hold for a vesicle with a non-spherical average shape. As a consequence, the Laplace law can be used without curvature corrections for a fluctuating quasi-spherical vesicle, provided that one uses τ instead of σ. Indeed, this could be expected from renormalization arguments, since the Laplace law is exact (despite the curvature energy) for a perfectly spherical membrane [74].

Poked vesicles

The route to obtain τ_poked is the same as for a closed vesicle, with some minor changes. We recall that the reference sphere for poked vesicles is the sphere where ⟨u⟩ = 0, since there is no constraint on the volume. Accordingly, instead of eq.(4.37), we have simply u_{0,0} = 0. The Hamiltonian in the Gaussian approximation becomes the same as before with H̃_l replaced by H̃′_l = H̃_l + 4κσ̃ [72]. The correlations given in section 4.3.1 and in appendix F remain correct, provided one replaces H̃_l by H̃_l + 4κσ̃. Note that ⟨u⟩ ≡ 0 here and that, differently from the case of closed vesicles, one must have σ̃ ∈ [−3, ∞[ in order to ensure that the correlations are positive. The discussion about the cutoff of section 4.3.1 remains valid for poked vesicles, and the validity condition for the Gaussian approximation given in eq.(4.51) changes accordingly. With the same U_max = 5% as before, we have σ̃_min ≈ −2 for poked vesicles. The average area is given by eq.(4.56) with u_{0,0} = 0, yielding

α_poked = (k_BT/8πκ) Σ_{l=2}^{L} (l² + l + 2)(2l + 1) / [(l − 1)(l + 2)(l² + l + σ̃) + 4σ̃].

The excess area is somewhat larger than in the case of closed vesicles, but the general behavior is the same. Eq.(4.60), which gives τ_closed by the stress tensor method, is valid whatever the form of H̃_l, since τ bears no term in u. Hence, we just need to replace H̃_l by H̃_l + 4κσ̃, which yields eq.(4.67). In the limit of large vesicles, we recover again the result for flat membranes.

Discussion on τ for closed and poked vesicles

We show in Fig. 4.6 the behavior of σ − τ as a function of the Lagrange multiplier σ for closed and poked vesicles, as well as for the limiting case of planar membranes. First of all, although eqs.(4.67) and (4.61) differ mathematically, it turns out that their difference as a function of σ is numerically irrelevant (see Fig. 4.6). Indeed, the extra term 4σ̃/[(l − 1)(l + 2)] in the denominator of eq.(4.67) is only important for small l's, while the sum is dominated by large l's.
This representation is however not very useful, since σ is not a control parameter. The most physical representation is shown in Fig. 4.7, where we see the behavior of τ as a function of the excess area α. We also show the limiting case R → ∞. In this case, the relation between α and σ is analytical and given in eq.(2.71). In the limit of large membranes, we can invert it; applying this result to eq.(4.1), we obtain an analytical expression for τ as a function of the area excess. There are several salient points:

1. Even though τ_closed and τ_poked are almost indistinguishable as functions of σ, α_closed and α_poked present different dependences on σ, as shown in Fig. 4.8, especially for small values of R. This indicates that the volume constraint affects mainly the excess area, and explains the differences shown in Fig. 4.7.

2. The results for τ deviate from the flat limit (R → ∞) essentially for R ≤ 1 µm, for both closed and poked vesicles (see Fig. 4.7). Consequently, for GUVs, one is allowed to simply use the relation given in eq.(4.1). Moreover, for small tensions, it is justified to assume τ ≃ σ − σ_0, justifying the assumptions made in [87] and presented in section 2.5.

3. In Fig. 4.9, we note that the biggest negative tension τ_min that GUVs (with R ≥ 1 µm) may sustain coincides with the biggest negative tension that large planar membranes sustain. Depending on the uncertainty on the value of the cutoff, τ_min may be of order −10⁻⁶ N/m or −10⁻⁵ N/m. Let us recall the Young–Laplace equation,

P_inner − P_outer = τ (1/R₁ + 1/R₂),

where P_inner/outer are, respectively, the inner and the outer pressures of the vesicle, and R₁ and R₂ are the two principal radii (R₁ = R₂ in the spherical case). As our analysis shows that τ may indeed become negative, this implies that vesicles could sustain an inner pressure lower than the outer pressure. For liquid drops, this situation is impossible, since the surface tension is a true material constant, always positive. The possibility of sustaining negative tensions, or negative pressure differences, might be experimentally investigated: i) by controlling the outer osmotic pressure, in the case of small vesicles, or ii) by poking a giant vesicle with a micropipette to which it would adhere and gently decreasing its inner pressure.

4. τ has a plateau at large values of α for both closed and poked vesicles, which probably corresponds to the actual transition to oblate shapes: when τ reaches a critical value τ_c < 0, the excess area rises dramatically. For small closed vesicles we find roughly τ_c R²/κ ≈ −5, while for giant closed vesicles it is given by τ_c ≃ −k_BT Λ²/(8π), i.e., below the mean-field threshold (see the discussion after eq.(4.41)). The high-symmetry phase (spherical vesicle) is thus stabilized by its entropic fluctuations, as one might have expected. Experimentally, this transition might be tested by controlling the pressure outside the vesicle. Indeed, applying the Young–Laplace formula given in eq.(4.71) for a vesicle of radius R at τ_c, we find that the critical pressure difference yielding the shape transition is ΔP_c = 2τ_c/R. Numerically, for a closed spherical vesicle with κ = 25 k_BT, T = 300 K and Λ = 1/(5 nm), we find ΔP_c = −8 × 10³ Pa and ΔP_c = −25 Pa for vesicles with radius 50 nm and 5 µm, respectively.

5. There exists a well-defined excess area α_0 corresponding to a vanishing lateral tension, τ = 0 (see Fig. 4.10).
This corresponds to the case where the pressure difference between the inner and the outer media vanishes. Its value is very much radius-dependent for R ≤ 1 µm, but one recovers for R ≥ 2 µm the flat membrane limit given in eq.(2.76).

Derivation of τ using the free-energy

For a flat membrane, one may also obtain τ by differentiating the free-energy with respect to the projected area A_p, as we have shown in section 2.2 [72], [2], [3], but there are two pitfalls. One must: i) take the thermodynamic limit A_p → ∞ only after the differentiation, and ii) introduce a variation of the cutoff in order that the total number of modes remains constant during the differentiation, as discussed in [3]. Let us investigate the free-energy method in the case of quasi-spherical vesicles. The free-energy is given by F = −k_BT ln Z, the partition function Z being an integral running over all the configurations of the vesicle. At the Gaussian level and for closed vesicles, H is given in terms of spherical harmonics by eq.(4.39), and since r = R e_r + R u(θ, φ) e_r, we may write the measure as a product over the modes R du^R_{l,m} and R du^I_{l,m} (in agreement with ref. [60]), where the superscripts R and I signify real and imaginary part, respectively. This measure corresponds to the so-called normal gauge, which is known to be correct for small fluctuations [60]. We note that the radius R of the reference sphere appears explicitly and that, for each value of l, only half of the allowed values of m have to be considered, as r is real. Performing the Gaussian integrals, one obtains the free-energy (4.74). In order to obtain τ_closed, we must differentiate F with respect to the vesicle's "projected area" A_p. Which one, however? The area A_V = 4πR² of the reference sphere (i.e., of the sphere having the same volume as the vesicle), or the area of the vesicle's average shape, defined as A_m = 4πR²(1 + ⟨u⟩)²? It will turn out that the former choice is the correct one. In a sense, this is natural, because it corresponds to our parametrization. However, it is not that obvious, because the definition of τ_closed in eq.(4.58) involves the area of the average vesicle shape. Let us thus pick A_p = A_V ≡ 4πR². It is worth noticing that H̃_l depends on A_p only through σ̃ = σR²/κ, yielding

∂H̃_l/∂R² = (l − 1)(l + 2) σ.   (4.75)

With this choice, we obtain a result identical to the one obtained from the stress tensor approach, eq.(4.61). How about the pitfalls mentioned above? First, we did not take the thermodynamic limit before differentiating. Actually, this is not problematic, since the quantization of the modes does not involve the size of the system, contrary to the case of planar membranes. Second, we have kept L (hence the number of modes) constant during the differentiation, in agreement with the fact that L = ⌊√(4 + R²Λ²) − 1⌋ is constant for a mathematically infinitesimal change of R. We may obtain a more intrinsic expression for τ_closed: with A = A_p(1 + α) and N_modes = Σ_{l=2}^{L}(2l + 1), we may rewrite τ_closed accordingly. The quickest way to obtain this result is to keep separate, when differentiating with respect to R², the two terms coming from ln(H̃_l) and from ln(1/R²) in eq.(4.74). The interpretation of this equation is not straightforward, because (1/2)k_BT is the internal energy per mode (not the free-energy per mode). Note that the same form for τ is also valid in the planar case, as shown in [3]. In addition, let us see what happens if we take A_p = A_m ≡ 4πR²(1 + ⟨u⟩)², where we recall that A_V = 4πR².
Using ⟨u⟩ = −⟨u²⟩, one obtains (4.80)

Clearly, differentiating with respect to A_m yields supplementary terms of order ⟨u²⟩. The result is thus wrong, in the sense that it differs from the result obtained by the stress tensor method. Let us now re-derive eq.(4.67) by differentiating the free-energy. In the case of poked vesicles, it is given by the same expression as eq.(4.74) with H_l replaced by H′_l:

It turns out, again, that τ_poked = ∂F′/∂(4πR²) exactly. This result is satisfying, but at the same time it shows how slippery the free-energy approach can be: differentiating with respect to the area of the average vesicle is correct in the case of poked vesicles but not in the case of closed vesicles. The stress tensor method is thus much safer. Our expression for τ_poked differs from that obtained in ref. [72], where the authors also considered a quasi-spherical membrane without volume constraint. In particular, the mechanical tension obtained in that reference cannot take negative values. We believe that the discrepancy between the two results comes from the omission in ref. [72] of the factors R within the measure. Indeed, the factor 1/R² in the logarithm of our eq.(4.81) is absent in the corresponding expression (A.9) of ref. [72].

In a nutshell

In this chapter, we have compared the mechanical tension τ one applies, for instance, by aspirating a vesicle with a micropipette, with the tension σ theoretically introduced in the Hamiltonian to fix the membrane's area, in the case of quasi-spherical vesicles. We have studied both the case of usual closed vesicles and the case of poked vesicles, free to exchange liquid with the outer medium. We conclude that in both cases, for GUVs, the relation between τ and σ is very well approximated by the relation obtained in the case of planar membranes, given in eq.(4.1). Accordingly, for GUVs under small tensions, we can simply assume τ ≃ σ − σ_0, as in the case of planar membranes. Moreover, in both cases, we predict the possibility of an inner pressure smaller than the outer one, a situation impossible in the case of liquid drops. Comparing the behavior of closed and poked vesicles, we expect their excess areas to differ for small vesicles. Finally, we have shown that the concept of projected area for vesicles is not clear. Thus, we conclude that it is much safer to derive τ by averaging the projected stress tensor.

In this and in the following chapter, we shall study the membrane nanotubes presented in section 1.4.4. The main results of both chapters were obtained with Jean-Baptiste Fournier and were published in [6]. As we have seen, these tubes are very thin, with a radius ranging from dozens up to hundreds of nanometers, while their length may reach micrometers. They are very common in living cells and seem to play an important role in cell transport and communication [92]. In the laboratory, nanotubes can be extracted by applying very localized forces to membranes. In Fig. 1.36 of chapter 1, we presented a brief summary of the most popular methods used to extract nanotubes. Here we are interested in the force needed to extract (and hold) these tubes, which can be precisely measured in experiments using optical tweezers. The experimental procedure in this case consists in attaching a small glass bead to a vesicle held by a micropipette [96], [100].
A laser is pointed at the glass bead, which is thus attracted to the center of the beam with a force that depends linearly on the distance between the bead and the center of the beam. In experiments, one displaces the position of the center of the beam. Remark that usually, as in the case of Fig. 5.1, one measures only the force along the axis of the tube, which is by symmetry the only component with non-vanishing average. Note also that nanotubes are not stable: if one stops applying the point force, the membrane will evolve to a less curved configuration and the tube will be reabsorbed into the vesicle. Earlier theoretical works studied both the formation mechanism of nanotubes [102], [112] and their (dynamical) stability [113], [114], [115]. As nanotubes are very thin compared to the GUVs from which they are usually pulled, it is usually assumed that the vesicle acts as a lipid reservoir for the tube. In this case, as discussed in refs. [102] and [112], one can neglect the pressure difference across the tube. The effective energy H′ is thus simply given by the Helfrich Hamiltonian (eq.(1.15)) plus the work of the force that holds the tube. For a symmetrical membrane, i.e., a membrane whose spontaneous curvature vanishes, the energy of a perfectly cylindrical tube with radius R and length L is [102]

H′ = (κ/(2R²) + σ) 2πRL − fL ,

where κ is the bending rigidity and σ is the Lagrange multiplier associated with the microscopic area of the membrane, which, we recall, is not directly measurable. The energy coming from the Gaussian curvature is omitted, since we do not consider topological changes. Minimizing this energy with respect to R and L, one obtains, respectively,

R_0 = √(κ/(2σ))  and  f_0 = 2π√(2κσ) .

These values correspond to the mean-field values of the radius and of the force needed to hold a tube, in the sense that thermal fluctuations relative to the cylindrical shape are totally neglected. At first glance, one may think that neglecting the effects of thermal fluctuations is largely justified, since it is a reasonable assumption for planar membranes: as the correlation length is ∝ σ^(−1/2), fluctuations are quickly suppressed as the tension increases [112]. Recently, however, it has been shown that the tubular geometry implies a substantially different behavior: tubes should present very strong shape fluctuations due to a one-dimensional set of extremely soft modes (Goldstone modes, see ref. [7]). Accordingly, it is natural to ask how the average force along the tube's axis, ⟨f⟩, taking its fluctuations into account, differs from the mean-field value f_0. It is our aim in this chapter to settle this question. To do so, we follow roughly the same steps as in the last chapter, starting by introducing the parametrization and the energy in section 5.1. Afterwards, we shall derive the projected stress tensor for the quasi-cylindrical geometry in section 5.2. As this calculation is totally new, we propose some verifications in the same section. In section 5.3, we average the stress tensor and evaluate ⟨f⟩. At last, in section 5.4, we compare ⟨f⟩ with f_0 and discuss in which cases one is allowed to assume ⟨f⟩ ≃ f_0. There we also discuss experimental consequences and re-interpret the curve shown in Fig. 1.37.

Parametrization and Hamiltonian

We shall restrict our attention to deformed tubes weakly departing from the cylinder corresponding to the mean-field approximation, whose radius, as we have shown above, is given by R_0 = √(κ/(2σ)). Let us consider a cylindrical coordinate system (O; r, θ, z) aligned with the tube (see Fig. 5.3).
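Since the mean-field results R_0 and f_0 follow from a one-line minimization, a short symbolic check may be helpful. The sketch below (a hedged illustration, not the thesis's code) minimizes the cylindrical tube energy written above:

```python
# Sketch: recover the mean-field tube radius and holding force by minimizing
# the tube energy H = (kappa/(2 R^2) + sigma) * 2*pi*R*L - f*L
# with respect to R and L (symbols are generic, not tied to any data set).
import sympy as sp

kappa, sigma, R, L, f = sp.symbols("kappa sigma R L f", positive=True)
H = (kappa / (2 * R**2) + sigma) * 2 * sp.pi * R * L - f * L

R0 = sp.solve(sp.diff(H, R), R)[0]               # dH/dR = 0 -> R0 = sqrt(kappa/(2*sigma))
f0 = sp.solve(sp.diff(H, L).subs(R, R0), f)[0]   # dH/dL = 0 at R0 -> f0 = 2*pi*sqrt(2*kappa*sigma)

print(sp.simplify(R0))   # sqrt(2)*sqrt(kappa/sigma)/2, i.e. sqrt(kappa/(2*sigma))
print(sp.simplify(f0))   # 2*sqrt(2)*pi*sqrt(kappa*sigma)
```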
The shape of the fluctuating tube is parametrized by

r(ρ, z) = R_0 [1 + u(ρ, z)] e_r + z e_z ,

with u ≪ 1 and z ∈ [0, L], where L is the total length of the tube. Note that instead of θ, we have used ρ = R_0 θ ∈ [0, 2πR_0], in order to have u as a function of two variables with the same dimension. We shall consider the situation of a relatively short tubule extracted from a giant vesicle of radius R_ves ≫ R_0 (or from a vesicle connected to a lipid reservoir), so that each monolayer of the tubule is actually exchanging material with a very large reservoir and the standard Helfrich model (see section 1.3) is sufficient for the calculation of equilibrium and statistical properties [102], [112]. As we do not consider topology changes, the Hamiltonian is simply given by

where S is the tube's surface. Note that taking into account the area-difference elasticity, as is done in the ADE model, is essential in the situation where very long tubules are extracted from small vesicles [116], or when studying the formation of small tethered vesicles under the action of an axial load [117]. Remark also that one should usually account for the pressure difference across the membrane by adding a term −∆P V to the Hamiltonian, where V is the volume of the tube and ∆P = P_in − P_out, with P_in (resp. P_out) the pressure inside (resp. outside) the vesicle from which the tube is extracted. From the Young–Laplace equation (eq.(1.35)), ∆P relates to the vesicle's radius and tension through ∆P = 2τ/R_ves. Let us compare the contribution of this term with the contribution coming from the term proportional to σ, for a tube of radius R ≪ R_ves and length L:

|∆P| V / (σ A) = (2τ/R_ves)(πR²L) / (σ 2πRL) = (τ/σ)(R/R_ves) ≪ 1 ,

since we are extracting tubes far smaller than the vesicle and since τ < σ. It is thus justified to neglect the pressure difference across the tubule [102], [112]. Differential geometry yields the general dA given in eq.(4.4) and H given in eq.(4.9). For the case of quasi-cylindrical geometry, we obtain, up to order two in u,

Derivation of the stress tensor for a cylindrical geometry

In analogy with the planar and quasi-spherical cases, the projected stress tensor relates linearly the force that region 1 exerts on region 2 to the length of the projected cut dℓ through

where m = m_ρ e_ρ + m_z e_z is the normal to the cut on the reference cylinder. In order to derive Σ, we consider at each point of the membrane an arbitrary infinitesimal displacement δa = δa_ρ e_ρ + δa_z e_z + δa_r e_r corresponding to a variation (δρ, δz) on the projected cylinder (see Fig. 5.5). Accordingly, the membrane's shape becomes ũ(ρ, z) = u(ρ, z) + δu(ρ, z). As one can see in Fig. 5.5(b), the new edge's position satisfies

R_0 [1 + ũ(ρ + δρ, z + δz)] e_r(ρ + δρ) + (z + δz) e_z = R_0 [1 + u(ρ, z)] e_r(ρ) + z e_z + δa ,

which implies

δa_ρ = δρ (1 + u) , (5.11)
δa_z = δz , (5.12)
δa_r = R_0 (δu + u_z δz + u_ρ δρ) . (5.13)

We now impose that δa is performed at fixed orientation of the membrane's normal: in this way, only the boundary forces work, not the torques. The normal to the membrane is n = t_ρ × t_z / |t_ρ × t_z|, with t_ρ = ∂_ρ r = R_0 u_ρ e_r + (1 + u) e_ρ and t_z = ∂_z r = R_0 u_z e_r + e_z. This gives n = [1 − R_0²(u_ρ² + u_z²)/2] e_r − R_0 u_ρ (1 − u) e_ρ − R_0 u_z e_z + O(u³). The normal variation is given by δn = ñ(ρ + δρ, z + δz) − n(ρ, z), ñ being the analog of n for ũ instead of u. In order to impose δn = 0, we require that δn · t_ρ = 0 and δn · t_z = 0, yielding, up to order u²,

δu_z = (u_z u_ρ − u_ρz) δρ − u_zz δz , (5.14)
δu_ρ = [R_0⁻² (1 + u) + 2u_ρ² − u_ρρ] δρ + (u_z u_ρ − u_ρz) δz + u_ρ δu . (5.15)
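The order of magnitude of the neglected pressure term can be checked with the following sketch (R, R_ves and τ/σ are assumed illustrative values, with τ/σ taken at its upper bound):

```python
# Rough numerical check (assumed illustrative values): the pressure term
# -dP*V is negligible compared to the sigma term for a thin tube pulled
# from a giant vesicle, since |dP*V|/(sigma*A) = (tau/sigma) * (R/R_ves).
R = 50e-9              # tube radius (m), assumed
R_ves = 10e-6          # vesicle radius (m), assumed
tau_over_sigma = 1.0   # upper bound, since tau < sigma

ratio = tau_over_sigma * R / R_ves
print(f"|dP*V|/(sigma*A) <= {ratio:.1e}")   # ~5e-3, so the term is negligible
```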
To obtain Σ, we study the variation of the energy after the displacement δa. On the one hand, in terms of h, one has

for the bulk of the membrane and

for the boundary energy variation. On the other hand, the work of the forces at the boundary is given by

By comparing the last two equations, we obtain Σ_zi, Σ_ri and Σ_ρi (i being either ρ or z):

Note that due to the presence of terms such as R_0⁻² ∂h/∂u_iρ (see, e.g., the expression of Σ_ρi), it is in general necessary to know h at O(u³) in order to get the stress components at O(u²). This means adding the cubic terms to the h given in eq.(5.9) before evaluating the derivatives in eqs.(5.24)–(5.26). For Σ_zi, however, one may check that the O(u²) terms in h are sufficient. Explicitly, we obtain, up to order two in u,

Note that we may easily recover from Σ_zz the mean-field force needed to hold a tubule. Indeed, u = 0 yields Σ_zz = 2σ [4], i.e., f_0 = 2πR_0 × 2σ = 2π√(2κσ). As we have discussed in section 1.4.4, this result is very interesting: if the mechanical tension were due only to the tension σ, we would expect f_0 = σ × 2πR_0. In reality, the curvature yields a supplementary term κ/(2R_0²) to the mechanical tension τ, which explains the factor of two in our result. In the next two sections we shall propose some tests to verify the correctness of these equations.

Verification: stress tensor in the tangent frame

Here we propose a first check by showing that from eqs.(5.28)–(5.33), one re-obtains the stress tensor in the local frame given in eq.(1.50). We consider a general membrane, not necessarily tubular. At a general point P of the membrane, there are two principal curvatures C_X and C_Y whose principal directions e_X and e_Y are orthogonal. We place our reference cylinder tangent to the membrane at P, with its axis direction e_z parallel to e_Y and e_ρ parallel to e_X, as shown in Fig. 5.6. By geometry (see Fig. 5.7), one determines the shape of the membrane near P in the cylindrical coordinate system, yielding

Comparing eq.(5.5) and eq.(5.34), we identify

where C = C_X + C_Y is the total curvature. Noting that we have the equivalences X ≡ ρ, Y ≡ z and Z ≡ r near P, these equations are identical to eq.(1.50).

Verification: force between two rings constraining the tube

In order to control the validity of the formula giving Σ_zz, which will be the only component used in the next sections, let us calculate the force acting between two "undulating rings" separated by a distance L (see Fig. 5.8) by differentiating the free-energy, and compare it to the force obtained using the projected stress tensor. The rings impose boundary conditions of the form u(ρ, ±L/2) = U_0 cos(nρ/R_0) and ∂_z u(ρ, ±L/2) = 0, for n > 1. By symmetry, we assume u(ρ, z) = U(z) cos(nρ/R_0). Thus the distortion energy (5.8)–(5.9) between the rings takes the form:

The equilibrium shape is given by the Euler–Lagrange equation given in eq.(5.21), yielding

The solution satisfying the boundary conditions is

where ℓ = L/(2R_0), A(ℓ) = n_+ sinh(n_+ ℓ) cosh(n_− ℓ) − n_− sinh(n_− ℓ) cosh(n_+ ℓ) and n_± = (n² ± √(2n² − 1))^(1/2). To evaluate the balance of the forces acting on the first ring (which, by symmetry, equals that acting on the second one), we have to consider the force exerted by region A, f_A, and the force exerted by region B, f_B. Each region is in equilibrium, implying that the integral of Σ_zz over ρ is constant in each of them. We may thus consider an arbitrary projected path with m = e_z in each region to evaluate the forces. For region A, we will consider a path very far from the ring, so that u → 0.
We have

(5.46)

In the last passage, we have used the fact that Σ_zz → 2σ, Σ_ρz → 0 and Σ_rz → 0 as u → 0. For region B, we consider a path at z = 0. We obtain

The forces in the other directions, e_ρ and e_r, vanish after integration, as expected given the symmetry of the system. The resulting force is then

Using eq.(5.28) to calculate Σ_zz, we obtain

where B = πR_0 σ U_0² √(2n² − 1). Intuitively, one should expect the rings to collapse in order to minimize the tube's deformation. Indeed, the resulting force between the rings is always attractive. In order to check this result, we propose to re-derive f(L) using a different method that does not involve the stress tensor. First, we evaluate the energy stored between the rings, given in eq.(5.43). As the tube is in equilibrium, its shape obeys eq.(5.44). Applying this equation to the first term of eq.(5.43), we obtain

Integrating by parts the term in U U′′′′ and recalling that U(z) is an even function, that U(±L/2) = U_0 and that U′(±L/2) = 0, we obtain

where we have used the solution given in eq.(5.45) to obtain the last passage. From the stored energy, the resulting force between the rings is given by

After a careful calculation, we recover the result obtained from the stress tensor (eq.(5.28)), testifying to the correctness of the component Σ_zz of the stress tensor.

Evaluation of the average force

In order to hold a nanotube, one must apply a force exactly equivalent to the force that the rest of the fluctuating tubule exerts. Thus, considering a section of the tube with m = e_z, the average force needed to hold a fluctuating tube is

⟨f⟩ = ∫ (⟨Σ_rz⟩ e_r + ⟨Σ_ρz⟩ e_ρ + ⟨Σ_zz⟩ e_z) dρ ,

where we recall that f_0 = 2π√(2κσ) = 2πκ/R_0 is the mean-field force, and f_fl is the correction due to fluctuations. Note that on average there is no force perpendicular to the tube's axis, as expected for symmetry reasons.

Correlation function

Let us consider a tubule of length L with periodic boundary conditions, for simplicity. The fluctuations of the tube's shape may be decomposed into Fourier modes:

where m = 0, ±1, …, ±M and q̄ = 2πnR_0/L, with n = 0, ±1, …, ±N. As the modes with m = ±1 and q̄ = 0 correspond to pure translations, they will be omitted in the following. The cutoffs M and q̄_max (or N) are related to the high wave-vector cutoff Λ through M = ΛR_0 and q̄_max = 2πNR_0/L = ΛR_0. As in the previous chapters, we assume that π/Λ is somewhat larger than the membrane thickness a ≈ 5 nm and we take Λ ≈ 1/a. Note that there is an uncertainty on Λ of a factor of order unity. In terms of the Fourier modes, the Hamiltonian given in eq.(5.8) becomes [7], [118]

H ≃ (κ/2) Σ_{m,q̄} [(m² − 1)² + q̄² (q̄² + 2m²)] |u_{m,q̄}|² . (5.55)

Using the equipartition of energy, we have

where δ stands for the Kronecker delta. Hence, with u ≡ u(ρ, z) and u′ ≡ u(ρ′, z′), the correlation function of the tubule's thermal fluctuations is given by

⟨u u′⟩ = Σ_m ∫ (…) / [(m² − 1)² + q̄² (q̄² + 2m²)] dq̄ . (5.58)

Here, as in the previous sections, k_B T is the thermal energy and the brackets indicate the thermal average. In the last passage, we have transformed the sum over n into an integral, which is legitimate for tubes longer than a few times R_0; both the sum and the integral run from −ΛR_0 to ΛR_0. Using this correlation, one can easily derive other correlations involving derivatives with respect to ρ or z. Let us see an example in detail: first, let us evaluate an average without using the correlation function,

where we have used eq.(5.56) to obtain eq.(5.59).
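The equal-z correlation implied by eq.(5.55) and equipartition can be explored numerically. In the normalized ratio C(ρ)/C(0) the overall k_B T/κ prefactor (whose exact normalization depends on the Fourier convention) drops out; ΛR_0 = 10 and L = 50 R_0 are assumed illustrative values:

```python
import numpy as np

# Normalized equal-z shape correlation of the tube, summing the Gaussian
# modes of eq.(5.55) with equipartition weights 1/[(m^2-1)^2 + q^2(q^2+2m^2)].
M = 10                                   # assumed cutoff: Lambda * R_0
L_over_R0 = 50.0                         # assumed tube length in units of R_0
N = int(M * L_over_R0 / (2 * np.pi))     # so that q_max = 2*pi*N*R_0/L = Lambda*R_0

def C(drho):                             # drho = (rho - rho')/R_0, same z
    total = 0.0
    for m in range(-M, M + 1):
        for n in range(-N, N + 1):
            if abs(m) == 1 and n == 0:   # pure translations are omitted
                continue
            q = 2 * np.pi * n / L_over_R0
            denom = (m * m - 1.0)**2 + q * q * (q * q + 2 * m * m)
            total += np.cos(m * drho) / denom
    return total

c0 = C(0.0)
for x in (0.5, 1.0, 2.0):
    print(f"C({x} R_0)/C(0) = {C(x)/c0:+.3f}")
```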
One can easily verify that this result can be obtained simply from eq.(5.58) through

where |_(ρ,z) indicates that the derivative is taken at the point (ρ, z). This method can be generalized to the calculation of similar averages.

Average force

Calculating the average of each term of eq.(5.28), we obtain

The average force is thus

with, for tubes whose length is larger than R_0,

It turns out that this approximation is excellent in the regimes of interest (see Fig. 5.9). It follows that

and consequently

Equivalently, using the definition of R_0 given in eq.(5.4), we obtain in terms of σ

Hence, we find that the actual force ⟨f⟩ is significantly smaller than the mean-field approximation f_0, the correction being more important when R_0 is large (Fig. 5.9). Note, however, the strong influence of the uncertainty on Λ.

Discussion on the validity of our results

Let us comment on the validity of our results. First, we should recall that eq.(5.65) actually corresponds to the first term in a power series expansion in k_B T/κ, the higher-order terms arising from the terms beyond O(u²) within the expressions of h and Σ_zz. The fact that k_B T/κ ≪ 1 for biological membranes is good for the convergence of the series, but R_0 should not become too large. Obviously, ⟨f⟩ must be positive, implying the upper bound condition

with a ≡ Λ⁻¹ ≈ 5 nm and κ ≃ 50 k_B T. This condition, essentially due to the existence of an upper wave-vector cutoff, is normally verified (see, e.g., ref. [116]). At the same time, we must require ⟨u²⟩ ≪ 1 for the harmonic approximation to be valid. As shown in ref. [4], eq.(5.58) is well approximated by

Requiring, e.g., ⟨u²⟩ < 0.2 corresponds to the condition L/R_0 < πκ/(k_B T), i.e., L/R_0 ≲ 160 for κ ≃ 50 k_B T. When R_0 ≤ 50 nm, this corresponds to L < 10 µm. These ranges, together with the requirement that the vesicle from which the tubule is extracted should be very large, define the conditions of validity of our analysis. To conclude, let us comment on the influence of the boundary conditions. Due to the force conservation principle, ⟨f⟩ cannot depend on the position at which it is measured. Therefore, the boundary conditions are not important for the average force and it is justified to choose periodic boundary conditions, as we have done here.

Discussion on experiments

In this chapter, we have analyzed the influence of thermal fluctuations on the force exerted by a nanotube pulled from a membrane with bending rigidity κ and internal tension σ. Two other parameters play a role: the thermal energy k_B T and the upper wave-vector cutoff Λ ≈ 1/a (up to a prefactor of order unity), where a is the membrane thickness. While κ, Λ and k_B T are rather fixed, σ, the in-plane stress, may span several decades, as it depends on the way the membrane is tangentially stressed. As we have seen previously, the problem is that σ itself is not exactly a control parameter. Instead, one usually controls the effective mechanical tension τ. Let us examine a typical experiment involving nanotubes, as presented in section 1.4.4. To a giant vesicle, held by a micropipette, one attaches a glass or magnetic bead. This bead is subsequently displaced, forming a tube, while the vesicle is held at the same position. By measuring the pressure difference between the interior of the micropipette and the aqueous solution, the tension τ can be obtained using eq.(1.35). Let us suppose one is interested in studying the force f needed to extract a tube as a function of the membrane's tension.
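As a worked instance of the validity conditions just stated (using κ ≃ 50 k_B T from the text and an assumed R_0 = 50 nm):

```python
# Worked check of the validity conditions quoted above (kappa ~ 50 kBT and
# a = 1/Lambda ~ 5 nm are taken from the text; R_0 = 50 nm is assumed).
import math

kappa_over_kBT = 50.0
R0 = 50e-9                                # tube radius (m), assumed
L_max = math.pi * kappa_over_kBT * R0     # from <u^2> < 0.2: L/R_0 < pi*kappa/(kB*T)
print(f"L/R_0 < {math.pi * kappa_over_kBT:.0f}  ->  L < {L_max*1e6:.0f} microns")
```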
As we have discussed in section 1.4.4, two assumptions are usually made:

1. firstly, one considers σ ≈ τ;

2. secondly, one neglects the thermal fluctuations of the tube, implying that the force needed to extract a tube is simply f_0.

Thus, under these assumptions, the force needed to extract a tube, which we will call f′_0, is given by

f′_0 = 2π√(2κτ) .

In Fig. 5.10, we have plotted this relation, which is simply linear in log scale (red line). In experiments, as one can see for instance in Fig. 1.37 in section 1.4.4, this linear behavior indeed seems to be verified; consequently, up to now, these two assumptions have been held as justified [96], [98], [119]. In chapter 2, however, we have seen that τ is considerably different from σ, since τ has additional contributions arising from the curvature strains excited by the thermal undulations. For a planar membrane, we have in general τ = σ − σ_0, a relation still valid for large vesicles (see chapter 4). Taking this difference into account, but still neglecting thermal fluctuations, the force needed to extract a tube should be

2π√(2κ(τ + σ_0)) .

This curve is shown in blue in Fig. 5.10 and seems completely incompatible with the linear trend of the experimental data. Finally, we have seen in the previous section that the contribution of the thermal fluctuations to the force may be important. Taking into account the difference between τ and σ as well as the thermal fluctuations, we obtain from eq.(5.66)

This curve is plotted in green in Fig. 5.10. We observe that thermal fluctuations are indeed important: the average force ⟨f⟩ differs significantly from the mean-field approximation f_0 = 2π√(2κσ). The relative error (f_0 − ⟨f⟩)/⟨f⟩ is of order 5% at τ = 10⁻⁴ J/m², of 30% at τ = 10⁻⁵ J/m², and reaches 100% at τ = 10⁻⁶ J/m² (see Fig. 5.10). Interestingly, the relative error (f′_0 − ⟨f⟩)/⟨f⟩ is much smaller than (f_0 − ⟨f⟩)/⟨f⟩ (see Fig. 5.10). Indeed, it is less than 1% for τ > 10⁻⁵ J/m², and it becomes larger than 20% only for τ < 10⁻⁶ J/m². Hence f′_0 appears indeed to be a good approximation of the average force: for τ > 10⁻⁶ J/m², one should expect a linear behavior. This happens, however, by a happy coincidence, since one makes two unjustified assumptions. Let us discuss what could be done experimentally in order to test these predictions. The difference between ⟨f⟩ and f′_0 will be difficult to evidence, because one should detect a difference of the order of a few pN while measuring precisely the tension in the range τ < 10⁻⁶ J/m². It should be easier to detect the difference between ⟨f⟩ and f_0 = 2π√(2κσ), since it is already significant at τ ≃ 10⁻⁵ J/m². This could be done if the tension r were measured simultaneously from the thermal fluctuation spectrum of the vesicle from which the tubule is drawn, assuming then that r ≈ σ. It would also be interesting to measure R_0 directly as a function of τ, in order to check the difference between R_0 and the usually assumed relation √(κ/(2τ)) (we expect √(κ/[2(τ + σ_0)])). This would require a specific experiment, since R_0 is normally below optical resolution.
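The difference between the two "no-fluctuation" curves can be made concrete with the sketch below, which compares f′_0 = 2π√(2κτ) with the corrected-tension version 2π√(2κ(τ + σ_0)); κ, T and Λ are assumed illustrative values:

```python
# Sketch: compare the naive force f0' = 2*pi*sqrt(2*kappa*tau) with the
# corrected-tension version 2*pi*sqrt(2*kappa*(tau + sigma0)), where
# sigma0 = kB*T*Lambda^2/(8*pi). kappa, T and Lambda are assumed values.
import math

kB, T = 1.380649e-23, 300.0
kappa = 25 * kB * T            # assumed bending rigidity (J)
Lam = 1 / 5e-9                 # cutoff (1/m), a ~ 5 nm
sigma0 = kB * T * Lam**2 / (8 * math.pi)

for tau in (1e-4, 1e-5, 1e-6):
    f_naive = 2 * math.pi * math.sqrt(2 * kappa * tau)
    f_corr = 2 * math.pi * math.sqrt(2 * kappa * (tau + sigma0))
    print(f"tau={tau:.0e} J/m^2: f0'={f_naive*1e12:5.1f} pN, "
          f"with sigma0: {f_corr*1e12:5.1f} pN")
```

As expected from Fig. 5.10, the two estimates agree at high tension but diverge strongly for τ ≲ σ_0.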
In a nutshell

The mean-field force needed to extract a membrane nanotube, in terms of the membrane rigidity κ and its microscopic tension σ, is well known and given by

f_0 = 2π√(2κσ) .

Assuming that thermal fluctuations are negligible and that the mechanical tension τ, coming from the flattening of the membrane's fluctuations, is a good approximation for σ, this relation seemed to have been successfully verified experimentally. Recently, however, it was shown that these nanotubes, due to their geometry, present very soft modes and should thus have strong fluctuations, implying that the actual force ⟨f⟩ needed to extract a tube should be somewhat different from f_0. To evaluate this difference, we have derived the stress tensor for the quasi-cylindrical geometry and averaged it appropriately, yielding

where

Numerically, the difference between ⟨f⟩ and f_0 is non-negligible. The fact that it has not been noticed previously comes from a happy coincidence: the assumption σ ≈ τ seems to make up for the neglected thermal fluctuations.

Fluctuation of the force needed to extract a membrane nanotube

As discussed in section 1.4.4, nanotubes are extracted from vesicles by applying point forces. In the last chapter, we described a popular method for pulling nanotubes, which consists in attaching a glass bead to the membrane and displacing the bead with a laser. The advantage of this method is that one deduces the applied force with precision by measuring the position of the center of the beam and the position of the bead along the tube's axis (see Fig. 5.1). In general, only the force in the direction of the tube's axis, f_z, is measured, since by symmetry the averages of the transverse components of the force vanish. A typical time sequence of the bead's position along the tube's axis and of the force f_z can be seen in Fig. 6.1 [120]. The well-defined blue line represents the displacement of the glass bead as a function of time. The bead was moved in order to elongate the vesicle and create a tube (from 0 up to 80 s), then kept roughly at the same point for 20 s, and finally moved in the opposite direction. The red fluctuating line shows the force applied to the bead. Note that there is a force barrier to form a tube (region before 40 s), but afterwards the force depends only very weakly on the length of the tube. In the last chapter and here, we are interested in the situation where the tube's length is kept constant (results published in [6]). In Fig. 6.1, this corresponds to the interval between the dashed lines, where we can see that the force presents roughly a plateau. We have studied the average value of this plateau in the last chapter, obtaining

where κ is the bending rigidity, Λ is a wave-vector cutoff and σ is the tension associated with the microscopic area of the membrane. Interestingly, in Fig. 6.1, we see that the measurements of f_z present a considerable dispersion, mainly coming from the fluctuations of the bead's position (this fluctuation is not visible on the blue curve because of the length scale). In effect, the bead is subjected to many sources of thermal fluctuations, such as the fluctuating forces that the solvent applies to the bead, producing Brownian motion, and the thermal fluctuations of the membrane to which the bead is attached. As membrane nanotubes present very soft Goldstone modes [7], the membrane fluctuations are possibly responsible for an important part of the dispersion, in which case measurements of the force fluctuation could be used to characterize membranes.
Indeed, in section 1.4.4, we have seen that from measurements of the average force ⟨f_z⟩, one can deduce the bending rigidity of a membrane. Likewise, the fluctuation of the force in the direction of the tube's axis, easily accessible with the same experimental setup, could provide supplementary information. Accordingly, our aim in this chapter is to study the contribution of the membrane fluctuations to the fluctuation of the force in the direction of the tube's axis, defined through

∆f_z = (⟨f_z²⟩ − ⟨f_z⟩²)^(1/2) .

As in chapter 5, we will consider a tube small enough that we can treat the vesicle from which it is extracted as a lipid reservoir and neglect volume constraints. In section 6.1 we recall some important results deduced in the last chapter. To evaluate ∆f_z, we will use the diagrammatic tools introduced in chapter 3. We will thus summarize some properties of these diagrams and write the stress tensor using them in section 6.2. In section 6.3, we evaluate ∆f_z. Finally, we discuss our results in section 6.4.

Some important definitions and results

As in chapter 5, we shall restrict our attention to deformed tubes weakly departing from the mean-field cylinder, whose radius is given by R_0 = √(κ/(2σ)). We will keep the same coordinate system presented in Fig. 5.3, and the tube's shape will be parametrized by eq.(5.5). As before, we consider a tube relatively short compared to the vesicle from which it is extracted, so that the vesicle can be treated as a lipid reservoir. In the case of short tubes, one can also neglect the pressure difference across the tube's membrane (see discussion in section 5.1). The energy is simply given by the Helfrich Hamiltonian, given in eq.(5.8) and eq.(5.9). The corresponding correlation function is given by

where m ∈ {−M, …, M} and q̄ = 2πnR_0/L, with n ∈ {−N, …, N}. We recall that the upper bounds N and M are given by M = ΛR_0 and q̄_max = 2πNR_0/L = ΛR_0, where Λ = 1/a is the high wave-vector cutoff and a is of the order of the membrane thickness. The force needed to extract a tube is given by

where Σ_rz, Σ_ρz and

Σ_zz = σ {2 + u² + 2R_0² u_ρ² + R_0² (2u − 1) u_ρρ + R_0⁴ [u_ρρ² − u_zz² + 2u_z (u_zzz + u_ρρz)]} (6.6)

are the components of the projected stress tensor for the quasi-cylindrical geometry derived in section 5.2. As in the last chapter, the subscript ρ (resp. z) indicates a derivative with respect to ρ (resp. z). Here we will evaluate the fluctuation of the force in the direction of the tube's axis:

As in chapter 3, the first step is to evaluate the correlation function of the stress tensor over the same section of the tube:

To do so, we will once more use the diagrammatic tools introduced in chapter 3. We recall their properties in the next section.

Diagrammatic tools

Throughout this chapter, we will use notations similar to those introduced in section 3.2. Each field u(ρ, z) is represented by a straight line. Derivatives with respect to ρ are represented by a dot over the field, while a derivative with respect to z is represented by a slash. An adapted diagrammatic vocabulary is presented in table 6.1. Averages are performed using Wick's theorem, i.e., by adding all complete contractions of fields. Each contraction yields a propagator and, as in section 3.2, one can pass a derivative from one branch of the propagator to the other by multiplying the diagram's coefficient by −1. For instance, with r = (ρ, z) and r′ = (ρ′, z′), we have

… / [(m² − 1)² + q̄² (q̄² + 2m²)] .
(6.9)

This time, once the derivatives are grouped, every slash contributes a factor iq̄/R_0 and every dot contributes a factor im/R_0. In the following section, we will evaluate the propagators between points on the same section of the tube, i.e., with z = z′. In this case, as the sum over q̄ is symmetrical, an odd number of slashes over a propagator implies a vanishing contribution. Here follows a typical example of the terms that we will need to evaluate:

which one can readily read by noting the equivalence

= × . (6.11)

As the number of slashes over these propagators is odd, the contribution of this diagram vanishes. The first term of eq.(6.10) is also composed of diagrams with an odd number of slashes, whose contributions vanish. In the end, one obtains simply

r r′ = . (6.12)

In the next section, we will re-derive ⟨Σ_zz⟩ in terms of diagrams in order to gain familiarity with these tools.

Getting familiar: re-deriving ⟨Σ_zz⟩

The component Σ_zz, given in eq.(6.6), can be written in terms of diagrams as

Using Wick's theorem to evaluate the average, we obtain (6.14)

Evaluation of the fluctuation of the force

Aiming to obtain ∆f_z, we start this section by evaluating the correlation function of the component Σ_zz of the stress tensor. In section 6.3.2, we integrate this correlation twice and derive the force fluctuation. There, we also discuss some approximations allowing us to obtain a simple analytical expression. Finally, we conclude with a short discussion of the validity of our final result in section 6.3.3.

Correlation of Σ_zz

Here we will evaluate the correlation function

where f_{n,m,q̄,k̄} is a complicated coefficient depending on m, n, q̄ and k̄:

The second term of eq.(6.19), with a single sum over the wavenumbers, is the contribution given by the last diagram of eq.(6.18). As expected, the correlation depends only on ρ − ρ′. In Fig. 6.2, we show the behavior of the correlation, normalized by its value at ρ = 0, for two different tubes with the same length, bending rigidity and wave-vector cutoff. First of all, we note that even though the stress tensor correlation decreases with distance, we no longer have the fast decay found in the case of planar membranes (see section 3.4). In both cases, the function C(ρ) presents oscillations that remain non-negligible throughout the whole section of the tube, indicating that the stress tensor is correlated over the whole length of a tube's cross-section. This is a signature of the fact that the fluctuations in the shape of membrane tubes are themselves correlated over a whole cross-section, whatever the tube's radius [7]. Moreover, we observe that the oscillations in Fig. 6.2 take place over a roughly constant wavelength λ. For the tube with R_0 = 30 nm, we have 6 oscillations distributed over the perimeter, which gives a wavelength λ ≈ 31 nm ∼ 6Λ⁻¹, with Λ⁻¹ ∼ 5 nm. Interestingly, we find the same value for the larger tube. This characteristic wavelength corresponds to the length beyond which the correlation of the stress tensor in planar membranes becomes negligible (see section 3.4). It is thus probably a universal quantity, valid for any value of R_0. To characterize better how the stress tensor correlation decreases, we have plotted the absolute value of the extrema of the oscillations of the red curve as a function of ρ/R_0 on a log-log scale (see Fig. 6.3).
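The quoted wavelength is simple arithmetic on the perimeter, as the following short check shows:

```python
# Quick arithmetic check of the oscillation wavelength quoted above:
# 6 oscillations over the perimeter of a tube with R_0 = 30 nm.
import math

R0 = 30e-9
lam = 2 * math.pi * R0 / 6
print(f"lambda = {lam*1e9:.1f} nm ~ 6/Lambda for 1/Lambda = 5 nm")  # ~31.4 nm
```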
This curve seems to indicate that the amplitude of the oscillations decays as a power law, which is a characteristic sign of long-range correlations. Finally, we have compared the contribution of the first term of eq.(6.19), involving two sums over the wavenumbers, and that of the second term of eq.(6.19), with a single sum over the wavenumbers, to the total stress-tensor correlation. As one can see in Fig. 6.4(a) and Fig. 6.4(b), in general both contributions are oscillating and important. In the following, however, we will see that the second term of eq.(6.19), with a single sum and represented by the solid lines in these figures, gives a vanishing contribution to ∆f_z.

(6.25)

Note that the contribution of the last term of eq.(6.19) vanishes after integration.

(6.26)

Taking into account the fact that eq.(6.26) depends only on |m|, it can be rewritten as

∫∫ [2q̄⁴k̄⁴ (q̄² + k̄²) + (1 − q̄²k̄²)²] / [(1 + q̄⁴)(1 + k̄⁴)] dq̄ dk̄ , (6.28)

and

(6.30)

Both integrals, over q̄ and over k̄, in eq.(6.28), eq.(6.29) and eq.(6.30) can be performed analytically. One can compare the contributions of some modes to the force fluctuation in Fig. 6.5. Not surprisingly, the modes |m| = 1, which are extremely soft [7], give the greatest contribution. In Fig. 6.6, we show the percentage contribution of these soft modes to the total fluctuation. In agreement with the curves of Fig. 6.5, the modes with |m| = 1 are responsible for more than one third of the force fluctuation.

Approximations and an analytical formula for ∆f_z

In order to obtain a simple analytical expression for the force fluctuation, we consider the limit of relatively thick tubes, with ΛR_0 > 6. Considering a = Λ⁻¹ = 5 nm, this corresponds to tubes with a radius R_0 > 30 nm, which is commonly observed in experiments. In this limit, we have

and

The sum over m in eq.(6.27) can be approximated by an integral, yielding

At last, we obtain

We discuss the quality of this approximation and its meaning in section 6.4.

Discussion on the validity of this result

Here we recall the conditions of validity of eq.(6.27). First, denoting by u the deformation of the tube relative to the mean-field cylinder, we have considered here only terms up to O(u²) in the Hamiltonian and in the stress tensor. Accordingly, our result actually corresponds to the first term in a series expansion of the form

The term ∝ (k_B T)² corresponds to the contributions of the terms up to order two in u, coming from the diagrams of the form shown above. Further terms, of higher order in k_B T, come from the terms beyond O(u²) in the Hamiltonian and in the stress tensor. Secondly, eq.(6.27) is valid for relatively long tubes, i.e., tubes whose length is larger than the radius, but still small compared to the radius of the vesicle from which they are extracted. The simplified eq.(6.35) is a good approximation under the supplementary condition ΛR_0 > 6. Finally, let us comment on the influence of the boundary conditions. Differently from the case of the average force, there is no conservation principle for ∆f_z. Here, we have calculated ∆f_z assuming periodic boundary conditions or, equivalently, through an arbitrary section in the middle of a long enough tube. The actual value of ∆f_z at the extremity of a tubule with specific boundary conditions might be somewhat different. Note also that we have only calculated the fluctuation of the component of the force parallel to the tube's axis.

Discussion and consequences for experiments

First of all, let us discuss the dependence of ∆f_z on R_0.
From eq.(6.35), we have

with a = Λ⁻¹ of the order of the membrane thickness. The first term recalls the result obtained in chapter 3. There, we have seen that for planar membranes, the correlation of the stress tensor decreases over a very short length, whatever the membrane tension or rigidity. One could thus consider that a piece of membrane is a composition of uncorrelated patches of size ≈ a and use the Central Limit Theorem to obtain the force fluctuation. Here, however, the force fluctuation of tubes has a supplementary logarithmic correction relative to the force fluctuation in planar membranes. This correction can be explained by the fact that the correlation of the stress tensor in membrane nanotubes decreases as a power law, as we have shown in section 6.3.1. In Fig. 6.7, we show ∆f_z as a function of R_0 for different values of the cutoff Λ. The exact curve for long tubes, given in eq.(6.27), is indicated by circles, while the approximation given in eq.(6.35) corresponds to the solid lines. We can see the good quality of the approximation. For a given value of Λ, ∆f_z does not vary much over the experimental range of R_0. On the contrary, it depends strongly on the value of the cutoff Λ. Numerically, we have found a force fluctuation of a few pN, which is of the same order of magnitude as the value obtained experimentally (see Fig. 6.1, [98] and [119]). For an accurate comparison, however, time-resolved measurements should be performed and the Brownian force on the pulling bead should be taken into account. Finally, we compare the average of the force needed to extract a tube, given by eq.(5.72), with its fluctuation. We plot both curves as functions of the effective mechanical tension τ, since this tension can be controlled experimentally (by changing the pressure difference in micropipette experiments, for instance). In agreement with the results of the previous chapters, we assume τ = σ − σ_0, where σ_0 = (k_B T Λ²)/(8π). Applying this relation to eq.(6.35), we obtain

(6.38)

The curve for ⟨f_z⟩, already presented in Fig. 5.10, and the curve of ∆f_z given above are shown in Fig. 6.8. We can remark that ∆f_z is in general small compared to the force ⟨f_z⟩ needed to extract a tube, despite the presence of soft Goldstone modes: it is comparable to ⟨f_z⟩ for τ < 10⁻⁶ J/m², but quite negligible for τ > 10⁻⁵ J/m². Sadly, from Fig. 6.7 and Fig. 6.8, we conclude that ∆f_z depends only very weakly on the tension of the membrane and on its rigidity. The force fluctuation thus seems of little interest for the mechanical characterization of membranes. On the other hand, the fact that ∆f_z depends neither on κ nor on the membrane tension could be of great interest for experiments involving active membranes. Differently from the membranes studied in this work, which are passive, active membranes have embedded proteins that add non-equilibrium noise to the system. Experimentally, the activity of these proteins depends on an external source of energy. It has been observed that protein activity causes an enhancement of the membrane fluctuations and of the excess area relative to the passive case, as if the membrane were in contact with a thermal bath at a higher temperature [121]. Let us now imagine an experiment in which tubes are extracted from an active membrane. If the membrane fluctuations are intensified, ∆f_z should also be affected.
Since it depends only very weakly on the tension and on the bending rigidity, it could thus be used as a direct indicator of protein activity.

In a nutshell

In this chapter, we have examined the possibility of using the fluctuation ∆f_z of the force along a membrane tube's axis as a tool to characterize membranes. We have only considered the contribution of the membrane's fluctuations, which can be very important due to the presence of very soft modes. For a weakly fluctuating tube of length L, with R_ves ≫ L > R_0, where R_ves is the radius of the vesicle from which the tube is pulled and R_0 is the mean-field radius of the tube, we obtained

(6.39)

where Λ⁻¹ = a, with a of the order of the membrane thickness. Interestingly, ∆f_z can generally be written as

which recalls the result found for the force fluctuation in planar membranes. The logarithmic correction is a signature of the long-range correlations present in the tubes. Numerically, for a ≈ 5 nm, these equations yield ∆f_z ≈ 1 pN, which is compatible with experimental data. Studying the behavior of the force fluctuation, we have found that it is extremely sensitive to the value of Λ, whereas it depends only very weakly on the bending rigidity and on the tension. Thus, ∆f_z seems of little usefulness for the mechanical characterization of membranes. It could, however, be used in experiments involving active membranes, i.e., membranes containing proteins whose activity can be modified, as an indicator of their activity. Indeed, when proteins are active, the membrane fluctuations are increased, which would affect ∆f_z regardless of variations of the bending rigidity or of the tension.

Preliminary results on a 2-d membrane simulation

In section 2.3, we proposed a simple numerical system to verify our predictions concerning the mechanical tension τ, the internal tension σ and the tension r obtained from the fluctuation spectrum of a membrane. Our model was composed of a set of variable-sized rods, each one representing a coarse-graining of several lipids, free to move in a two-dimensional space. In this chapter, we present a more complex numerical experiment consisting of a two-dimensional membrane that evolves in three-dimensional space, which corresponds more accurately to the experimental situation. We are motivated by the fact that a more elaborate numerical system would not only allow us to verify precisely our predictions concerning τ, σ and r, but would also give access to other quantities, such as the fluctuation of the force that a frame exerts on a membrane, studied in chapter 3. Moreover, in chapters 5 and 6 we predicted the dependence on τ of the force needed to extract a tube and of its fluctuation, which could also be verified by pulling tubes from a numerical membrane. Sadly, due to time constraints, the results presented here are far from complete and many questions are left unanswered. In our numerical experiment, we would like to study a piece of membrane held by a circular frame and weakly departing from a plane (see Fig. 7.1). Many popular methods used to simulate membranes numerically can be found in the scientific literature; we summarize them briefly in section 7.1. We have chosen to use a phenomenological model consisting of a triangular network of extensible bonds connecting effective particles.
The connectivity of the network could be modified in order to mimic the membrane's fluidity (see details in section 7.2), and a harmonic potential acted on the particles at the network's edge, forcing a circular frame. Thus, we could measure directly the force applied to the frame and derive the effective tension τ as well as its fluctuation ∆τ (see section 7.4.1). The minimum of this potential could be moved in order to widen the frame's radius, decreasing the excess area α and increasing the membrane's tension. To obtain representative averages of τ, α and other variables, we needed to generate large sets of configurations of the numerical membrane, which was done through a Monte Carlo dynamics, described in section 7.3. In that section, we also discuss the criteria we used to determine whether a sampling was large enough. As usually done in laboratory experiments (see section 1.4.2), the bending rigidity κ and the tension r were deduced from the average of the fluctuation spectrum of the membrane. Since we simulate the membrane using a network, obtaining the fluctuation spectrum is somewhat complicated, as we discuss in section 7.4.2. Finally, we explain in section 7.4.3 how we could estimate the internal tension σ. In section 7.5 we discuss some preliminary results. At last, in section 7.6, we comment briefly on extracting tubes from our numerical membrane, and we end this chapter with a brief discussion of issues that should be investigated in the future (section 7.7).

Short panorama of numerical models of membranes

Processes in membranes happen over a wide range of time, size and energy scales. For instance, interactions between lipids and proteins inside the membrane occur over distances of the order of a nanometer with characteristic times of a few picoseconds, while the evolution of the shape of a vesicle involves scales of micrometers and may take many seconds. Consequently, depending on the process one is interested in, several different models are used to simulate biological membranes numerically (see [122], [123] and [124] for some reviews). Schematically, they can be grouped into three classes:

1. atomistic models: these models try to take into account all the chemical details of the molecules by considering the interactions between atoms. They are used to study how lipids interact among themselves and with proteins. As these simulations involve many degrees of freedom, they are very computationally expensive. Consequently, one can at most simulate small patches of a dozen nanometers for dozens of nanoseconds.

2. coarse-grained models: in these models, small groups of atoms are lumped together into effective particles that interact via simplified potentials. The solvent can be effectively or implicitly present. As the number of degrees of freedom is reduced, one can observe collective movements of the membrane, such as its self-assembly, stretching [109], pore formation [109] and thermal fluctuations [3]. The main difficulty of these models is deciding which interactions are truly essential to reproduce the membrane's behavior. A popular model of this category is the spring-and-bead model presented in section 2.5. Sadly, with these models one is still restricted to length scales of hundreds of nanometers, which is a limitation if one wants to study large-scale processes.

3. phenomenological models: these models take coarse-graining one step further, representing several molecules as a single effective particle, which we will call a bead in the following. The solvent is always implicit.
They are suitable for studying the universal properties of amphiphilic systems. The effective particles can be attached to each other through a triangular or square mesh, or the mesh can be absent [125], [126] (meshless models). In the first case, to mimic fluidity, the topology of the mesh is changed during the simulation. The meshwork is then called dynamic. Our previous simple model, presented in section 2.3, belongs to this category. Throughout this work, we were interested in the general properties of membranes, regardless of the molecular details, at length scales far larger than the membrane's thickness. Accordingly, phenomenological models are the most adapted to our case. We give some further details on them in the following.

The meshless models were first proposed by Drouffe et al. in 1991 [127]: the beads interact via a hard-core repulsion, an anisotropic attraction that depends on their orientation, and an effective multi-body interaction favoring a close-packed environment, to simulate the hydrophobic interactions between the lipids and the aqueous solvent. These models are very elegant, since one can easily observe membrane self-assembly, topological changes, pore formation and the gel-liquid transition [127], [125], [126]. As in real experiments, the bending rigidity is usually measured through the fluctuation spectrum. Recently, however, an alternative method in which one imposes κ directly was proposed by Noguchi et al. [125]. At each point of the membrane, a quadratic curve is fitted to the beads contained in a small region in order to obtain the local curvature. Subsequently, the standard Helfrich Hamiltonian is used to evaluate the configuration's energy.

The meshwork phenomenological models are a bit older [128] (see [129] for a comprehensive review). Actually, very similar models were already studied at that time in other contexts, such as lattice field theories and lattice approximations to relativistic string theories [130], [131]. The beads were connected by a triangular meshwork that could have fixed topology, i.e., each bead always had six neighbors, or they could be connected by a meshwork whose connectivity evolved over time, forming dynamically triangulated surfaces [132], [133]. At this point, membranes were usually phantom, i.e., beads could overlap and self-penetration of the network was allowed. In the context of biological membranes, models with fixed connectivity, representing a polymerized membrane, were first used in 1987 [134]. For the first time, the curvature energy was taken into account by introducing an interaction between adjacent triangles of the network. Many contemporary works were interested in the dependence of the gyration radius of the membrane on its linear size [128], [135] and in the crumpling transition [136]. As biological membranes are self-avoiding, the effects of self-avoidance were also studied, by introducing a hard-core potential between any two beads and limiting the length of the network's bonds to ℓ_max = 2√3 σ_0, where σ_0 denotes the beads' radius, in order to ensure the impenetrability of the surface [135] (see Fig. 7.3 for a geometrical explanation of this value). (Figure 7.3 caption: in order to assure that the membrane cannot self-penetrate, one imposes a maximal length ℓ_max on the network's bonds plus a hard-core potential between any two beads of the network; the figure shows how ℓ_max = 2√3 σ_0 is obtained, with σ_0 the beads' radius.)
From 1990 on, fluidity was taken into account by dynamically modifying the triangulation, while keeping the self-avoidance restrictions [137], [138]. Since then, this model has been used in a wide variety of complex numerical experiments, such as studying the dynamics of vesicles and red blood cells in flows [139], [140] and the budding of vesicles mediated by proteins [141]. As we explain in the next section, this well-established dynamically triangulated network model was the basis for our numerical model of a membrane.

Our numerical membrane

As shown in Fig. 7.1, we wanted to simulate a relatively large piece of weakly fluctuating membrane attached to a circular frame. Under these conditions, the probability of overhangs is very small, and thus the probability that large fluctuations bring distant segments of the membrane into close spatial proximity is negligible. Consequently, as an approximation, we decided to ignore the hard-core potential between arbitrary pairs of beads and to consider only the interactions between neighboring beads, which is much less computationally expensive. In this case, the meshwork phenomenological model presents a great advantage: with a mesh, we know at every instant which beads are neighbors, since they are attached by bonds, whereas in meshless models determining the neighbors is not straightforward. So, we decided to use a dynamically triangulated meshwork whose beads are phantom if they are not first neighbors. In agreement with section 7.1, we denote the beads' radius by σ_0. Each pair of neighboring beads interacts through a potential

where ℓ is the distance between the centers of adjacent beads; ℓ_min = 2σ_0 and ℓ_max = 2√3 σ_0 are, respectively, the minimal and maximal distances between the centers of adjacent beads. The length ℓ_0 corresponds to a preferred distance that we have chosen as the average of the minimal and maximal allowed lengths: ℓ_0 = (ℓ_min + ℓ_max)/2 = (1 + √3) σ_0 (see Fig. 7.4). In section 1.3.6, we have seen that the bending rigidity of a weakly fluctuating membrane gives a contribution to the membrane's energy. We will not consider topological changes in our simulation, so the Gaussian contribution to the curvature energy need not be taken into account. In our network, we considered the commonly used bending energy discretization [128]

E_discret = k Σ_{⟨α,β⟩} (1 − n_α · n_β) , (7.3)

where the sum runs over all pairs of adjacent triangles α and β, with normal vectors n_α and n_β, respectively. This discretization, however, presents a major problem: the relationship between κ and k depends on the membrane's geometry. Alternative, more complex discretizations have been proposed (see [129] for further details), but here we have chosen to keep this simplified discretization, since κ will be measured through the fluctuation spectrum. Note that eq.(7.3) is a good approximation only for n_α ≈ n_β. Indeed, for two triangles with n_α = −n_β, we have a contribution 2k, while this configuration should be prohibitively costly. At last, to impose a circular frame, each bead i of the network's boundary, shown in red in Fig. 7.5, is subjected to the harmonic potential (7.4), of the form (k_f/2)(R_i − R_f)², where k_f is a constant that determines the rigidity of the potential, R_i is the distance of the bead from the center of the network and R_f is the desired frame radius, imposed at the beginning of the simulation. Note that the projected area A_p of the membrane is not necessarily equal to πR_f²: it can vary more or less, depending on the choice of k_f.
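To make the model concrete, here is a minimal sketch of the three energy terms in Python. The exact functional form of the tethering potential is not fully specified above, so a harmonic well of stiffness s around ℓ_0 with hard walls at ℓ_min and ℓ_max is assumed; the bending and frame terms follow eq.(7.3) and the harmonic frame potential:

```python
import numpy as np

SIGMA0 = 1.0                                  # bead radius (unit of length)
L_MIN, L_MAX = 2 * SIGMA0, 2 * np.sqrt(3) * SIGMA0
L0 = 0.5 * (L_MIN + L_MAX)                    # preferred bond length

def bond_energy(ell, s):
    """Tethering potential: hard walls at l_min/l_max, assumed harmonic well.
    (The thesis's exact functional form between the walls is not given here.)"""
    if ell <= L_MIN or ell >= L_MAX:
        return np.inf                         # forbidden bond length
    return 0.5 * s * (ell - L0)**2

def bending_energy(normals, adjacent_pairs, k):
    """E = k * sum over adjacent triangle pairs of (1 - n_a . n_b), eq.(7.3)."""
    return k * sum(1.0 - np.dot(normals[a], normals[b])
                   for a, b in adjacent_pairs)

def frame_energy(boundary_xy, R_f, k_f):
    """Harmonic frame potential, assumed (k_f/2)(R_i - R_f)^2, acting on the
    boundary beads; R_i is the in-plane distance of bead i from the center."""
    R = np.linalg.norm(boundary_xy, axis=1)
    return 0.5 * k_f * np.sum((R - R_f)**2)
```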
To initialize the network, we construct a planar triangular network alternating lines with N_x and N_x + 1 beads, up to N_x lines. The beads are separated by ℓ_0 and arranged as in Fig. 7.5.

Figure 7.5: Initial configuration of the triangular network with N_x = 5. At this point, the network is planar and each bond measures ℓ_0. The dashed circle represents the frame, with R_f = 6.96 σ_0, and R_i is the distance from a boundary bead to the center of the frame. The large red beads are subjected to the potential (7.4) in order to impose the circular frame. The ratio ℓ_0 = (1 + √3) σ_0 has not been respected in this graphical representation.

During the simulation, two kinds of moves were possible:

1. Move P: the position of one bead is modified. The beads shown in blue in Fig. 7.5 are free to move in three dimensions, while the ones belonging to the boundary can only move in the frame's plane.

2. Move Flip: the network's connectivity is changed in order to represent the membrane's fluidity. This is done by eliminating an existing bond and proposing a new one, as shown in Fig. 7.6.

In the following section, we will see how these moves were numerically implemented.

Simulation dynamics

As in section 2.3, we used a Monte Carlo method to generate a large sample of configurations. Again, the configurations were generated through a Markov chain algorithm: from a certain configuration Ω_i, a new configuration Ω_{i+1} was accepted with probability

P(Ω_i → Ω_{i+1}) = min(1, e^{−β∆H}) , (7.5)

where ∆H = H_{i+1} − H_i is the energy variation. In practice, we have:

1. Move P: one particle i is chosen at random. If the particle belongs to the bulk of the network (blue beads in Fig. 7.5), we propose a new position r′_i = r_i + ∆r, where ∆r = δr × [rand(−1, 1) e_x + rand(−1, 1) e_y + rand(−1, 1) e_z], with rand(a, b) a random number between a and b, e_z the direction perpendicular to the frame's plane and e_x, e_y two perpendicular directions contained in the frame's plane. Each bond attached to particle i has its length modified. The normal, area and projected area of all triangles that have particle i as a vertex must also be re-evaluated. The energy variation thus has two contributions: one coming from the change in the bonds' lengths and another coming from the curvature. We can see a representation of them in Fig. 7.7. In the case of a boundary bead (large red beads in Fig. 7.5), one has simply ∆r = δr × [rand(−1, 1) e_x + rand(−1, 1) e_y]. In addition to the former contributions, one also needs in this case to consider the energy variation coming from the frame's potential. The value of δr was adjusted to obtain an acceptance rate of ∼50%.

2. Move Flip: we randomly choose a bond belonging to the network's bulk. We propose a substitution of this bond, as shown in Fig. 7.6. The normal, area and projected area of the two new triangles are calculated, and the total energy variation involves the terms illustrated in Fig. 7.8. Remark that the frame potential never contributes to this kind of move, since the positions of the beads remain constant. Note also that the acceptance rate of flip moves is completely determined by the tension applied to the network, through the choice of R_f, and by the choice of the constants s and k. Typically, we have an acceptance rate between 1% and 10%, depending on the chosen values. For very large tensions, this can be a serious issue, since the energy variation ∆H is in general very large.
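A minimal sketch of the Metropolis acceptance and of move P is given below (illustrative only: total_energy() stands for the bond + bending + frame energy of the previous sketch, whereas the actual simulation, as described above, recomputes only the local energy contributions):

```python
import numpy as np

rng = np.random.default_rng(0)

def metropolis_accept(dH, beta):
    """Accept with probability min(1, exp(-beta*dH)), eq.(7.5)."""
    return dH <= 0 or rng.random() < np.exp(-beta * dH)

def move_P(positions, is_boundary, total_energy, beta, delta_r):
    """Trial displacement of one randomly chosen bead (move P)."""
    i = rng.integers(len(positions))
    trial = positions.copy()
    step = delta_r * rng.uniform(-1, 1, size=3)
    if is_boundary[i]:
        step[2] = 0.0                 # boundary beads stay in the frame plane
    trial[i] += step
    dH = total_energy(trial) - total_energy(positions)
    return (trial, True) if metropolis_accept(dH, beta) else (positions, False)
```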
In both kinds of moves, one has to re-evaluate the normal, area and projected area of some triangles. For each triangle, we evaluate the cross product of two of its edges to obtain the direction of its normal and its new area. One must, however, pay attention to the order in which the cross product is evaluated, to ensure that the orientation of the normal is correct. Similarly, to obtain the new projected area, we considered the cross product of the projections of two of the triangle's edges onto the frame's plane.

Figure 7.9: Configuration of a small network with N_x = 5 and a total of N = 27 beads, β σ_0² s = 1, β k = 10, β σ_0² k_f = 30 and R_f = 6.96 σ_0 after the first 2 × 10⁴ Monte Carlo steps. In the top view, we see that the boundary roughly coincides with the imposed circular frame after N_neg = 2 × 10⁴ steps (the center of the frame is indicated by the black dot). Observe also that the topology of the network has changed: one finds beads with five and seven neighbors. In the side view, we can see that the membrane fluctuates around the plane (note that the vertical and horizontal scales are different).

For each attempt of a move P, we also attempt a flip. We call a Monte Carlo step a set of N sequences of a move P followed by a flip move, with N the number of beads. The first N_neg steps are not taken into account in the evaluation of averages, to ensure that the membrane has reached equilibrium. In Fig. 7.9, we show the configuration of a small network after N_neg = 2 × 10⁴ Monte Carlo steps. The frame is already roughly circular (the fit with the frame depends on the choice of k_f). We will call a complete sequence of N_neg Monte Carlo steps followed by a number N_iter of equilibrium Monte Carlo steps a run.

Figure 7.10: Snapshots of the network every 2 × 10⁴ Monte Carlo steps. The height of the membrane is represented by the shading scale at right, and the three spatial coordinates are measured in units of σ_0. This image was obtained using the interpolation explained in section 7.4.2, for N_grid = 128. Remark that the membrane departs only weakly from the plane and that the configurations look uncorrelated after 2 × 10⁴ iterations.

Verifications and equilibration criteria

In order to obtain meaningful averages, we have to ensure that our configuration sampling is reasonably uniform over the space of possible configurations, i.e., we have to ensure that N_iter is large enough. At this preliminary stage, we have not carried out a systematic study of how the equilibration time depends on the network's size and constants. We have rather evaluated the equilibration at each run. In the following, we exemplify how this was done using a typical network with 410 beads (N_x = 10), β k = 5, β σ_0² k_f = 10, β σ_0² s = 1 and R_f = 33.04 σ_0. We assume that the system has already relaxed to its equilibrium state after N_neg Monte Carlo steps. First of all, we evaluated the system's evolution visually, as shown in Fig. 7.10. We can see that in this case the configurations are already very different after 2 × 10⁴ Monte Carlo steps. Visually, we have also checked whether all the bonds were being flipped with similar frequencies, keeping per-bond counters of accepted flips, as sketched below.
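A sketch of the per-bond bookkeeping behind this uniformity check follows; the bond representation (a sorted pair of bead indices) is an illustrative assumption.

```python
import numpy as np
from collections import defaultdict

# Hypothetical bookkeeping for the flip-uniformity check of Fig. 7.11(a):
# each bond is identified by the sorted pair of bead indices it connects.
flip_counts = defaultdict(int)

def record_flip(i, j):
    """Call whenever a flip involving the bond (i, j) is accepted."""
    flip_counts[tuple(sorted((i, j)))] += 1

def relative_flip_frequencies(bonds):
    """Per-bond flip frequency, normalized so that the average is one."""
    counts = np.array([flip_counts[tuple(sorted(b))] for b in bonds], dtype=float)
    mean = counts.mean()
    return counts / mean if mean > 0 else counts
```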
For the same network as before, we show in Fig. 7.11(a) a map of the bonds colored as a function of the relative frequency with which they were flipped. We can see that the coloring is very uniform, indicating that the network does not present regions with different fluidity. As a supplementary check, we have also studied the diffusion of one bead over time (see Fig. 7.11(b)).

Figure 7.11: The inset shows a detail of the network. Each bond is colored with the relative frequency with which flips were accepted (the average was normalized to one). We see that flips happened uniformly in space. At right, the diffusion of a bead after the same number of iterations testifies to the membrane's fluidity (red curve). We have superposed a network snapshot (in green) for comparison.

Next, we studied the spatial average of the membrane's height: since there is no asymmetry, after a sufficiently large number of steps one should expect this quantity to vanish. This condition is, however, not sufficient, since the membrane can have a vanishing spatial average and still be non-planar. We have thus monitored the local average of the membrane's height, i.e., we have studied the average shape of the membrane (see Fig. 7.12). In practice, this was done by constructing an interpolation, explained in section 7.4.2, and averaging the height over each cell of the interpolation grid. Finally, we have monitored the evolution of the longest-wavelength Fourier modes h_{1,0}, h_{0,1} and h_{1,1} (in the next section we explain how we obtained the Fourier decomposition). Typical curves can be seen in Fig. 7.13. We can see that after ≈ 10⁴ steps the coefficients are uncorrelated, which means that the longest modes have relaxed. Accordingly, we have considered that in this case 2 × 10⁶ steps generated a sufficiently large sampling of the configuration space.

Measuring tensions and the excess area

In this section we describe how we measured the effective tension τ and the excess area α. We also explain the algorithm used to derive the fluctuation spectrum, from whose average we could derive κ and the tension r. Finally, we discuss the internal tension σ in section 7.4.3.

Excess area and mechanical tension measurements

In order to obtain τ, we study the total force that the harmonic potential given in eq. (7.4) exerts on the beads at the network's boundary, which represents the force applied by the frame onto the membrane:

F = k_f Σ_{i ∈ edge} (R_i − R_f), (7.6)

where the sum runs over the beads at the network's edge and R_i is their distance to the center of the frame. For k_f large enough, the edge of the network fits well with the frame of radius R_f, and thus the effective tension of a configuration is given by

τ_i = F / (2π R_f). (7.7)

During a run, τ_i was evaluated at the end of each Monte Carlo step. At the end of the run, we obtained τ = ⟨τ_i⟩ and its standard deviation, (Δτ)² = ⟨τ_i²⟩ − ⟨τ_i⟩². Concerning the excess area, we carefully updated the membrane's projected area A_p and actual area A after each attempted move. At the end of each Monte Carlo step, the excess area of the configuration was added to a variable in order to obtain α = ⟨α_i⟩ at the end of the run.

Fluctuation spectrum

Let us consider a square piece of membrane with lateral size L weakly departing from a plane, whose shape is described in the Monge gauge by h(r). In terms of Fourier modes, h(r) can be written as

h(r) = Σ_q h_q e^{i q·r}, (7.9)

with r = x e_x + y e_y, q = (2π/L)(n, m), n, m ∈ Z, and

Σ_q ≡ Σ_{|n| ≤ N_max} Σ_{|m| ≤ N_max}, (7.10)

where N_max = L/(2a) corresponds to the smallest possible wavelength. Note that here we have used a slightly different normalization from the rest of this work. The Fourier coefficients are then obtained through

h_{n,m} = (1/L²) ∫ h(r) e^{−i q·r} d²r. (7.11)
In section 1.3.6, we saw that membranes connected to a lipid reservoir can have their energy described by the Helfrich Hamiltonian (eq. (1.15)). Accordingly, the average of the Fourier coefficients obeys

⟨|h_q|²⟩ = k_B T / [L² (r q² + κ q⁴)], (7.12)

where r is the macroscopic counterpart of the internal tension σ and κ is the bending rigidity (in fact, as discussed in section 1.3.6, it corresponds more precisely to an effective bending rigidity, due to renormalization effects). As in laboratory experiments, we would like to measure the fluctuation spectrum of our numerical membrane in order to derive r and κ. In the following, we explain how this was done.

Obtaining the fluctuation spectrum

For a general wave-vector (n, m), we have to evaluate eq. (7.11) in order to obtain h_{n,m}. The first numerical difficulty comes from the fact that instead of a continuous surface h(r), we have access only to the positions and heights of the beads. Consequently, the first step is to build an approximation to the network's surface by discretizing it over a square grid of N_grid × N_grid cells with lateral size L, as exemplified in Fig. 7.14. Each cell has a lateral size Δ = L/N_grid. We choose L slightly bigger than 2R_f to avoid problems with the discontinuities at the edges of the grid. The discretized version of eq. (7.11), known as the DFT (Discrete Fourier Transform), is given by

h_{n,m} = (1/N_grid²) Σ_{α,β} h_{α,β} e^{−2πi (nα + mβ)/N_grid}, (7.13)

where h_{α,β} is the height of the cell whose bottom-left corner sits at r = Δ × (α, β). At this point, we need to attribute a height to each cell of the grid, which is initially set to zero. We do so in two steps:

• First, we obtain the plane equation of each triangle from the positions of its three vertices. Using this equation, we evaluate the height of some points inside the triangle, as shown in Fig. 7.15.

• Secondly, the cell that contains the projection of a dot receives its height. If the projections of more than one dot fall inside the same cell, we attribute the average of their heights to the cell (see Fig. 7.16).

Once the approximate grid is built, we can evaluate eq. (7.13), which has a great advantage: it can be evaluated using the FFT (Fast Fourier Transform) algorithm with an O(N_grid² log N_grid) complexity, instead of the O(N_grid⁴) complexity of naive algorithms. We thus used the cdft (complex discrete Fourier transform) routine of the FFT library implemented by Takuya Ooura [142], a general library for evaluating FFTs under the condition that N_grid is a power of 2.

A subtlety

The prediction given in eq. (7.12) is valid for a square piece of membrane with lateral size L. Since our membrane is round, our situation corresponds to a square membrane seen through a circular mask, given by

circ(r) = 1 if |r| ≤ R_f, and circ(r) = 0 otherwise. (7.14)

So, we are actually performing numerically the Fourier transform of the height h(r) multiplied by circ(r), instead of just the Fourier transform of h(r). Indicating the Fourier transform by a hat, we recall the convolution theorem:

(h circ)^ = ĥ ∗ ĉirc, (7.15)

where ∗ indicates the convolution between the two functions. In order to obtain ĥ, we evaluate the Fourier transform of circ(r). Using the definition presented above, we have [143]

ĉirc(q) = (2π R_f / (L² q)) J_1(q R_f) ≈ (π / (2 q R_f)) J_1(q R_f), (7.16)

where J_i is the Bessel function of order i. In the last step, we have used the fact that L ≈ 2R_f. This function has a very marked peak, as shown in Fig. 7.18. In the following, we keep the notation h̃_{n,m} for the coefficients obtained with the mask.
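The pipeline above (rasterize the surface onto an N_grid × N_grid height map, then take the 2D DFT of eq. (7.13)) can be sketched as follows. We use numpy.fft purely for illustration; the actual implementation relied on Ooura's cdft routine [142], and the grid-centering convention is an assumption.

```python
import numpy as np

def height_grid(points, L, n_grid):
    """Average the heights of sampled surface points falling in each grid cell.
    `points` is an (N, 3) array of (x, y, h) samples taken inside the triangles;
    the grid of side L is assumed centered on the frame, and empty cells stay 0."""
    delta = L / n_grid
    grid = np.zeros((n_grid, n_grid))
    hits = np.zeros((n_grid, n_grid), dtype=int)
    for x, y, h in points:
        a = int((x + L / 2) // delta) % n_grid
        b = int((y + L / 2) // delta) % n_grid
        grid[a, b] += h
        hits[a, b] += 1
    grid[hits > 0] /= hits[hits > 0]
    return grid

def fourier_coefficients(grid):
    """DFT of eq. (7.13); numpy omits the 1/N^2 prefactor, hence the division."""
    n_grid = grid.shape[0]
    return np.fft.fft2(grid) / n_grid**2
```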
During a run

As the process of grid construction is relatively expensive computationally, the fluctuation spectrum was measured over N_spec configurations uniformly spaced along a run. Each time, we obtained Re(h̃_{n,m}) and Im(h̃_{n,m}), i.e., the real and imaginary parts of each Fourier coefficient, with |n| ≤ N_grid and |m| ≤ N_grid. At the end of the run, we evaluated ⟨|h̃_{n,m}|²⟩ = ⟨Re(h̃_{n,m})² + Im(h̃_{n,m})²⟩. We then plotted 1/(q² L² ⟨|h_{n,m}|²⟩) as a function of q², with q² = (4π²/L²)(n² + m²) and h_{n,m} = (4/π) h̃_{n,m}. From eq. (7.12), we expect, at least for large wavelengths, a linear relation between these quantities: from the y-intercept of the curve we derive r, while from its slope we obtain κ. In Fig. 7.19, we show an example of such a plot with a linear fit to the large-wavelength region.

Internal tension σ

As discussed in section 1.3.5, the internal tension σ is the energetic cost associated with a unit increase in the microscopic area A of the membrane. From the energy E of a system, it can be obtained through

σ = ∂E/∂A.

Let us consider a general triangle in the bulk of our meshwork. As an approximation, let us consider that the triangle is equilateral with lateral size ℓ. From eq. (7.1), the local energy E_tri associated with this triangle is given by

E_tri = (3/2) × (s/2)(ℓ − ℓ_0)² + E_curv,

where E_curv is a contribution coming from the bending rigidity. The factor 3 comes from the three sides of the triangle, while the factor 1/2 comes from the fact that each side is shared by two adjacent triangles. Note that the first term is the only contribution involving the bead-to-bead distance ℓ. Under the assumption that the triangle is equilateral, its area is given by

A_tri = (√3/4) ℓ². (7.19)

From eq. (7.19) and under the assumption of an equilateral triangle, we can define a local internal tension:

σ_loc = ∂E_tri/∂A_tri = √3 s (ℓ − ℓ_0)/ℓ. (7.22)

Now, in our simulation, the sides of all triangles are subject to the same harmonic potential given in eq. (7.1). Accordingly, the hypothesis that each triangle is on average equilateral is very reasonable. Moreover, as the system is spatially uniform, we propose a generalization of eq. (7.22) as an estimate of the internal tension:

σ = √3 s ⟨(ℓ̄ − ℓ_0)/ℓ̄⟩, (7.23)

where the bar over ℓ indicates the spatial average of the bead-to-bead distance, while ⟨·⟩ indicates, as usual, the average over an ensemble of configurations. In practice, we kept track of the average bond length over the network, ℓ̄, at each Monte Carlo step. At the end of the run, we could thus evaluate the average of ℓ̄ over the ensemble of configurations to obtain σ.

Some first results

The results presented here consist of a preliminary set of runs for a network with N_x = 10, with a total of N = 410 beads. We kept the parameters β k = 5, β σ_0² k_f = 10 and β σ_0² s = 1. As discussed in section 7.3.1, we let the system evolve during N_neg = 10⁴ steps in order to ensure that the final frame shape had been attained. The averages were then taken over N_iter = 2 × 10⁶ steps, during which 1500 spectra were evaluated. We performed fifteen runs with these parameters, increasing the membrane's tension at each run by widening the frame's radius: the initial radius R_f = 32.34 σ_0 was successively increased in steps of 0.35 σ_0 up to R_f = 37.39 σ_0. As the radius increased, the excess area decreased from ≈ 3.3% to ≈ 2.4%. For each run, we plotted the fluctuation spectrum as detailed in section 7.4.2 to obtain r and κ. A typical example is shown in Fig. 7.19.
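A sketch of the linear fit just described, restricted to large wavelengths as in Fig. 7.19, is shown below; units are such that k_B T = 1, and the cutoff q² < 0.5 mirrors the fit region used in that figure. Names and the exact fitting routine are illustrative choices.

```python
import numpy as np

def fit_spectrum(q2, h2_mean, L, q2_max=0.5):
    """Linear fit of 1/(q^2 L^2 <|h_q|^2>) = (r + kappa q^2) / k_B T, restricted
    to large wavelengths (q^2 < q2_max); returns (r, kappa) with k_B T = 1."""
    mask = (q2 > 0) & (q2 < q2_max)
    x = q2[mask]
    y = 1.0 / (q2[mask] * L**2 * h2_mean[mask])
    slope, intercept = np.polyfit(x, y, 1)
    return intercept, slope            # r = y-intercept, kappa = slope
```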
In Fig. 7.19, the points are colored according to the angle that the wave-vector associated with each mode makes with the horizontal direction of the grid. First, we can remark that there is no clear color pattern, which indicates that the membrane is indeed isotropic. Second, we can see that in the large-wavelength region the dots are well fitted by a straight line, from which we deduce r and κ.

Figure 7.19: Remark that there is no color pattern, which is evidence of the system's isotropy. The line represents the linear fit for large wavelengths (λ > 9 σ_0, corresponding to q² < 0.5), from which r and κ are deduced.

We plot the results obtained for each run as a function of the corresponding excess area in Fig. 7.20 (we remind the reader that the smaller the excess area, the bigger the frame's radius). In Fig. 7.20(a), we note a small dependence of κ on the excess area. In Fig. 7.20(b), we compare the values of τ, σ and r: all three decrease as the excess area increases, as expected. As we predict, σ is always bigger than τ, and their difference is bigger for small tensions. Concerning the renormalized tension r, we find values not very different from τ and σ, which is reassuring. The tension fluctuation Δτ, represented by red bars, seems almost constant, which agrees at least qualitatively with the predictions of chapter 3. At this point, we have good indications that our network mimics well a liquid membrane governed by the Helfrich Hamiltonian. In the next section, we test quantitatively the compatibility of these results with our theoretical predictions.

7.5.1 Difference between τ, σ and r and our predictions

In chapter 2, we predicted that

τ = σ − (k_B T Λ²/(8π)) [1 − (σ/σ_r) ln(1 + σ_r/σ)], (7.24)

where Λ is the biggest possible wave-vector and σ_r = κΛ². In this section, we would like to verify quantitatively the compatibility of eq. (7.24) with the data of the last section. We made a one-variable fit of eq. (7.24) by adjusting the value of Λ. The best fit, for Λ = 1.03 σ_0⁻¹, is shown in Fig. 7.21(a). The compatibility between the data obtained from the simulation and the predicted values is relatively poor.

Figure 7.21: We have fitted eq. (7.24) to the red triangles by adjusting Λ. The best result, obtained for Λ = 1.03 σ_0⁻¹, is shown with green squares. At right, we applied this value of Λ to eq. (7.25) and obtained the predicted excess area (green squares), which is clearly incompatible with the measured excess area (red triangles).

In chapter 2, we also predicted the dependence of the excess area on the tension σ:

α = (k_B T/(8πκ)) ln(σ_r/σ). (7.25)

Both eqs. (7.24) and (7.25) should be valid under the same conditions. Accordingly, we decided to make a self-consistency test by plotting the predicted values of the excess area obtained through eq. (7.25), with the Λ obtained above. We observe that the predicted values of the excess area are consistently smaller than the values measured during the simulation. Up to now, we have no clear explanation for these results. Two hypotheses deserve further attention:

• our membrane is not exactly very tense, since it is very easy to stretch the bonds. Indeed, stretching a bond to its maximum costs only ∼ 0.25 k_B T for β σ_0² s = 1. It is thus possible that the simulated membrane does not satisfy the hypotheses of our theory. We could imagine further tests with a higher s, but in that case, as discussed earlier, the membrane would lose its fluidity.
• the projected area of the membrane could be bigger than assumed, which would explain why the measured excess area is consistently bigger than predicted. When we proposed the equilibration criteria in section 7.3.1, we studied the average shape of the membrane, as shown in Fig. 7.12. We have not, however, excluded the possibility of a rotating deformed shape: in this case, we would still have a nearly flat average shape, but the membrane would actually fluctuate around this deformed shape, with a projected area bigger than if it fluctuated effectively around a plane. To verify this, we should rotate the configurations into alignment before evaluating the average shape, for instance by aligning the direction of the maximum height at each step. This would, however, not explain the poor fit shown in Fig. 7.21(a).

Extraction of tubes

In parallel with the studies of the membrane tension, we have explored the possibility of extracting tubes from our simplified membrane. To pull a tube, we applied a harmonic potential, analogous to eq. (7.4), to a central bead whose height is denoted by h; the preferred height of the tube is defined by the choice of h_0. As in the case of the frame's force, we could obtain the force applied to pull the tube, as well as its fluctuation. The first results, for the same parameters as in the last section with R_f = 33.04 σ_0, are shown in Fig. 7.22. Looking at these images, we notice a first problem: the angle between the triangles is very large. This phenomenon is even more marked for bigger tubes (see Fig. 7.23(a)), where the protuberance becomes almost flat. The problem comes from the discretization of the curvature energy,

E_curv = k Σ_{⟨α,β⟩} (1 − n_α · n_β), (7.27)

where α and β are adjacent triangles. As discussed before, this discretization is valid only for n_α ≈ n_β, since large deformations bear an unphysical finite cost. We therefore proposed an alternative discretization,

E_curv = k Σ_{⟨α,β⟩} (1 − n_α · n_β) e^{0.8/(1 + n_α · n_β)}. (7.28)

With this discretization, the energy cost is roughly the same as before for n_α ≈ n_β, but it increases exponentially as n_α approaches −n_β. The resulting tube, with the same parameters as in Fig. 7.23(a), has a more normal appearance (see Fig. 7.23(b)). Due to time constraints, we have not examined the dependence of the force needed to extract a tube on its radius. It would also be interesting to measure the tube's radius: together with the measurement of the force, one could then deduce the tension σ and the bending rigidity κ.

Perspectives and discussion

As mentioned at the beginning of this chapter, we presented here only preliminary results, and many issues need further attention, such as:

1. verify more carefully whether the average shape of the membrane is indeed planar;

2. study the dependence of Δτ on σ;

3. in section 2.4.1 of chapter 2, we predicted that for a membrane under no external force, i.e., with τ = 0, the natural excess area is given by

α_eq ≃ ln(8πβκ) / (8πβκ), (7.29)

which depends only on the membrane's bending rigidity and the temperature. If, however, one ignores the difference between τ and σ, α_eq should also depend logarithmically on the size of the membrane (see eq. (2.77)). Numerically, we could thus adjust the frame's radius R_f for different membrane sizes in order to have τ = 0 and measure the excess area in each case;

4. study further the extraction of tubes and the effects of the alternative discretization of the curvature energy proposed by us;

5. perform a systematic study of the time needed for a system to equilibrate as a function of its size and parameters.
Finally, even if using a network to simulate a membrane presents many advantages, it has a severe drawback: in order to ensure fluidity, the bonds must be very easily stretched. To make the bonds stiffer without affecting the membrane's fluidity, one possible solution would be to pass to a grand canonical ensemble of effective particles: the network would then have a non-fixed number of beads. A new particle could be introduced in the middle of a very stretched triangle, which would restore fluidity in the high-tension case. Conversely, beads should also be deleted from the network. In practice, this is very difficult to implement, already from a data-structure point of view, and the outcome is uncertain.

In a nutshell

In this chapter we presented some preliminary results of a numerical experiment consisting of a piece of weakly fluctuating membrane attached to a circular frame. Numerically, it was represented by a triangular network whose connectivity evolved to simulate fluidity. At each vertex of the network, we placed effective particles that interact with their first neighbors. The bending rigidity was mimicked by an interaction between adjacent triangles, and the particles at the network's edge were subjected to a harmonic potential in order to enforce the circular frame. We used a Monte Carlo method to obtain a large sample of equilibrium configurations, from which we could evaluate averages of the mechanical tension, the excess area and the fluctuation spectrum. Our first results seem to show that the network behaves similarly to a membrane, but we could not quantitatively verify our predictions concerning the membrane tension. Many questions in this chapter were left untackled due to time constraints.

Conclusion

Lipid membranes are very particular materials: despite being almost unstretchable microscopically, on the mesoscopic scale they can be easily stretched through the flattening of thermal fluctuations. Indeed, lipid membranes fluctuate strongly and thus present an excess area relative to their optically resolvable area. At the beginning of this work, we saw that the term surface tension designates several quantities in the context of lipid membranes. First, there is the tension τ needed to increase the projected area or, equivalently, to reduce the excess area. Secondly, there is σ, the Lagrange multiplier introduced theoretically to impose a fixed microscopic area on the membrane. Finally, there is r, the macroscopic counterpart of σ, related to the fluctuation spectrum. Experimentally, r can be obtained directly from the fluctuation spectrum, and τ can be measured through the Laplace pressure, for instance. On the other hand, theoretical predictions usually involve σ, which is not directly measurable. To interpret experimental data, the equality between these quantities is often taken for granted. Our main goal throughout this work was to determine under which conditions these assumptions are justified, especially the equality between τ and σ. Firstly, we treated the simplest case of a planar membrane. In the literature, we find some earlier calculations relating τ to σ and r. There was, however, no consensus: different results were found, depending on how the calculation was made and on the precise definition of τ.
Indeed, the usual method involved differentiating the free energy with respect to the projected area of the membrane, which we have shown here to be very tricky. To work around this problem, we chose to use a more recent tool: the projected stress tensor, a tensor that relates the force exchanged through an infinitesimal cut on the membrane to the projection of this cut onto the projected plane. The definition of the mechanical tension τ is then straightforward: it is simply given by the average of the projected stress tensor. As a supplementary advantage, the projected stress tensor can be derived relatively easily for other geometries, such as spherical and cylindrical ones, which we have treated in this dissertation. After evaluating the average of the projected stress tensor, we obtained an exact relation between τ and σ for weakly fluctuating planar membranes. In general, we have τ ≃ σ − σ_0, which is the most important result of this dissertation. The constant σ_0 depends on the temperature and on the frequency cutoff Λ, i.e., the highest allowed wave-vector. At room temperature and considering Λ = 1/(5 nm), we find σ_0 ≈ 5 × 10⁻⁶ N/m. Accordingly, the assumption σ ≈ τ is justified only for high tensions. Otherwise, one must use the corrected relation to interpret experimental data correctly. Indeed, some experiments on the adhesion of vesicles seem to agree with our predictions.

In the laboratory, planar membranes are very difficult to manipulate. Vesicles are more commonly used, especially giant vesicles, which can be easily manipulated with a micropipette. These vesicles can be poked, i.e., free to exchange inner material with the suspension medium, or closed, i.e., with a fixed volume. We thus examined how the volume constraint and the geometry affect the above-mentioned relation for quasi-spherical vesicles. For both poked and closed vesicles, we conclude that the relation obtained in the planar case is a very good approximation. Interestingly, we predict that the internal pressure of a spherical vesicle can be smaller than the outer one, which is impossible for liquid drops.

Another popular geometry found in membrane experiments is the cylinder. Indeed, nanotubes are extracted from a piece of membrane, typically a vesicle, by applying a point force with an optical tweezer or with a magnetic field. Using a simplified mean-field calculation and supposing σ ≈ τ, the bending rigidity is usually obtained from the force-versus-tension curve. Recently, however, theoretical calculations have predicted that the shape fluctuations in this geometry are very strong. We therefore expect these fluctuations to affect the interpretation of force measurements. In this work, we found that these fluctuations do indeed affect the value of the mean-field force. Curiously, the effect has never been observed, since the assumption σ ≈ τ seems, coincidentally, to compensate for the thermal fluctuations.

Aside from the evaluation of tensions and forces, we have also evaluated, for the first time, the standard deviations of these quantities due to thermal fluctuations. As the shape fluctuations are important in the tubular geometry, we wanted to verify whether the fluctuation of the force needed to extract a membrane tube could be used to characterize a membrane. Our results show that the force fluctuation depends on the temperature and is very sensitive to the value of Λ, whereas it depends only weakly on the bending rigidity and the tension.
It should thus be of little use for characterizing a membrane mechanically. On the other hand, it may be interesting for studying the activity of active proteins embedded in the membrane, which has an effect similar to a change of temperature. Finally, while we have characterized rather well the relation between τ and σ, we leave almost untackled the question of how r relates to the other tensions: we have merely questioned an earlier prediction stating that r = τ and observed a non-trivial behavior of r in two numerical experiments that we proposed. The question is, however, very important and needs further attention, since measuring r is a popular non-invasive method of accessing the tension of a membrane.

Appendix C: Estimate of W_A^theo

In this section we explain in detail the theoretical estimate of the adhesion energy per unit area W_A^theo proposed by Rädler et al. [84]. They took into account two attractive interactions, coming from van der Waals interactions and gravity, and a repulsive interaction of entropic origin, due to the restrictions imposed on the membrane's fluctuations. They considered a screened van der Waals potential (eq. (C.1)), where A_H is the Hamaker constant, s is the distance between the membrane and the substrate, a is the membrane thickness and λ_D is the Debye screening length, given by

λ_D = [ε_0 ε_r k_B T / (N_A e² Σ_i z_i² c_i)]^{1/2}, (C.2)

where ε_0 is the vacuum electrical permittivity, ε_r is the dielectric constant of the solvent, e is the elementary charge, N_A is the Avogadro number, z_i is the charge number of a dissolved ion and c_i is the respective molar concentration. The last term in eq. (C.1) is a correction coming from the screening of the substrate due to the presence of ions in the solution. Indeed, it is expected that some part of the MgF_2 coating of the glass cover slip is present in small concentration in the buffer solution.

As vesicles are prepared in a sucrose solution, there is possibly a difference of density between the internal fluid of GUVs and the buffer solution. The potential due to gravity per unit area is given by

V_grav(s) = (g Δρ V_v / A_C) h_CM, (C.3)

where g is the gravitational acceleration, Δρ is the density difference, V_v = (4/3)π R_ves³ is the vesicle volume, A_C = π R_a² is the contact area and h_CM is the height of the center of mass. Assuming that the shape of the vesicle does not change with the distance from the substrate, we have h_CM ≃ R_⊥ + s, where R_⊥ is the height of the center of mass relative to the contact region of the vesicle, and so the free energy is simply obtained by substituting h_CM ≃ R_⊥ + s into eq. (C.3).

Finally, to evaluate the steric potential that arises when the fluctuations are limited, they used the equipartition of energy to estimate the energy per uncorrelated patch of membrane of size ξ_∥, up to a numerical factor b. To obtain ξ_∥, the group assumed that the contact area was equivalent to a flat membrane under a quadratic potential (Hamiltonian given in eq. (1.38)), which yields two limiting cases: the case where adhesion is dominated by rigidity (with the corresponding equations shown in the first line of table C.1) and the case where adhesion is dominated by tension (second line of the same table). Further details on the derivation of these equations are given in appendix D.

To determine whether the adhesion was experimentally dominated by rigidity or by tension, Rädler et al. plotted the measured values of ξ_⊥² as a function of V″, k_B T, κ and σ (assuming r ∼ σ), using the equations given in the second column of table C.1.
For the equation corresponding to the tension-dominated case, they obtained a nice linear relation (see Fig. C.1). The same analysis was performed on ξ_∥, this time using the equations of the third column of table C.1. Again, a linear relation was obtained for the tension-dominated equation (see Fig. C.1, lower panel). They concluded that it was reasonable to assume σ ≈ r in this experiment and that the behavior of the membrane was dominated by tension [84]. Note that the curves were traced under the assumption σ ≈ r. Besides, in this case, it is theoretically expected that ξ_⊥ is related to the mean separation s from the substrate [144].

Appendix D: Determination of ξ_⊥ and ξ_∥ for planar membranes under a quadratic potential

In this section we derive the correlation length ξ_∥ and the roughness ξ_⊥ for a planar membrane under a quadratic potential [145], [144]. Assuming that the Hamiltonian is given by eq. (1.38), the correlation function is given by

G(r − r′) = (k_B T / A_p) Σ_q e^{i q·(r−r′)} / (V″ + σ q² + κ q⁴). (D.1)

By definition, one has

ξ_⊥² ≡ G(0) = (k_B T / A_p) Σ_q 1/(V″ + σ q² + κ q⁴) ≈ (k_B T / (2π)) ∫_0^∞ q dq / (V″ + σ q² + κ q⁴),

where the last step is justified for very large A_p and for σa²/κ ≪ 1 (a is a microscopic cutoff of the order of the membrane thickness). The crossover tension σ* = √(4κV″) defines two limits: one dominated by tension (σ > σ*) and one dominated by rigidity (σ < σ*). The function Ω, in terms of which ξ_⊥² is expressed, takes an explicit piecewise form in these two regimes.

Table D.1: Theoretical predictions for ξ_⊥ and ξ_∥ as functions of σ, κ and V″. The last column is obtained by substituting the third column into the second.
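As an illustration, the continuum expression above for ξ_⊥² can be evaluated numerically; the sketch below integrates eq. (D.1) at coincident points and locates the crossover tension. Parameter values and the infinite upper limit are illustrative assumptions consistent with σa²/κ ≪ 1.

```python
import numpy as np
from scipy.integrate import quad

def xi_perp_squared(V2, sigma, kappa, Lambda=np.inf, kBT=1.0):
    """Roughness xi_perp^2 = G(0) in the continuum limit of eq. (D.1):
    (k_B T / 2 pi) * integral of q / (V'' + sigma q^2 + kappa q^4) dq."""
    integrand = lambda q: q / (V2 + sigma * q**2 + kappa * q**4)
    val, _ = quad(integrand, 0.0, Lambda)
    return kBT * val / (2.0 * np.pi)

def sigma_star(V2, kappa):
    """Crossover tension sigma* = sqrt(4 kappa V'') separating the two regimes."""
    return np.sqrt(4.0 * kappa * V2)

# Hypothetical parameters: compare the two regimes around the crossover.
V2, kappa = 1.0, 10.0
for sigma in (0.1 * sigma_star(V2, kappa), 10.0 * sigma_star(V2, kappa)):
    print(sigma, xi_perp_squared(V2, sigma, kappa))
```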
Coaches and the Law: A Study of the Training of Coaches in the United States

This study examined the necessity for additional coursework and/or professional development to equip coaches to oversee athletic programs within the purview of complicated regulatory laws and regulations. The research was compiled based upon a case study of a large East Texas high school. Findings indicate that coaches are not as cognizant of sports laws and regulations as they suppose. Implications for policy and practice are identified.

1. Introduction

Coaching is a difficult and stressful occupation. The attributes that prepare a good teacher and the attributes that prepare a good coach are not always the same. Successful coaching requires a unique skill set. Coaches must be prepared to interact with community leaders, school administrators, compliance agencies, parents, fellow teachers, students, and other coaches. Favorable interaction does not guarantee success in the sport. Many beginning coaches are not adequately prepared to deal with the complex legal and governance issues associated with successful coaching. Athletics-related litigation is not a recent phenomenon.

2. Review of the Literature

Appenzeller (1975) indicates that "litigation has become … a vital factor in the life of the athlete and coach." Likewise, Baley and Matthews (1988) assert that "the number of these lawsuits has continued to rise to the point where the continuation of school athletic programs is being threatened." Most coaches desire greater cognizance of the laws governing their athletic programs and fields. Constraints on time and demands related to teaching often interfere with participation in ongoing professional development. Baley and Matthews (1988) suggest that while there are several areas that cannot be changed by coaches, the "acquisition of knowledge and understanding of safe procedures in physical activities and their behavior … are entirely in their control." Schutten and McFarland (2005) emphasize the responsibility of educators to focus professional development activities on the areas of greatest challenge. No educator can master everything. Educators must evaluate the emerging difficulties associated with their professions and take positive action to master rising issues. Coaching certainly falls under this directive. Coaches must recognize the problems associated with increased athletics-related litigation and take positive action to acquire the skill sets essential for success. At least one major portion of professional development for coaches should focus on increasing ethical and regulatory compliance. Compliance requires knowledge of and adherence to regulatory expectations. Coaches must know and obey the rules and regulations associated with their sport.

The rise in athletics-related litigation may stem from the many and diverse expectations placed on coaches. In addition to the rigorous requirements associated with traditional instruction, coaches are expected to master their sports, challenge and motivate players, and interact with the community while they oversee legal and regulatory compliance. All of these expectations generate extra work while providing minimal additional pay. Accordingly, motivation to adjust to a rapidly changing legal environment may suffer.
The experience of a coach may have a tremendous effect on his or her understanding of these regulatory expectations. While experience does not always translate into knowledge, more experienced coaches are likely to have encountered compliance issues and consequently to comprehend the repercussions of non-compliance. Additionally, coaches with collegiate or professional experience may possess greater understanding of current rules and regulations. Gender issues may also arise in some sports, as female coaches may be newer to the field, fewer in number, and facing acceptance issues related to longevity of service.

The rapidly changing coaching profession requires the acquisition of innovative and heightened skill sets. Coaches must devote themselves to understanding new and emerging interpretations of the laws and regulations governing their fields. In fact, mastery of the most current legal interpretations affects more than their personal livelihood and success: school districts, athletes, and the community can be negatively impacted by failure to comply with regulatory expectations.

The Texas High School Coaches' Association (THSCA) represents a major portion of the coaches in Texas. THSCA (2005) formulated a Code of Ethics (COE) that underlies the values that should be held by coaches in the Lone Star State. The COE emphasizes that a person choosing the coaching profession "… assumes an obligation to conduct himself in accordance with its ideals." The COE additionally states that the coaching profession depends upon the manner in which Texas coaches "… live up to both the letter and the spirit which the code represents" (p. 18). The responsibility of upholding the ideals of the profession clearly rests upon the shoulders of individual coaches. Sportsmanship, integrity, and compliance must not only be practiced; these attributes must also be instilled in participating athletes. The THSCA establishes the expectation that coaches exert tremendous influence for good or bad. Coaches must lead by example and practice "… winning without boasting and losing without bitterness" (p. 19).

THSCA requires that coaches be thoroughly acquainted with the rules of the game. Current official rulebooks for the specified sport should be studied and reviewed. Coaches are expected to set aside time to acquire an understanding of the rules and regulations and to disseminate it to player-athletes and other relevant individuals, such as school district administrators and personnel. THSCA expects coaches to demonstrate respect for, and adherence to, all rules and regulations appropriate to the sport. Intentional exploitation of gaps within the rules is prohibited (p. 20).
While the THSCA has adopted ideals directly related to coaching, the University Interscholastic League (UIL) has classified violations and established procedures and safeguards to foster a system of equity within Texas academic and athletic competition. Chapter 1.F.51 of the UIL Constitution and Contest Rules establishes differing levels of violations, ranging from outright and intentional violations of the UIL standards to more minor oversight issues (UIL, 2006). Negligence with regard to these expectations is a major factor leading to litigation. Cunningham (2001) proposes that established negligence and the size of awards in cases of litigation are directly related. This study further states the importance of knowledge of, and adherence to, the rules and regulations in limiting school and personal liability. Additionally, Cunningham points out that "… if a coach is aware of the duties for which he or she is responsible and has professional training to fulfill those responsibilities, then that coach will have less chance of being sued" (p. 14). Nadeau (1995) indicates that the relationship between the legal rights of the student athlete and the legal responsibility of coaches has become increasingly defined with each new case of sports litigation. Coaches have an established duty of care to ensure the safety of athletes (p. 29). Courts have also imposed a standard of care regarding the safety rules and regulations established for a particular sports activity, the age and competence level of the participants, and the actions or inaction of coaches (p. 34). This implied legal responsibility further highlights the need for adequate and extensive professional development to prepare and support coaches as they practice their chosen profession.

3. Purpose of the Study

This study examined the necessity for additional coursework and/or professional development to equip coaches to oversee athletic programs within the purview of complicated regulatory laws and regulations. The study uniquely highlights the accuracy of the perceptions held by coaches regarding their understanding of legal issues and liabilities relating to their chosen sport. The study also examines the adequacy of existing professional development activities in equipping coaches to navigate the complex environment of regulatory expectations. Failure to address these important issues may expose coaches and their respective school districts to serious and damaging litigation. Adequate professional development may aid compliance with regulatory agencies.

4. Research Methodology

A large East Texas school district was chosen for soliciting responses from identified participants due to the availability of a larger sample size, more diversity in regard to both gender and ethnicity, and a greater range of ages. The geographic location was a result of proximity to the researcher and the ease of obtaining information. The sample obtained may be classified as a convenience sample. Waller and Lumadue (2013) indicate that convenience samples may provide meaningful findings to guide research and practice.
An online survey instrument was created by the researcher. It consisted of several ordinal ranking-scale questions and was e-mailed to the coaches. The survey was designed so that the coaches had to respond to interpretive and ethical considerations specific to UIL rules and regulations. These questions were designed to simultaneously examine the consistency of perceptions and practice (Lumadue & Waller, 2013). SPSS was used to provide descriptive information and to examine Pearson product-moment correlations.

Correlational analysis is deemed appropriate for establishing relationships between variables of interest. Pearson correlations having absolute values greater than 0.70 were deemed to indicate strong relationships; those with absolute values between 0.50 and 0.69, moderate relationships; and those with absolute values between 0.30 and 0.49, weak relationships (Waller & Lumadue, 2013).

Research Questions

The following two research questions guided this study.

1. What are the perceptions of coaches regarding their understanding of the rules and regulations appropriate to their respective sports?

2. Do coaches indicate the need for additional professional development in order to clarify their understanding of the rules and regulations appropriate to their respective sports?

These questions were answered based on interpretation of the values of the Pearson r coefficients.

Research Hypotheses

The following research hypotheses were utilized in support of Research Question 2.

Ho: No relationships exist between or among the responses of coaches regarding their understanding of the rules and regulations appropriate to their respective sports.

Ha: Relationships exist between or among the responses of coaches regarding their understanding of the rules and regulations appropriate to their respective sports.

Delimitations and Limitations

The study was delimited to one large East Texas high school. As with any survey utilizing self-reporting, the study is limited by the accuracy and truthfulness of the responses.

5. Research Findings

Seventeen participants completed this survey.

Research Question 1

The survey reflected several different attributes. The age groups were 18-25, 26-30, 31-35, 36-40, 41-45, 46-50, 51-55, 56-60, and 60+, with corresponding percentages of 11.8%, 17.6%, 5.9%, 11.8%, 5.9%, 17.6%, 11.8%, 11.8%, and 5.9%, respectively. Coaching experience ranged from one to thirty-plus years; this was used to classify respondents as new to coaching (less than 10 years of experience) or experienced (more than 10 years of experience). Thirty-five percent of the sample was considered new and sixty-five percent experienced. The sample also showed 58.8% with only a bachelor's degree and 41.2% with a master's degree, and it was predominantly male (71% versus 29% female). The remaining questions were designed to elicit information on four types of issues: rules, ethics, professional development, and job satisfaction.
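To illustrate the correlational analysis described in the methodology, the following minimal sketch computes a Pearson r between a survey item and coaching experience and maps it onto the strength bands used in this study; the data are hypothetical and the variable names are illustrative.

```python
import numpy as np
from scipy.stats import pearsonr

def interpret_r(r):
    """Strength bands used in this study (absolute value of Pearson's r)."""
    a = abs(r)
    if a >= 0.70:
        return "strong"
    if a >= 0.50:
        return "moderate"
    if a >= 0.30:
        return "weak"
    return "negligible"

# Hypothetical example: correlate survey item scores with years of experience.
item_scores = np.array([3, 4, 2, 5, 4, 1, 3, 5])
experience = np.array([2, 12, 1, 20, 15, 3, 8, 25])
r, p = pearsonr(item_scores, experience)
print(f"r = {r:.3f} ({interpret_r(r)}), p = {p:.3f}")
```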
The first rules question asked, "I would speak out against school board policy that did not allocate adequate resources to athletic programs" (Matthews, 2006). This question split respondents down the middle, with an arithmetic mean of 3.06. Based on this question, 71% of those surveyed had not been properly trained to understand that they have the right to speak out against policy relating to their specific discipline, in this case athletics.

Research Question 2

Review of the Pearson r coefficients indicated several values in the moderate-to-strong relationship range. Accordingly, the null hypothesis, Ho, was rejected in favor of the alternate hypothesis, Ha. Relationships were found to exist between or among the responses of coaches regarding their understanding of the rules and regulations appropriate to their respective sports.

The Pearson's r coefficient for new coaches was .412, compared with .072 for experienced coaches, with gender as the dependent variable. This showed a stronger relationship between gender and this rule for less experienced coaches. Similarly, looking at the Pearson's r for cases involving level of education and this rule, there was little evidence that level of education and this rule had much of a relationship regardless of the experience of the coach: .307 for new coaches and .274 for experienced coaches. The Pearson's r with age as the dependent variable was .054 for new coaches and .076 for experienced coaches. This suggested that education, rather than age or gender, had the closest relationship to the coach's level of experience for this rule. It also indicated that experience and this rule are not related, so the perceptions must be an issue of knowledge of the rule. While other questions on rules were straightforward, one showed coaches either unaware of the rules or in direct violation of them. The item asked whether "I have given over-the-counter medication to athletes" (Matthews, 2006). Of those surveyed, 53% had given athletes over-the-counter medications.

Likewise, the Pearson's r for new coaches was .156, compared with .696 for experienced coaches, with gender as the dependent variable. This showed a stronger relationship between gender and this rule for more experienced coaches. Similarly, looking at the Pearson's r for cases involving level of education and this rule, there was little evidence that level of education and this rule had much of a relationship regardless of the experience of the coach: .07 for new coaches and .02 for experienced coaches. The Pearson's r with age as the dependent variable showed a stronger relationship for new coaches, at .582, than for experienced coaches, at .387. This shows that gender, rather than age or education, had the closest relationship to the coach's level of experience for this rule, and it indicates that gender and experience were directly reflected in determining who would violate this rule.
A serious problem emerged with the issue of professional development. Most participants (69%) were likely or very likely to attend bi-yearly training on rules and laws. They were also likely or very likely to read the UIL Constitution (59%) and to read the rulebook (65%). A slim majority believed that others know the rules better than they do (53%), and a slim majority (53%) would take professional development seriously. It seemed that while the participants realized there was a need for training and were likely to attend, they would not take the training seriously.

On the surface, the analysis suggests that the participants could be classified as very ethical; beneath the surface, the data show differing pictures of how ethical the respondents actually were. When reporting on themselves, 88% would report themselves for a violation of the rules. Yet on the rules-based questions, 53% would give over-the-counter medication to athletes, a violation of which they themselves must be unaware; otherwise, a lower percentage of coaches would have given over-the-counter medication to athletes. The participants would report coaches on their own staff as readily as those from another school (82%). Also disconcerting was that 53% would violate a rule if they saw others violating it. Coupled with the fact that 56% had knowingly or unknowingly violated a rule or law, this showed that while the coaches believed themselves to be ethical, they were in fact not.

6. Conclusions and Recommendations

This study, while limited in scale, did give a snapshot of the expected answers. It showed that most coaches believe themselves to be well prepared and ethical when it comes to rules and sports-related laws. In actuality, where they acted unethically, it was largely because they lacked adequate training to know that they were breaking the law. This study also pointed out that while training is necessary, it is not a solution if it is not run correctly or taken seriously by the coaches. Further questioning would be helpful to determine additional levels of ethical violations or issues involving rules.

If this problem exists in all school districts, then the recommendation would be to find a better way to train coaches. This study should be extended to both the statewide and national levels to determine whether this need exists throughout the United States. If the results at the national level are the same, then there could be an issue with the way coaches are taught, which could lead institutions of higher learning to create coursework to partially eliminate the problem.

This information would, in turn, be useful for the governing bodies that create the rules and laws governing sports. If it were known that coaches are deficient in a particular area, those bodies could make the language clearer and more understandable. They could also create workshops that coaches could use for professional development purposes. This is done to a limited extent in coaches' meetings with officials before the beginning of a season; it needs to be expanded to include the general rules that govern all sports, not just the individual sport.
Further study would need to consider the ethical impact that coaches have in their decision-making processes. While it is believed that education will help eliminate a majority of the violations, are those that occur afterwards an ethical decision by the coach to purposely break the rules or laws? Is this ethical decision conscious or subconscious? While this would create numerous problems for researchers, it would show whether, through education and ethics, coaches could eliminate the majority of rule and law violations.

The result is coaches and districts that are either confused or unaware of all the information they need to know. Compounded with a lack of funding, this creates a huge predicament. The troublesome vacuum that remains shows a need for coaches and districts to have continued professional development with regard to sports rules and laws. While education is needed to alleviate the problem, it is not the complete solution. Further emphasis should also be placed on coaches teaching the sports rules and laws to their players and assistant coaches. This is very similar to what has happened with general education and special education law: school districts, facing the pressure of being sued for improper application of the laws, began implementing training and changes to protect themselves from litigation.

Additionally, colleges offer coursework, some of it integrated into all programs, that trains future teachers and administrators in educational law. The question of whether this additional training has affected outcomes has not received much attention, yet the assumption is that it has eliminated violations by teachers and districts.
Diagnostic value of ultrathin bronchoscopy in peripheral pulmonary lesions: a narrative review

Flexible bronchoscopes are being continuously improved, and an ultrathin bronchoscope with a working channel that allows the use of a radial-type endobronchial ultrasound (EBUS) probe is now available. The ultrathin bronchoscope has good maneuverability for passing through the small bronchi and good accessibility to peripheral lung lesions. This utility is particularly enhanced when it is used with other imaging devices, such as EBUS and navigation devices. Multimodality bronchoscopy using an ultrathin bronchoscope leads to an enhanced diagnostic yield.

Introduction

Accurate diagnosis of small peripheral pulmonary lesions (PPLs) is still challenging (1). Bronchoscopy has been widely used for diagnosis, and instrumental and technical improvements have gradually enhanced the diagnostic yield. As the bronchus branches peripherally, its diameter decreases; standard bronchoscopes with an external diameter of about 5 mm are too large to access the peripheral lung region. Thinner bronchoscopes have the advantage of providing good accessibility to PPLs through small bronchi, so their use in diagnosing PPLs is reasonable. Although no formal definition has been widely accepted, we define an "ultrathin bronchoscope" as having an outer diameter ≤3.5 mm (2). Most conventional ultrathin bronchoscopes are equipped with a working channel with an inner diameter of 1.2 mm, which allows the use of only mini-forceps <1.2 mm in diameter to obtain specimens of a limited size. Therefore, despite their potentially high diagnostic yield, conventional ultrathin bronchoscopes with a 1.2-mm working channel have been regarded as an adjunct, rather than an alternative, to conventional bronchoscopes in PPL diagnosis (3,4).

Conventional bronchoscopy for sampling PPLs has been performed only under fluoroscopic guidance. However, some ancillary techniques, such as navigation, computed tomography (CT), and endobronchial ultrasound (EBUS), have been developed and applied to bronchoscopy. Such guided methods have increased the diagnostic yield of bronchoscopy (5). An ultrathin bronchoscope has good maneuverability when passed through a small-airway route and good accessibility to the peripheral lung, so its utility is enhanced when it is combined with confirmatory tools for use in proximity to target lesions. Several studies have demonstrated the diagnostic utility of ultrathin bronchoscopes in combination with navigation devices (6)(7)(8)(9)(10)(11)(12)(13)(14), CT fluoroscopy (6,15,16), or cone-beam CT (CBCT) (17,18). Furthermore, a next-generation ultrathin bronchoscope equipped with a 1.7-mm working channel, which allows the use of radial-probe EBUS (rEBUS), was developed and is now available for use in clinical practice (19)(20)(21)(22). We present the following article in accordance with the narrative review checklist (available at http://dx.doi.org/10.21037/jtd-2020-abpd-001).

History

The idea of using a thinner bronchoscope is not novel. In the early development of flexible bronchoscopes, Shigeto Ikeda, the father of flexible bronchoscopy, manufactured several prototype bronchoscopes of different sizes, including a 3.3-mm thin bronchoscope (23). A few years later, the diameter of the thinnest prototype bronchoscope was reduced to 2.5 mm (24). In the 1980s, thin bronchoscopes equipped with small working channels were developed, mainly for pediatric use.
The first publication regarding the usefulness of a thin bronchoscope for PPLs in adult patients was reported by Prakash in 1985 (25). He reported three cases of PPLs in adult patients that could not be observed using a 4.9-mm bronchoscope but were successfully observed using a 3.6-mm thin bronchoscope. Various types of smallcaliber bronchoscopes have since been developed, and several studies are available on their usefulness in diagnosing PPLs in adult patients (19,(26)(27)(28)(29)(30). Bronchoscopes with a variety of external diameters and working-channel inner diameters are now available for clinical use ( Figure 1). Techniques Although ultrathin bronchoscopes can be advanced close to PPLs, the localization of the target lesion is performed by fluoroscopy and rEBUS and not by direct bronchoscopic vision; thus, these imaging devices are necessary during ultrathin bronchoscopy. The bronchial route is predicted before procedures by reading a preprocedural high-resolution chest CT scan (31). The anesthetic agents and techniques used are similar to those of standard bronchoscopy. Lidocaine is usually used for topical anesthesia and intravenous midazolam and fentanyl for conscious sedation. Ultrathin bronchoscopy can be performed through either the mouth or the nose. We usually insert a 5.0-mm-inner-diameter tracheal tube transnasally into the trachea. The airway established with the tracheal tube facilitates repeated insertion and removal of the ultrathin bronchoscope, reduces damage from rubbing of the nasal mucosa and vocal cords during bronchoscopy, and reduces deflection of the ultrathin bronchoscope. After examining the endobronchial region, the ultrathin bronchoscope is advanced into the bronchial route, which is indicated by the navigation device on real-time fluoroscopy. The ultrathin bronchoscope approaches the target lesion and is then localized by rEBUS and fluoroscopy. If the tumor surrounding the EBUS probe is visualized on the EBUS image, the EBUS probe is removed and biopsy forceps are advanced through the same route. We usually perform biopsies under fluoroscopic guidance until 10 visible specimens have been obtained. Direct observability Small-caliber bronchoscopes can be advanced into deeper bronchi than large-caliber bronchoscopes ( Figure 2) and, therefore, the possibility of direct observation of a peripheral endobronchial lesion increases with the use of a thin bronchoscope. Rooney et al. reported that 4 of 17 PPLs (24%) that could not be observed using a 6.3-mm bronchoscope could be observed directly using a 3.3-mm bronchoscope (3). Oki et al. reported that a 3.5-mm bronchoscope could reach two more distal generations of bronchi compared to a 5.9-mm bronchoscope, and 14 of 102 lesions (14%) were observed only using the 3.5-mm bronchoscope (28). Diagnostic yields The study results on bronchoscopy using ultrathin bronchoscopes for PPLs are summarized in Table 1. The overall diagnostic yield of ultrathin bronchoscopy is 66%, with a yield of 59% for lesions <2 cm. These yields seem comparable to those of other guided bronchoscopy procedures (5,33). As shown in Table 1, ultrathin bronchoscopes have been used with various guiding methods, including rEBUS, navigation devices, fluoroscopy, and CT fluoroscopy. The diagnostic utility of ultrathin bronchoscopes can be enhanced by combining them with other guiding methods. 
Randomized trials among bronchoscopes of different sizes Several randomized studies comparing diagnostic yields among bronchoscopes of different sizes have been published. Franzen et al. conducted a small pilot study comparing bronchoscopy using a conventional 2.8-mm ultrathin bronchoscope with a 1.2-mm working channel to standard-size bronchoscopes with external diameters of 5.0-6.0 mm for diagnosing PPLs in a region endemic for tuberculosis (32). Forty patients were enrolled and assigned to either ultrathin or standard-size bronchoscope groups, of whom 28% were ultimately diagnosed with tuberculosis. The diagnostic yields in the ultrathin bronchoscope group and the standard-size bronchoscope group were 55% and 80% (P=0.95), respectively. Adverse events, including extensive coughing, a blocked working channel, and arterial hypertension, were more frequent in the ultrathin bronchoscope group. Bronchoscopy times in the ultrathin bronchoscope group and the standard-size bronchoscope group were 31 and 26 min, respectively (P=0.15). These results fail to show the superiority of fluoroscopy-guided bronchoscopy with a conventional ultrathin bronchoscope over a standard-size bronchoscope. Oki et al. conducted a randomized non-inferiority study of rEBUS-guided bronchoscopy using a 3.4-mm bronchoscope compared to rEBUS with a guide sheath (GS)-guided bronchoscopy using a 4.0-mm bronchoscope (30). In total, 203 patients with PPLs with a median diameter of 26 mm were analyzed. The diagnostic yields of bronchoscopy using the 3.4-mm and 4.0-mm bronchoscopes were 65% and 62%, respectively. The difference in diagnostic yield was 3.6%, with a 90% confidence interval from -7.5% to 14.7%. The lower limit of the confidence interval was higher than the predetermined margin of -10%, thus confirming the non-inferiority of the procedure with the 3.4-mm bronchoscope. Later, Oki et al. conducted a multicenter randomized study comparing rEBUS, fluoroscopy, and virtual bronchoscopic navigation (VBN)-guided bronchoscopy using a 3.0-mm ultrathin bronchoscope to rEBUS-GS, fluoroscopy, and VBN-guided bronchoscopy using a 4.0-mm bronchoscope (19). The results in 305 patients with PPLs with a median diameter of 19 mm were analyzed. The histological diagnostic yield of multimodality bronchoscopy using the 3.0-mm ultrathin bronchoscope was significantly higher than that with the 4.0-mm bronchoscope (74% vs. 59%, respectively, P=0.04). The median bronchus level attained using the 3.0-mm-diameter ultrathin bronchoscope was the fifth-generation level, thus more distal than that achieved by the 4.0-mm-diameter bronchoscope (median fourth-generation) and comparable to that of a conventional 2.8-mm ultrathin bronchoscope [median fifth-generation (12)]. Complications, including pneumothorax, bleeding, chest pain, and pneumonia, occurred in 3% and 5% of cases in the respective groups (P=0.6). Oki et al. further performed a randomized study comparing the 3.0-mm ultrathin bronchoscopic method to the 4.0-mm bronchoscopic method, which was modified by adding transbronchial needle aspiration (TBNA) and standard-size biopsy forceps (21). In the 4.0-mm bronchoscope group, TBNA was performed for patients in whom the radial EBUS probe could not be inserted into the target lesion. In addition, the use of 1.5-mm forceps with a GS, standard forceps without a GS, or a combination of the two was permitted in the 4.0-mm bronchoscope group. The results in 356 patients with PPLs with a median diameter of 19 mm were analyzed.
The diagnostic superiority of the 3.0-mm ultrathin bronchoscopic method over the 4.0-mm bronchoscopic method was demonstrated again (70% vs. 59%, respectively, P=0.03). The incidence of complications did not differ between the two groups (3% vs. 5%, respectively, P=0.57). Safety As shown in Table 1, the complication rate related to ultrathin bronchoscopy is approximately 3%, and the occurrence of pneumothorax is 1%, which are rates comparable to those of bronchoscopy using larger bronchoscopes (5). Ultrathin bronchoscopes can reach the visceral pleura in certain cases, so they can damage the visceral pleura directly, which causes pneumothorax. Oki et al. reported that pneumothorax occurred in 6 of 410 patients (1.5%) who underwent transbronchial forceps biopsy using a 2.8-mm ultrathin bronchoscope under fluoroscopy; four cases were related to the forceps biopsy, and the remaining two were caused by the ultrathin bronchoscope itself (34). Limitations of ultrathin bronchoscopes The obvious disadvantage of a thinner bronchoscope is the limitation of available biopsy instruments. The diagnosis of lung cancer includes genotype as well as subtype classifications, so it is necessary to obtain a sufficient amount of tumor tissue for molecular and morphological analyses. Relatively small 1.5-mm forceps must be used when performing bronchoscopic sampling using an ultrathin bronchoscope with a 1.7-mm working channel. The size of the specimens obtained using 1.5-mm forceps is smaller than those obtained with 1.8-or 1.9-mm standard forceps. This issue notwithstanding, the 1.5-mm forceps have been widely used not only during ultrathin bronchoscopy but also for bronchoscopy with EBUS-GS, and many investigators have reported a high diagnostic yield of bronchoscopic biopsy using 1.5-mm forceps (19)(20)(21)(22)(28)(29)(30)(35)(36)(37)(38)(39)(40)(41)(42)(43). Indeed, one study suggested that the size of the biopsy forceps did not affect the diagnostic yield of bronchoscopy (44). In addition, high degrees of concordance of results of genotyping (45), subtyping (46), and programmed deathligand (47) between specimens obtained with 1.5-mm forceps and surgical specimens have been reported. Future perspectives Some promising instruments that can be used during ultrathin bronchoscopy have been developed. Bronchoscopic aspiration needles have recently undergone improvement, and thinner and more flexible needles compared to conventional needles are now available for use in clinical practice (48). A new 21-gauge needle can be used through a 1.7-mm working channel of an ultrathin bronchoscope. Conventional bronchoscopic aspiration needles are stiff, and their steerability and accessibility in the peripheral lung are quite limited (21), while the flexibility of the new needle facilitates TBNA procedures for PPLs (49). The use of TBNA seems to be reasonable in certain cases (e.g., lesions into which rEBUS cannot be inserted), as TBNA can be used to gain access and obtain specimens from peribronchial lesions. The utility of TBNA should be evaluated in terms of efficacy, safety, and cost-effectiveness. Further studies are needed to determine the indications for TBNA. Another promising instrument is the cryoprobe. Cryobiopsy is an effective diagnostic method for PPLs because it provides larger and better-quality specimens (50). An ultrathin 1.1-mm cryoprobe, which is used through the working channel of an ultrathin bronchoscope, has already been adopted in clinics worldwide. 
The ultrathin cryoprobe is flexible enough to access PPLs located past the deepcurved bronchus (51). The use of an ultrathin cryoprobe during ultrathin bronchoscopy may overcome the limitation of a small sample size. Bronchoscope manufacturers have continued efforts to develop thinner bronchoscopes with larger working channels and better visibility. In addition, sampling instruments that can be used through the small working channel of an ultrathin bronchoscope have been developed and improved. These efforts will continue in the future and will enhance the diagnostic yield of ultrathin bronchoscopy for PPLs.
2020-12-31T09:03:39.676Z
2020-12-01T00:00:00.000
{ "year": 2020, "sha1": "9676a80b34ac8824c335fcebac9c37b4d7dc578d", "oa_license": "CCBYNCND", "oa_url": "https://jtd.amegroups.com/article/viewFile/47393/pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "f5288923f9d5eeeb26c52f5459f58a2fbf03eece", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
212954021
pes2o/s2orc
v3-fos-license
Energy-Saving UHMW Polymeric Flow Aids: Catalyst and Polymerization Process Development : Crude oil and refinery products are transported worldwide to meet human energy needs. During transportation via pipeline, huge pumping power is required to overcome the frictional pressure drop and the associated drag along the pipeline. The reduction of both is of great interest to industry and academia. Highly expensive ultrahigh molecular weight (UHMW, MW ≥ a million Dalton) drag reducing polymers (DRPs) are currently used to address this problem. The present paper, therefore, emphasizes particularly the development of a high-performance catalyst system that synthesizes DRPs (using higher alpha-olefins)—a highly promising cost reduction alternative. This homogeneous catalyst system features a new concept that uses a cost-effective titanium-based Ziegler–Natta precatalyst and a cocatalyst • Lewis base complex having both steric hindrance (around N heteroatom) and electronic effect. This novel work, which involves precatalyst–cocatalyst molecular separation and cocatalyst • monophenyl amine association-dissociation phenomena, already generated several US patents. The subject catalyst prepares UHMW DRPs at room temperature, avoiding the use of zero and sub-zero temperatures. The resulting product almost tripled the rate of transportation of a selected grade of refinery product and saved about 50% pumping energy at ppm level pipeline concentration. It is also very easily soluble. Hence, massive modification of existing pipeline will be unnecessary. This will save additional infrastructure cost. This paper also summarizes challenges facing the development of improved heterogeneous catalysts, dispersed polymerization process, molecular simulation-based DRP product formulation, and model/theory of turbulent mixing and dispersion in the transportation pipeline setting. precatalyst–cocatalyst–donor Introduction Crude oil and refinery products are transported using pipelines. Figure 1 shows the very high production level of global 2018 crude oil and refined product productions [1,2]. During transportation, the pressure drastically drops due to interfacial friction between the pipe wall and the flowing fluid. Consequently, the desired throughput cannot be maintained. Increasing the input pressure and introducing higher flow rates seem to be quick and facile solutions. However, design limitations on pipelines limit the level of pressure that can be applied. The pressure further drops when fluids are transported over long distances. Consequently, the equipment and operational costs increase. Therefore, the efficient transport of crude oils and refinery products is a great challenge [3][4][5][6][7]. To overcome this challenge, highly expensive ultrahigh molecular weight (UHMW, MW ≥ a million Dalton) drag reducing polymers (DRPs), which are commercially available, are commonly used. The DRP literature can be mainly divided into journal publications and patents. The first publication originated from Toms in 1949 [8]. He reported in the First International Rheology Congress that long-chain polymethyl methacrylate (PMMA) reduced the single-phase turbulent wall friction by 80%. This increased the fluid flow rate at constant pressure gradient. Since that time to the present, DRP-mediated fluid flow remains a very active research area. Consequently, DRPs are currently used to transport fluid in very long-distance intra-and trans-country pipe lines [9][10][11][12][13][14]. 
The subsequent journal publications mostly report on the hydrodynamic theories of drag reduction (DR) and the effects of DRP MW and stability, rheology, multiphase flow, hydrodynamic variables, conduit configuration, surfactant-based product formulation, etc. on the performance of commercial DRPs [14][15][16][17][18][19][20][21][22][23]. These references and the citations therein can be pursued for the details. Commercial DRPs are made catalytically, mostly by polymerizing linear alpha-olefins. Therefore, we review catalyst development research in this area as follows. The DRP catalyst research is mostly covered by patents. The central concept comprises using the second generation α-TiCl3 Ziegler-Natta (Z-N) precatalyst with an organoaluminum cocatalyst [24,25]. Note that β-TiCl3 only polymerizes diolefins but not mono-olefins, such as linear alpha-olefins [26]. Figure 2 shows the crystal structures of both forms of TiCl3. All the patents modified the above basic catalyst system by introducing a Lewis base (LB) electron donor to block the active centers that catalyze the formation of undesirable low molecular weight poly(alpha-olefin) backbones. The referenced LBs can be divided into Type I LBs and Type II LBs. Type I LBs are non-phosphorus compounds and Type II LBs are phosphorus compounds. Type I includes ether [28], ketone [29,30], pyridine and piperidine [31], etc. Type II lists phosphine, phosphite, etc. [32]. Additionally, a third family of patents uses aluminoxanes alone, or combined with Z-N alkyl aluminum cocatalysts [33][34][35][36][37]. All these patents showed variation in components and composition. The resulting poly(alpha-olefin)s were mostly characterized by measuring the standard solution viscosity, which was used to confirm the formation of UHMW product. The cited patents show the following shortcomings: low catalyst activity; a requirement for polymerization at 0 °C and below with very long polymerization times (≥12 h) to synthesize the DRP; low drag reduction (∼13.4%); and the use of hazardous catalyst adjuvant(s) and costly, gel-prone aluminoxane cocatalysts. The patents also did not report any catalyst characterization work, fundamentals, or insight. Therefore, ample room exists to develop a high-performance catalyst and DRP polymerization process. Note that a high-activity catalyst is a prerequisite for developing a cost-effective commercial DRP process because it increases yield and production volume. This is why the present study particularly addresses how to enhance DRP catalyst activity. Present Patented Catalyst Design Concept and Composition The catalytic synthesis of UHMW DRP has two requirements. One is kinetic and the other is thermodynamic. The kinetic requirement demands that the rate of chain propagation Rp should, in principle, be exceedingly large. On the contrary, the rate of chain termination Rt (originating from various active chain transfer reactions) should ideally approach zero. The thermodynamic requirement, in terms of activation energy, is just the opposite. See Figure 3. An embedded challenge in this context is the following. In organometallic-catalyzed olefin polymerization, the catalyst activity and the resulting MW are inversely related. Striking a balance between the two poses a challenge. Our patented catalyst design concept attempts to meet the above requirements and overcome this challenge [23-25]. The above catalyst design concept is summarized here.
To meet the Figure 3 kinetic and thermodynamic requirements, we proposed to use (nucleophilic) Lewis base electron donors that featured the following [38][39][40]: (i) they simultaneously show steric hindrance and an electronic effect, where the steric hindrance is achieved by bulky substituents around the heteroatom, which reduce the access of the related base functionality to the Lewis acid alkyl aluminum cocatalyst; and (ii) the electronic effect is obtained by placing electron-withdrawing substituents on the heteroatom to reduce its electron density. Consequently, the (electrophilic) Lewis acid alkyl aluminum cocatalyst•Lewis base complex presumably dissociates (in a nonpolar medium) under the DRP synthesis conditions, and the catalyst productivity therefore increases. The introduction of this dissociation phenomenon and the control of precatalyst-cocatalyst molecular separation are the key to the subject novel catalyst design. Aromatic substituents are especially useful because they are relatively inert toward other catalyst components. Our patents illustrate the above overall catalyst constitutive coordination/complexation and precatalyst-cocatalyst molecular separation concept and structure, first by forming an alkylaluminum cocatalyst•Lewis base (LB) complex, and next by contacting this complex with the transition metal precatalyst to make the final catalyst [23][24][25]. A preferred example of the above LB is a tertiary monophenyl amine, belonging to the group comprising N,N-diethylaniline, N-ethyl-N-methyl-para-tolylamine, N,N-dipropylaniline, N,N-diethylmethylamine, and combinations thereof. Figure 4 shows the reference versus our patented catalyst structure, and compares the productivity of a given example of the former with that of the latter. The patented catalyst formulation PatCat, prepared as per the novel concept reported in Section 2.1, proves to be 1.4 times more productive than the reference catalyst RefCat. This can be explained based on the following postulations. First, the steric hindrance, due to N,N-diethylaniline, widens the precatalyst-cocatalyst molecular separation. The larger this separation, the lower the energy barrier associated with each copolymerization insertion and propagation step. This situation is analogous to ion-pair separation that (i) concerns metallocenium cation and methylaluminoxane or borate anion, and (ii) applies to olefin polymerization [41,42]. Second, the electron-withdrawing phenyl group decreases the electron density on N, which makes the DEALC (diethylaluminum chloride)•N,N-diethylaniline complex prone to dissociate. As a result, access of the monomer (preferably 1-hexene because of its smaller size) to the vacant coordination site, its insertion into the growing copolymer chains, and chain growth all increase in concert. See Scheme 1. Therefore, PatCat showed higher productivity than RefCat, and the resulting copolymer chain length was also enhanced. Note that increased catalyst productivity increases production volume, which consequently reduces production cost. Therefore, our patented catalyst adds value to DRP process development and economics. Scheme 1. Insertion, propagation, and chain growth in 1-hexene-1-dodecene copolymerization mediated by PatCat. Figure 5 shows the effects of cocatalyst formulation type and of DRP MW and microstructure (from a qualitative perspective) on DR%. By DRP microstructure, we mean the monomer/comonomer sequence and side chain distribution in the copolymer backbone. The important findings are summarized below.
First, compare the DR% of PatCat DRP with that of RefCat DRP. The former approaches that of the latter even for a much lower UHMW value. The MW of PatCat DRP is 1.70 million Dalton while that of RefCat DRP is 2.37 million Dalton. Scheme 2 illustrates the related major β-hydride chain transfer reaction that limits the 1-hexene-1-dodecene copolymer chain length. The overall electronic environment and the steric hindrance around Ti affect the suppression of the above chain transfer reaction. This means that the patented catalyst is capable of regulating the microstructure of the 1-hexene-1-dodecene copolymer in such a manner that a commensurate DR% can also be achieved using a lower UHMW value DRP. An increased UHMW value has been reported as a prerequisite to improving DR% [5,8,9,12,13,15]. This work does not support that argument. To the best of our knowledge, such a finding has not been reported in the literature. The above finding has significant scientific value. This will motivate the concerned researchers to revisit the current mechanism of turbulent drag reduction and DRP rheology, which will set new directions for fundamental and applied research in this important subject. The 45% DR of the PatCat DRP signifies the following. The resulting product almost triples the rate of transportation of a selected grade of refinery product and saves about 50% pumping energy at ppm level pipeline concentration. The flow rate increase (FRI) was calculated using the following relation [26]: FRI (%) = [(100/(100 − %DR))^0.55 − 1] × 100. (1) Additionally, the use of increased UHMW DRPs has the following practical disadvantages. They dissolve less readily in hydrocarbon fluid than the lower UHMW value DRPs. The solution viscosity also increases with MW. Therefore, the injection procedure becomes cumbersome and more energy consuming. Therefore, we show how these disadvantages can be overcome through designing the catalyst in an appropriately novel fashion. Second, evaluate the DR% performance of PatCat DRP against that of CP LP100 and SP PIB. CP LP100 is a poly(1-decene)-based commercial DRP formulation whereas SP PIB is a UHMW poly(isobutene) from Scientific Polymer Products Inc. Both products are homopolymers. This evaluation establishes that the as-synthesized PatCat DRP, despite having a lower UHMW, shows comparable or higher DR% without making a DRP formulation. This is another manifestation of the merits of our patented catalyst and the UHMW 1-hexene-1-dodecene DRP it synthesized. This improved PatCat DRP performance may be attributed to the important role that particularly the longer and bulkier n-decyl branch of the above 1-hexene-1-dodecene copolymer plays, under such a situation, in the following areas [5,8,9,12,13,15]: The present catalyst-mediated copolymerization route is apt to shift the DRP synthesis process from 0 °C and below to room temperature and above. Materials and Methods All the procedures related to catalyst formulation, DRP synthesis, and drag-reducing (DR) performance evaluation are detailed in our patents [38][39][40]. Therefore, they are only summarized here (as appropriate). A local kerosene was used as the transportation fluid. Toluene was demoisturized by contacting it overnight with a 4A molecular sieve, activated at 230 °C. The toluene Schlenk flask (containing the molecular sieve) was shaken from time to time, evacuated, and purged with argon until the moisture level dropped to ≤10 ppm.
These DRPs were synthesized using a computer-aided AP Miniplant polymerization reactor equipped with a 1 L Büchi glass reactor (Flawil, Switzerland). First, the reactor was thoroughly cleaned and baked at 120 °C for 2 h; then, it was cooled to room temperature. The required amount of toluene was added to the reactor by a syringe. Of 1.0 M triisobutyl aluminum (TIBA) 1.0 mL was added to toluene as the scavenger. The precatalyst solution and the cocatalyst formulation were prepared in dried toluene inside a highly inert glove box. All additions and manipulations were done under inert argon environment inside the reactor. In the RefCat [9,10], the DEALC:(TiCl3•1/3AlCl3) ≅ 2-4 while in the PatCat, an equimolar N,N-diethylaniline was premixed with DEALC. The copolymerization of 1-hexene with 1-dodecene was conducted at equimolar ratio at 20 °C for 5 h. The reaction was quenched using acidic methanol. The copolymer concentration in the reaction mixture was determined by drying a given volume of it at 40 °C in a vacuum oven. A Polymer Laboratory gel permeation chromatography (GPC) instrument (Salop, UK) measured the average molecular weight of the as-synthesized DRPs. The column temperature was set at 135 °C. The DRP sample (about 1.0 mg), taken in a 1 mL vial, was dissolved in 1.0 mL butylated hydroxy toluene BHT-stabilized 1,2,4 trichlorobenzene (TCB) as follows. The vial was shaken first in a regular hot-plate shaker, then in the warming compartment of the GPC instrument-both at 135 °C-for about 5 h to completely dissolve the sample. Before injecting a sample, the differential refractive index (DRI) detector was purged for 4 h using TCB (1 mL/min) to obtain stable baseline. The inlet pressure (IP) and the differential pressure (DP) outputs were also purged for 1 h. The above flow rate of TCB was used, and each sample was analyzed for 35 min. The instrument was calibrated using nine polystyrene standards whose peak molecular weights ranged from 1530 to 15 million Dalton. The MWs of the as-synthesized DRPs were determined using this calibration curve, the corresponding chromatograms, and Polymer Laboratory GPC software. where Tg1 and Tg2 are the glass transition temperatures of poly(1-hexene) and poly(1-dodecene), respectively. MW1 is the molecular weight of 1-hexene (84.16 Dalton) whereas MW2 is that of 1dodecene (170.33 Dalton). The molecular size of 1-dodecene is twice that of 1-hexene. Therefore, the insertion of 1-dodecene in the growing copolymer chain will be restricted due to diffusion limitation. A maximum of 5 mole% average 1-dodecene incorporation will be a reasonable assumption. Hence, the Tg of the assynthesized 1-hexene-1-dodecene copolymers are expected to be limited as −52.00 °C ≤ Tg ≤ −48.60 °C. The DR performance of the experimental α-olefin copolymers was evaluated, using the above test facility, as follows. A diaphragm pump injected the concentrated DRP master solution into the pipeline transportation liquid (a local kerosene). The DRP was carefully introduced after the oil pump to avoid polymer degradation (by the rotating pump impeller) and achieved good mixing before entering the pressure measurement section. A Vortex Flowmeter (DN15 Sandwich) measured the volumetric flow rate of the kerosene up to 85 L/min. A check valve, placed after the flow meter, prevented the liquid back flow. On the other hand, a needle flow control valve, installed before the flowmeter, adjusted the flowrate. 
A built-in data acquisition system recorded the pressure drops (at 1.5 m and 1.0 m length along the flow pipe), the temperature, and the corresponding volumetric flowrate. Before DRP performance evaluation, the test loop was calibrated to evaluate system reliability, using a selected grade of kerosene. Figure 8 shows the related calibration curve that plots the pressure drop per unit length (ΔP/L) as a function of the Reynolds number (Re). It has two parts: a linear laminar region for Re < 2000 and a nonlinear turbulent region for Re > 3000. The linear laminar flow behavior confirms the accuracy of the pressure transducer at very low flowrate. However, it does not apply to evaluating the performance of the DRP, which reduces the frictional drag only in the turbulent flow region. ΔP/L was calculated assuming a fully developed, steady-state Hagen-Poiseuille flow in the loop pipe and using Equation (6) [48], ΔP/L = fρV^2/(2D), where f is the friction factor; D is the tube inner diameter (ID); ρ is the fluid density; and V is the average fluid velocity in the pipe. Equations (7) and (8) give the corresponding friction factor correlations, where ks is the surface roughness of the stainless steel pipe, which was provided by the manufacturer (AP Miniplant, Germany). Equation (9) defines Re as Re = ρVD/μ, where μ is the dynamic viscosity of the experimental liquid. The thermocouple, placed at the tube discharge and connected to the data acquisition system, measured the corresponding temperature. The dynamic viscosity at this temperature was used to calculate Re. Figure 9 shows the calibration curves without and with a polyisobutene (PIB) DRP. In both figures, the experimental (measured) and calculated ΔP/Ls (estimated using Equations (6)-(9)) matched very well. The dosage pump was calibrated from the lowest to the maximum range before the experiment. The DRP performance was evaluated at 10 L/min, which gives an average velocity of about 2.1 m/s (6.8 ft/s) in the pipe and Re = 18,850. This corresponds to highly turbulent flow, typical of a fluid transportation pipe. The DRP flow rate was gradually increased by applying the dosage pump. The concentration of the experimental DRP in the pipeline was calculated from the master solution concentration and the ratio of the dosage pump flow rate to the pipeline flow rate. The %DR was determined using the following pressure drop relation [50][51][52]: % Drag reduction = 100 × (ΔP_PF − ΔP_RDF)/ΔP_PF, where ΔP_RDF and ΔP_PF are the pressure drops (corresponding to the same flow rate) over a given length of pipe for the experimental fluid with and without the DRP, respectively. Future Challenges and Research Niches This section addresses future challenges and research niches in DRP catalyst and process development, surfactant-based product formulation development, and turbulent mixing and dispersion models in transportation pipeline and DRP injection. These are categorically summarized below. DRP Catalyst and Process Development The developments of high-performance DRP catalysts and processes are inter-related. The current process and catalyst mostly synthesize a single-phase solution product. They require a very low temperature and lengthy polymerization time. An undesirable rod-climbing effect occurs with the progression of polymerization due to stirring, as viscosity develops and the normal stress exceeds the tangential stress. This adversely affects macro- and micromixing of the highly viscous reaction mixture. Consequently, consistency in product quality deteriorates. The resulting product dissolves in the alpha-olefin comonomers and the solvent used.
This prevents achieving in-situ particulate morphology (during polymerization), which is a major catalyst-cum-process development challenge. Such an as-synthesized DRP morphology can make a significantly improved product formulation. Comprehensive investigation of precatalyst-cocatalyst-donor complexation and dissociation behavior, precatalyst-cocatalyst molecular separation, and product microstructural characterization can further elucidate the catalyst and DRP process performance. These subjects have unfortunately remained either unexamined or unpublished. Surfactant-Based DRP Formulation Development The UHMW DRP, dissolved in a suitable solvent, is a highly viscous product. The product development challenges include the following: achieving a viscosification-free, pumpable, and stable slurry; a tendency to aggregate, gel, and layer with the passage of time; phase separation instability; transportation inconvenience; inadequate DRP dissolution within the turbulent mixing length; and difficulty in achieving very high extensional viscosity. The currently used cryogenic grinding process, a prerequisite to DRP formulation, is very cumbersome and costly. It requires maintaining the storage and transportation temperature above a certain critical value. Heated injectors also often have to be used to introduce the DRP into the transportation line. Dipropylene glycol monomethyl ether (DPGME) is widely used. It oxidizes readily in air to form unstable peroxides that may explode spontaneously. Most seriously, cryogenic grinding destroys the UHMW polymer backbones. Consequently, the DRP performance significantly decreases. To overcome the above problems, the DRP must be transformed into a formulated dispersion/emulsion that should comprise the following components. The DRP, devoid of the O-H bond, itself makes up the dispersed nonpolar phase. This is, in turn, mixed with a liquid carrier (LC) and dispersion aids. The LC is polar (having an O-H bond), and it does not dissolve the DRP but may swell it. It consists of several mutually miscible constituents such as alcohol, glycol, and glycol ether, all of which contain the O-H bond. The glycol and glycol ether may act as co-dispersants, too. The dispersion aids include a partitioning agent, wetting agent, dispersant, and nonionic surfactant. Depending on the physical form, chemical structure, and temperature, the dispersion aids may be miscible to soluble in the polar liquid carrier. Therefore, the overall product formulation, under such a situation, is a nonpolar-polar (O/W) emulsion/dispersion. As per the above summary, the interaction among a selected nonpolar DRP, the polar liquid carrier, and the dispersion aids must be evaluated as a function of composition and temperature. The interaction between a liquid carrier and a dispersion aid can be a subset of this study. In particular, the following should be determined or modeled [53]: Turbulent Mixing and Dispersion Models in Transportation Pipeline and DRP Injection The DRP pipeline injection problems include the following: inconsistent pumping and feeding; inadequate dispersion and mixing under turbulent fluid flow; and turbulence- and shear-induced mechanical degradation. Models that can illustrate particularly the inter-phase interaction, dispersion, and mixing of the DRP with the fluid of interest under turbulent flow dynamics are fairly inadequate [54,55]. Research in this area can significantly improve DRP injection methodology.
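As a concrete companion to the test-loop relations above (Equations (6)-(9) and the %DR definition), the short sketch below reproduces the reported operating point (10 L/min, an average velocity of about 2.1 m/s, Re = 18,850) and evaluates %DR and the implied flow rate increase from a pair of pressure drops. The kerosene density, the smooth-pipe Blasius friction factor, the illustrative with-DRP pressure drop, and the 0.55 exponent in the flow-increase relation are all assumptions made only for illustration; they are not taken from the paper.

```python
import math

# Reported operating point (Materials and Methods)
Q = 10.0 / 1000 / 60     # volumetric flow rate, m^3/s (10 L/min)
V = 2.1                  # average velocity in the pipe, m/s
Re = 18_850              # reported Reynolds number
rho = 800.0              # kerosene density, kg/m^3 (assumed typical value)

# Pipe bore implied by Q and V, and dynamic viscosity implied by Re
D = math.sqrt(4.0 * Q / (math.pi * V))   # ~0.010 m
mu = rho * V * D / Re                    # ~0.9 mPa.s

# Smooth-pipe Blasius friction factor (assumption; the paper's Equations (7)-(8)
# use the manufacturer-supplied roughness ks, which is not reported here)
f = 0.316 * Re ** -0.25

# Baseline pressure drop per unit length, Darcy-Weisbach form of Equation (6)
dP_PF = f * rho * V ** 2 / (2.0 * D)     # Pa/m, without DRP

# Hypothetical with-DRP pressure drop chosen to give ~45% drag reduction
dP_RDF = 0.55 * dP_PF
DR = 100.0 * (dP_PF - dP_RDF) / dP_PF

# Flow rate increase at the same pressure gradient, using the commonly cited
# Blasius-based relation; the exponent 0.55 is an assumption, since the exact
# form of the paper's Equation (1) is not reproduced here
FRI = ((100.0 / (100.0 - DR)) ** 0.55 - 1.0) * 100.0

print(f"D = {D * 1000:.1f} mm, mu = {mu * 1000:.2f} mPa.s, f = {f:.4f}")
print(f"dP/L without DRP = {dP_PF / 1000:.2f} kPa/m")
print(f"%DR = {DR:.1f}, flow rate increase = {FRI:.1f}%")
```

Under these assumptions the implied pipe bore is roughly 10 mm and the kerosene viscosity roughly 0.9 mPa·s, both consistent with the reported velocity and Reynolds number.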
Conclusions Crude oil and refinery products are transported worldwide to meet human energy needs. During transportation, the fluid pressure drastically drops in the pipeline due to interfacial friction between the pipe wall and the flowing fluid. Highly expensive ultrahigh molecular weight (UHMW, MW ≥ a million Dalton) drag reducing polymers (DRPs) are currently used to address this problem. DRPs are synthesized using linear alpha-olefins and second generation Ziegler-Natta catalysts. A high performance catalyst can develop a cost-effective commercial DRP process. This is why the present study particularly addresses how to enhance DRP catalyst activity. The novel concept underlying the present catalyst system, which involves precatalyst-cocatalyst molecular separation and cocatalyst•monophenyl amine association-dissociation phenomena, has been patented, schematically illustrated, and experimented. The premier contributions of this catalyst system are as follows. Its activity was 1.4 times that of the reference catalyst. The resulting 1-hexene-1-dodecene DRP copolymer almost tripled the rate of transportation of a selected grade of refinery product and saved about 50% pumping energy at ppm level pipeline concentration. This catalyst also regulated the copolymer microstructure in such a manner that a lower MW value UHMW product could also achieve commensurate DR%. This finding will set new directions in drag reduction and DRP rheology research. The current catalytic copolymerization shows promise to shift the DRP synthesis process from 0 °C and below to room temperature and above. Future research challenges and opportunities in overall DRP research have been outlined. Perspectives The background and importance of this research well aligned with a key part of Saudi Arabia's Vision 2030-conversion of local raw materials into value-added products. Saudi Aramco at present imports a huge volume of highly expensive DRPs for transporting crude oil and refinery products despite several basic materials being locally available. Almuttahida (Jubail United Petrochemical Company) produces the required alpha-olefins and Saudi Organometallic Chemicals Company (SOCC Al-Jubail) makes the related alkyl aluminum cocatalyst. Hence, several compelling success factors are already available. To develop, therefore, an indigenous DRP technology using Almuttahida alpha-olefins and SOCC cocatalyst is appropriate. We envisioned to build a world-class DRP research program at KFUPM (to be led by the Center for Refining and Petrochemicals). This will eventually serve a great national cause. We anticipated Saudi Aramco would continue to play a critical collaborative role in this regard. It may be noted that KFUPM and Saudi Aramco (currently holding 70% of SABIC share) are neighbors. Both are under the Saudi Ministry of Energy, and DRP research concerns energy conservation. Funding: We highly appreciate Saudi Aramco for supporting this research through PN CRP02253 under University Collaboration Program (UCP). The donation of selected higher alpha-olefins (1-hexene and 1dodecene) by Almuttahida SABIC is also thankfully acknowledged.
2019-12-05T09:30:01.727Z
2019-11-28T00:00:00.000
{ "year": 2019, "sha1": "41c5c6f31fbd843a84ecba784d10111b86e9dc56", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2073-4344/9/12/1002/pdf", "oa_status": "GOLD", "pdf_src": "Adhoc", "pdf_hash": "be1512d57b813736aac7dc405bd533a49c68ac82", "s2fieldsofstudy": [ "Chemistry" ], "extfieldsofstudy": [ "Materials Science" ] }
3353133
pes2o/s2orc
v3-fos-license
Subgingival prevalence rate of enteric rods in subjects with periodontal health and disease Background: The prevalence of enteric rods and their association with chronic periodontitis has gained prominence recently. Although the prevalence of these organisms from the subgingival plaque sample was reported in the literature, the carriage rate of these rods in our population is lacking. The present study was undertaken to know the carriage rate of enteric rods from our population in patients with periodontal health and disease. Materials and Methods: Eighty‐four systemically healthy participants, inclusive of 46 males and 38 females, were selected for the study. The selected participants were subjected to a periodontal examination and were categorized into chronic periodontitis and healthy group. Subgingival plaque samples were taken from all the participants, plated onto McConkey agar plates, and incubated overnight at 37°C to check for the growth of organisms. The grown organisms were then cultured according to the standard procedures. Results: Prevalence of 71% and 83% of enteric rods in subjects with periodontal health and disease, respectively, was found in our study which was not statistically significant. Conclusion: Although no significant differences exist in the prevalence of enteric rods between healthy and patients with chronic periodontitis, the prevalence rate of enteric rods in subgingival plaque samples is considerably high in our population. INTRODUCTION E nteric rods are a common term applied to a family of Gram-negative, facultatively anaerobic, nonsporing, nonacid-fast, straight rods inhabiting the intestine of humans. The members of the family are characterized by the production of an array of virulence factors, resulting in life-threatening infections such as septicemia, lower respiratory and urinary tract infections, and being resistant to commonly used antibiotics. The term enteric rods can be mainly applied to two families, namely Enterobacteriaceae and Pseudomonadaceae. Clinically significant members include the organisms belonging to the genera Escherichia, Enterobacter, Salmonella, Shigella, Citrobacter, Pseudomonas, Serratia, Hafnia, Proteus, Morganella, Yersinia, Providencia, and Edwardsiella. [1] The colonization of oral cavity with enteric rods such as Acinetobacter baumanii and Pseudomonas aeruginosa has gained importance recently. [2,3] However, the exact role played by these organisms in the pathogenesis of periodontal disease and whether they are transient colonizers [4] or part of the subgingival flora having a harmonious relationship with the periodontal microbes is unclear. [2,3] These organisms are not unique to subgingival flora, as they can also be seen in other ecological niches found in the oral cavity such as tongue and tonsils. [5,6] The distribution of these microorganisms in the subgingival dental plaque varies from one population to another depending on a multitude of factors such as geographic area, diet, and hygienic practices employed by the individuals. [7][8][9] The enteric rods gained prominence recently due to their ability to elaborate virulence factors [10][11][12][13][14] implicated in causing nosocomial infections, capable of causing tissue destruction, [11] and exhibiting a high synergistic relationship between periodontopathic organisms. [2,3,14] Studies have been done involving different population groups worldwide to know the prevalence of enteric rods. A study by Slots et al. 
[8] has shown the prevalence of these rods in an American population to be 14%, whereas the prevalence in Germany was found to be 13.5%. [15] In a Latin-American population, [16] a prevalence of 34% was seen whereas 27.9% was the prevalence in a Chinese population. [17] The above results denote that the prevalence is not uniform and varies with the population examined, the economic status of the country, the sampling technique employed, and the condition of the periodontal apparatus. It can be seen that the data regarding the prevalence of enteric rods in our population are lacking. Hence, the primary objective of the present study is to determine the oral carriage rate of enteric rods in our population, and the secondary objective is to determine whether there exists any difference in the prevalence rate of these organisms between individuals with a healthy periodontium and those with chronic periodontitis. Subject selection This case-control study was done from July 2013 to August 2014 to find out the prevalence of enteric rods. All the participants reporting to the outpatient department were screened, and the participants who met the inclusion criteria were allowed to participate in the study. The study was performed according to the Declaration of Helsinki, as revised in 2000, and was approved by the Institutional Ethical Committee (IEC/TDCH/030/2014). Eighty-four individuals, inclusive of 46 males and 38 females, were then selected to participate in the study. The study was well explained to all the participants in their regional language, and a written consent form was obtained from all the participants willing to participate in the study. The inclusion criteria for this study were participants of either gender with the presence of at least twenty permanent teeth, excluding third molars. Exclusion criteria included smokers, pregnant women, a history of any systemic disease, and a history of antibiotic usage or periodontal treatment within the past 6 months. Clinical examination All the selected individuals were subjected to periodontal screening by a single examiner (ATR), which involved recording of probing pocket depth and attachment level (AL). The oral hygiene status, gingival health, and plaque scores were assessed by taking the Oral Hygiene Index-Simplified, [18] the Silness and Löe gingival index, [19] and the plaque index (Turesky-Gilmore-Glickman modification of the Quigley-Hein plaque index), [20] respectively. Periodontal pocket depths (PDs) were measured, using a Williams periodontal probe, as the distance from the gingival margin to the base of the pocket. PD for each tooth was measured on all six surfaces (mesio- and distobuccal, mesio- and distolingual, midbuccal, and midlingual). These were then averaged to obtain a mean PD. AL was measured as the distance from the cemento-enamel junction to the base of the sulcus using a periodontal probe. When the gingival margin was located either coronal or apical to the cemento-enamel junction, the distance between the cemento-enamel junction and the gingival margin was subtracted from or added to the pocket depth, respectively. PD and clinical AL (CAL) were assessed using a probe with Williams markings in both arches. All six sites per tooth were assessed, and the measurements were recorded to the nearest millimeter. Intraoral periapical radiographs were taken for all the patients with chronic periodontitis. After examination, the individuals were divided into healthy controls (n = 42) and patients with chronic periodontitis (n = 42).
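As a small numerical illustration of the attachment-level bookkeeping described above, the sketch below computes the clinical attachment level from a probing depth and the position of the gingival margin relative to the cemento-enamel junction; the input values are hypothetical and do not come from the study data.

```python
def clinical_attachment_level(probing_depth_mm, cej_to_margin_mm, margin_coronal_to_cej):
    """Distance from the cemento-enamel junction (CEJ) to the base of the pocket.

    When the gingival margin lies coronal to the CEJ, the margin-to-CEJ distance
    is subtracted from the probing depth; when it lies apical to the CEJ
    (recession), the distance is added.
    """
    if margin_coronal_to_cej:
        return probing_depth_mm - cej_to_margin_mm
    return probing_depth_mm + cej_to_margin_mm


# Hypothetical sites: a 5 mm pocket with the margin 1 mm coronal to the CEJ,
# and a 5 mm pocket with 2 mm of recession
print(clinical_attachment_level(5, 1, margin_coronal_to_cej=True))   # 4 mm
print(clinical_attachment_level(5, 2, margin_coronal_to_cej=False))  # 7 mm
```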
Participants were considered to be healthy if the probing depth is <3 mm with no evidence of attachment loss and bleeding on probing. The diagnosis of chronic periodontitis was made based on the criteria defined by the American Association of Periodontology. [21] Sample collection All the selected individuals were subjected to plaque sample collection. Subgingival plaque samples were collected from pockets measuring 5 mm or more in patients with periodontitis. In patients with periodontal health, samples were collected from an incisor, premolar, and molar from all the quadrants. The plaque samples were collected with a sterile curette after removing the supragingival plaque and isolating the area with cotton pellets, in a test tube containing peptone water, labeled properly, and transported immediately to the Microbiology laboratory, Tagore Medical College for further processing. The samples were processed immediately on arrival. One loopful (0.04 mm in diameter) of the specimen was inoculated onto MacConkey agar plates for the growth of enteric rods. The plates were incubated aerobically at 37°C for 1-2 days. Organisms, if grown, were Gram stained and characterized according to the colonial morphology. They were then speciated according to standard biochemical tests. [22] Species found on the MacConkey agar were enumerated as counts ×10 5 . Statistical analysis A statistical program SPSS version 22 (SPSS,Inc., Chicago, IL, USA). was used for all the statistical analyses. Descriptive statistics were calculated for all the variables. Kolmogorov-Smirnov test was applied to assess the goodness of fit to normal distribution. Unpaired Student t-test was used to find the differences in age and the clinical variables between the groups; Chi-square test was used to find an association between the presence or absence of enteric rods and periodontal parameters such as PD and CAL. Statistical significance was set at P < 0.05. RESULTS A total of 84 participants participated in the study, divided into healthy controls (n = 42) and subjects with chronic periodontitis (n = 42). The demographic details of the study population are shown in Table 1, and it can be seen that there was no significant gender difference between the groups. There was a statistically significant difference between the groups in all the clinical parameters studied. The participants in the control group have a probing depth of <3 mm, and a moderate grade of periodontitis was observed in periodontitis group. Smokers were totally excluded in both the groups. Table 2 summarizes the distribution of enteric rods in the study and control group. A total of 66 strains were isolated from the study population and four participants harbored two strains each. Four different genera were isolated from the periodontitis group and three genera were isolated from the healthy group. Enterobacter spp. was found only in the periodontitis group. The carriage rate of Gram-negative rods in the control and periodontitis group was 71.4% and 83.3%, respectively, and the total prevalence of these rods was found to be 77%. When the prevalence of enteric rods was compared between the groups, no statistically significant differences existed. Similarly, no statistically significant differences were seen for any of the organism, except Enterobacter spp. when the groups were compared. Table 3 summarizes a correlation between the age groups and the presence of enteric rods. 
No association can be seen between the presence of enteric rods and the age intervals studied. The association between PD and the presence of enteric rods, tested using the Chi-square test, is shown in Table 4. The healthy group had a pocket depth of <3 mm (mean of 0.96 ± 0.17) and the chronic periodontitis group had more than 3 mm (mean of 3.75 ± 0.79). No statistically significant differences exist with respect to the prevalence of enteric rods when the two groups were compared. Table 5 summarizes the association between CAL and the presence of enteric rods in the chronic periodontitis group. While none of the healthy participants displayed attachment loss, the chronic periodontitis group was divided into two subgroups by CAL: 0-1 mm and more than 1 mm. It can be seen that no association exists between the prevalence of enteric rods and CAL. DISCUSSION Enteric rods, normal inhabitants of the human intestine, have been detected in the subgingival biofilm with varied prevalence. [16,17,23] Most of the data regarding the prevalence were contributed only by industrialized nations and the prevalence rate in those studies varied from 0.7% to 13.5%. [16,17,24,25] Unfortunately, data from our population are lacking and hence, as a first step, we tried to find the oral prevalence of enteric rods in our population. Our study found a prevalence of 77% among the participants. This was considerably more than the prevalence rate found in the other studies. [1,15,23] The differences seen can be attributed to the level of personal and oral hygiene practice followed and its effectiveness, oral health care access, and microbial composition in our part of the country. The results clearly show the risk to which each individual is exposed and indirectly underscore the importance of maintaining personal hygiene and of an adequate disinfection protocol in dental practice to prevent cross infections. Periodontitis is of multibacterial origin, and almost 12 putative periodontal pathogens have been identified. [26] Recently, due to the better understanding of periodontal disease pathogenesis, many other nonoral resident organisms have been studied to know their possible role, if any, in the etiology of periodontal disease. [2,3,27,28] Nonoral microbes are opportunistic pathogens residing in other body systems that occasionally colonize the subgingival flora. They can reside transiently in the dental biofilm [4] or, under a favorable environment, live in a synergistic relationship with other periodontal pathogens. [2,3] A synergistic relationship was demonstrated between P. aeruginosa and Aggregatibacter actinomycetemcomitans in a study by Ardila et al. [3] The presence of P. aeruginosa was also strongly correlated with probing depths and ALs of more than 5 mm in their study. [3] The same study also showed that pockets harboring Gram-negative enteric rods correlated positively with the presence of A. actinomycetemcomitans, Porphyromonas gingivalis, and Prevotella intermedia. Enteric rods are characterized by high pathogenic potential as they elaborate various enzymes which can degrade the basement membrane, [29] inactivate complement components, [14] produce extracellular leukotoxins, [12] and suppress lymphocyte proliferation.
[13] In addition, they are also highly tissue invasive. [11] They have also been shown to persist after periodontal debridement [30] and have been also implicated as a key pathogen in cases of refractory periodontitis. [31] All these findings favor the hypothesis that enteric rods might be involved in the pathogenesis of periodontal disease. These study results show that the periodontal pockets are populated with enteric rods, irrespective of periodontal health, in our study population. When the organisms were stratified according to the periodontal status in our study, patients with periodontitis tend to harbor more number of organisms than the healthy controls though the results were not statistically significant. This can be attributed to the technique employed to identify the prevalence of bacteria or less sample size in our study. It is to be remembered here, these rods can be temporary residents, as postulated by Martínez-Pabón et al. [4] or might have a significant role in the periodontal disease pathogenesis. [2,3,16] Similarly, no statistically significant differences are seen when the clinical parameters such as probing depth or CAL is correlated with the presence of enteric rods. Since this is a preliminary report, further studies will be done in the future with a larger sample size. The most frequently identified organism was Escherichia coli followed by P. aeruginosa in both the groups. Although statistically significant differences were not seen when the number of samples harboring these organisms was compared between the groups, the number of genera isolated from periodontitis patients were more than those isolated from the controls. The differences between the two groups would have been much appreciated if we would have subjected our samples to quantitative microbial identification techniques. The clinical implications of these study results can be better understood only if the pathogenic potential of these organisms is identified. It is an established fact that the enteric rods isolated from other parts of our body possess a multitude of virulence factors and the same trend can also be seen with the enteric rods isolated from the oral cavity. A study by Goncalves et al. [28] has shown that the enteric rods isolated from the subgingival plaque samples of periodontitis patients harbored multidrug-resistant and hydrolytic enzyme-producing strains which can get involved in tissue destruction and disease progression also attests this fact. The effect of these potential virulence factors produced by organisms such as A. actinomycetemcomitans can be accentuated [32] when a favorable environment exists; the presence of synergistically working bacteria and a microenvironment favoring the growth of these organisms. Furthermore, the synergistic interactions between the proven periodontal pathogens and these enteric rods are not clear. Hence, future studies studying the presence of periodontal pathogens coexisting with these enteric rods might give a clear picture regarding the role of these organisms in the periodontal disease pathogenesis. Furthermore, the difference in the virulence potential of the strains isolated from the healthy and periodontitis patients must be identified. Within the limitations of this study, it can be concluded that the carriage rate of enteric rods is high in our population and further studies need to be done to ascertain their role in the periodontal disease pathogenesis. 
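As a numerical aside, the headline carriage-rate comparison (71.4% of 42 healthy participants versus 83.3% of 42 periodontitis patients, i.e., 30 versus 35 positive subjects) can be rechecked with a standard 2 x 2 chi-square computation, as sketched below with SciPy. Whether Yates' continuity correction was used in the original analysis is not stated, so both variants are shown.

```python
from scipy.stats import chi2_contingency

# Rows: healthy controls, chronic periodontitis; columns: enteric rods present, absent
table = [[30, 12],
         [35, 7]]

for correction in (False, True):
    chi2, p, dof, expected = chi2_contingency(table, correction=correction)
    label = "with Yates correction" if correction else "without correction"
    print(f"{label}: chi2 = {chi2:.2f}, p = {p:.3f}")

# Both variants give p > 0.05, consistent with the reported lack of a
# statistically significant difference in carriage rate between the groups.
```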
CONCLUSION The carriage of enteric rods is considerably high with a prevalence of 77% in our population highlighting the importance of maintaining personal and oral hygiene. However, their role in the periodontal disease pathogenesis still remains unclear. Financial support and sponsorship This study was partly funded by Indian Council of Medical Research under ICME-STS 14 project. Conflicts of interest There are no conflicts of interest.
2018-04-03T01:56:40.584Z
2017-05-01T00:00:00.000
{ "year": 2017, "sha1": "0ec6ca7ca953974b38692a8190eb745ed0467e74", "oa_license": "CCBYNCSA", "oa_url": "https://doi.org/10.4103/jisp.jisp_204_17", "oa_status": "GOLD", "pdf_src": "ScienceParseMerged", "pdf_hash": "0ec6ca7ca953974b38692a8190eb745ed0467e74", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
78092771
pes2o/s2orc
v3-fos-license
A comprehensive home-care program for health promotion of mothers with preeclampsia: protocol for a mixed method study Background Preeclampsia is one of the most common complications of pregnancy and the third leading cause of maternal death. This group of mothers should therefore be supported, trained and provided with efficient health care services. Home care is one strategy for reducing the complications of pregnancy. In Iran, care for high-risk pregnancies is provided in health care centers, hospitals and clinics by midwives and obstetricians. In this mixed method study, a qualitative approach will first be used to identify the health needs of mothers with preeclampsia and home-care strategies for them. The qualitative results will then be merged with a literature review and expert opinions to develop a comprehensive home-care program that fits the needs of these mothers in Iran. Methods This is an exploratory qualitative-quantitative mixed method study consisting of three sequential phases. In the first phase, a qualitative study, women with preeclampsia, obstetricians, midwives and maternal health policy makers will be selected purposefully, and the health care needs of mothers with preeclampsia and home-care strategies for them will be determined. Sampling continues until data saturation. In the second phase, an expert panel will be formed to prioritize the home-care needs and strategies extracted from the qualitative study and the review of the literature; a primary home-care program will then be designed. In the third phase, the Delphi method will be used with a minimum of 10-15 experts, including obstetricians, midwives and reproductive health professionals, to validate the home-care program through questionnaires in three rounds, after which the final program will be developed. Discussion This mixed method study is expected to yield a home-care program for mothers with preeclampsia that improves their health status and wellbeing while reducing additional health care costs by preventing excessive admissions and interventions. It also aims to ensure proper and timely follow-up of high-risk pregnant women. The study might help improve the quality of health services and promote health equity. Conclusion A home-care program for maternal health care, especially for high-risk pregnancy, will be developed with consideration of Iran's socio-cultural context. Plain English summary Home care for pregnant women is a strategy for delivering services to improve the health, wellbeing and education of mothers with complicated pregnancies. Because hypertensive disorders are the second leading cause of mortality and morbidity during pregnancy, home care for these vulnerable mothers has been recommended to promote health and improve pregnancy outcomes. This exploratory mixed method (qualitative-quantitative) study will be conducted in three sequential phases. In the first phase, using a qualitative approach, the researcher explores needs and strategies related to home care of women with preeclampsia. In the second phase, the researcher designs a primary home-care program by combining the prioritization results of an expert panel with a review of the literature. In the third phase, the home-care program is validated using the Delphi method. After applying expert opinions, a comprehensive home-care program for women with preeclampsia will be developed, intended to improve their health status and wellbeing while reducing additional health care costs through preventing excessive admissions and interventions.
Moreover, it aims to follow up high-risk pregnant women properly and in a timely manner. This study might be helpful in improving the quality of health services and promoting health equity. Background Maternal health has always been one of the major health concerns of different communities. Reports of the World Health Organization indicate that every day approximately 830 women die from preventable causes related to pregnancy [8]. High-risk pregnancies are one of the main contributors to maternal mortality and morbidity [14]. Pregnancy-induced hypertensive disorders are considered to be among the most common adverse outcomes of pregnancy all over the world. Annually, nearly 50,000 women worldwide die due to this health issue, and a similar number experience other serious complications of these disorders, including stroke, renal failure and others [4]. It is also noteworthy that high blood pressure is an underlying cause of iatrogenic preterm birth [20]. In the past, preeclampsia was managed by pregnancy termination, but current data indicate that prenatal outcomes improve with expectant management. Moreover, the mean gestational age increases with expectant management and becomes closer to term pregnancy [22]. Although pregnant women with hypertensive disorders frequently experience hospitalization [9], many clinicians believe that further hospitalization is not warranted if hypertension abates within a few days. Therefore, many women with mild to moderate hypertension can receive continuing care and be managed at home [4]. In the early stages, regular monitoring along with precise, high-quality interventions at home can reduce the complications of these disorders, provide an opportunity for counseling and training, and consequently improve pregnancy outcomes [17]. Furthermore, providing some parts of prenatal care at home reduces the stressors associated with these pregnancies and improves mental health in mothers with high-risk pregnancy [5]. Given the many benefits of home-care programs, this strategy is being implemented in the health delivery systems of many developed countries to protect women with high-risk pregnancies. Providing health care in the natural environment of the home allows the woman to feel more in control of her life situation while simultaneously receiving safe and supervised health care [6]. On the other hand, with the training provided at home, pregnant women learn behavior changes that decrease the risk of preterm delivery and modify their health lifestyle [16]; it has also been observed that assessment of mothers at home minimizes the number of days of hospitalization [1]. The provision of this care can reduce premature birth and the hospitalization costs of low birth weight neonates resulting from high-risk pregnancy [11,12]. Forcada reported that mothers who received home care in addition to routine prenatal care during pregnancy had fewer pregnancy complications, such as spontaneous premature rupture of membranes, preterm birth and hypertension, than patients treated only in the hospital [7]. A home-care program allows health care providers to take care of a person in an environment where she feels comfortable and to meet her family members. [23] Moreover, maternal and fetal assessment, coordination of problem cases with health centers, education of the mother about high-risk situations and follow-up of the services she requires can all be achieved.
[13] This also makes it easier for families to take part in educational programs and to develop a better attitude toward the factors affecting maternal and child health [3]. By implementing a home-care program, tangible psychosocial support, improved medical communication and greater use of social services become conceivable [2]. In Iran, maternity care programs for pregnant mothers with hypertension focus more on treatment and are provided in health centers and specialized hospitals. It seems that the continuity of these services is in some cases disorganized, and pregnant women return to hospitals and health care centers in severe condition [21]. Therefore, in order to reduce maternal mortality and complications in mothers with preeclampsia, the promotion of quality of care should be emphasized more strongly. Since home health care for pregnant mothers with preeclampsia is not currently available in the maternal health care system of Iran, this sequential mixed method study has been designed to develop an efficient home-care program based on the needs and socio-cultural context of the country. Objectives The objectives of each phase are as follows: Objectives of the first phase: Qualitative study: Methods/design This study is an exploratory qualitative-quantitative mixed method study that consists of three sequential phases. In the first phase, data collection will be carried out using a qualitative approach, in which the researcher will explore the needs and home-care strategies for health promotion of women with preeclampsia. At this stage, women with preeclampsia, obstetricians, midwives and maternal health policy makers are considered as participants in order to understand and extract the mothers' health care needs and the strategies for receiving home-care services. In the second phase, the needs and strategies of the home-care program for mothers with preeclampsia will be prioritized, and then, by combining the results extracted from the qualitative data and the review of the literature, a primary home-care program will be designed. In the third phase, using the Delphi approach, the comments of at least 10-15 experts, including obstetricians, midwives and reproductive health professionals, will be collected by questionnaires in three rounds to validate this program; then, according to expert opinions, a comprehensive home-care program will be developed for mothers with preeclampsia. First phase: Qualitative study The first phase of this study is designed to answer the question "What are the needs and strategies for home care to promote the health of women with preeclampsia?" This phase will be carried out using a qualitative content analysis method. Participants in the first phase (qualitative study) In the qualitative part of the present study, the research population will comprise women with preeclampsia referred to hospitals and health centers, obstetricians, reproductive health professionals, maternal health policy makers, and health providers who have experience in providing health care to women with preeclampsia. These participants will be selected with a purposeful sampling method based on maximum variation (age, education, social status, job, economic conditions and gestational age). Inclusion criteria for participants: a) Women with preeclampsia will be included in the study according to the following criteria: 1. Informed consent to participate in the research; 2. Ability to communicate and conduct interviews; 3. Iranian citizenship and the ability to understand and speak Persian; 4.
No history of a known psychiatric disorder; 5. Pregnant mothers with preeclampsia or a history of preeclampsia in a previous pregnancy. b) Health care providers and specialists will enter the study according to the following criteria: 1. Willingness to participate in the study with informed consent; 2. At least 12 months of experience in providing health care to women with a history of preeclampsia. Research environment Interviews will be conducted in coordination with the participants and according to their views, at a time and place designated by them, wherever they feel comfortable (hospital, health centers, workplaces, university, home, etc.) or in other places preferred by the participants. Data collection process in the first phase (qualitative study) After obtaining permission from Isfahan University of Medical Sciences, the researchers will select the participants by referring to hospitals and health centers. The researcher will make the necessary coordination, and appropriate participants will be selected with purposive sampling. After introductions and an explanation of the purpose of the study and the research method, eligible participants will be given an appointment in a private and comfortable environment. In the qualitative phase, data will be collected through individual, in-depth, semi-structured interviews along with field notes. After explaining the objectives and methodology of the study, the researcher will obtain written consent for participation in the research, for the interviews, and for recording them. If a person does not agree to recording, notes will be taken instead. Interviews with pregnant mothers begin with some general questions: "Tell us about your health problems during pregnancy", "What kind of services do you receive?" and "In your opinion, how could you receive these services at home?" The questions asked of obstetricians and health providers include: "What are your concerns when you send a mother with mild preeclampsia home?", "How are health services currently delivered to mothers with preeclampsia?", "From your point of view, how can these women receive these services at home?" and "What kind of problems will caregivers encounter, and what are your suggestions?" The questions asked of reproductive health professionals and maternal health policy makers include: "What challenges do you encounter with mothers with preeclampsia?" and "According to your experience, is it possible to design home-care programs for them?" At the end of each interview, the interviewer will listen to the audio file carefully several times, after which the narrative will be transcribed immediately, and data analysis will be done simultaneously with data collection. Data collection continues until data saturation, which means no new code or data is extracted. Data analysis of the first phase (qualitative study) In the present study, the conventional content analysis method will be used for data analysis [18]. After transcribing each interview, the text will be read line by line, and its meaning units will be identified. Then, the important sentences and phrases will be underlined, and the main ideas derived from them will be labeled as codes. After extracting the primary codes, the data will be reduced, and eventually subcategories, categories and main categories, respectively, will emerge from these codes.
Rigor and trustworthiness of qualitative data To ensure the accuracy of the study and the reliability of the results, four criteria are suggested: credibility, dependability, confirmability and transferability. In order to increase the credibility of the study, participants will be selected with maximum variation, sufficient time will be assigned for data collection, and multiple data collection methods, such as interviewing and taking field notes, will be integrated. Review by the participants will be used to verify the accuracy of the extracted data and codes or to modify them. For confirmability of the findings, some examples of code extraction and their corresponding interview narratives will be reviewed by an external supervisor in order to check the accuracy of the researcher's interpretation and to find contradictory cases. To increase transferability, the study findings will be presented to people similar to the participants in order to compare the results of this study with their own experiences. The researcher will also document the whole procedure, including recording, transcription, code extraction and categorization; in order to verify the coding procedure, some research colleagues and faculty members who are acquainted with qualitative analysis will be asked to review it. Second phase: Quantitative phase In the second phase of the study, after completing the qualitative study, the researcher will search the literature addressing the needs of mothers with preeclampsia and home-care strategies. The review of literature in this research includes searching library resources (reference books and theses) and electronic resources in order to access more information about the needs of mothers with preeclampsia and home-care strategies for them. At this stage, all studies with quantitative, qualitative or mixed method designs published in the last 10 years (2008 to 2018) in English or Persian will be examined using separate or combined keywords, such as home-care programs, preeclampsia and clinical guidelines. The researcher will search databases such as Web of Science, Magiran, PubMed, ScienceDirect, Cochrane Library, Scopus, ProQuest, Ovid, SID, MEDLINE, Embase and Google Scholar. Afterwards, the qualitative study results and the review of articles will be prepared for discussion in the expert panel. Holding a panel of experts At this stage, the intention is to prioritize the strategies extracted from the qualitative study and the review of literature. Obstetricians, reproductive health professionals, health services managers and maternal health policy makers will form the panel of experts (these members will be selected by the research team based on their professional experience). In this step of the research, a decision matrix will be used to prioritize the needs and strategies extracted from the literature review and the qualitative study. In this matrix, a score between 1 and 3 will be assigned to each strategy for each criterion (cost, feasibility, time required), and the average rating will be determined. After calculating the mean score of the strategies, the priority strategies will be selected based on the experience and opinions of the team of experts, and a primary home-care program for mothers with preeclampsia will be prepared.
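As a purely illustrative sketch of the decision-matrix step described above, the calculation could be tabulated as follows; the strategy names, criterion labels and scores below are hypothetical placeholders rather than data from this protocol, and the resulting ranking would in any case be adjusted by the expert panel's judgement.

# Illustrative sketch of the decision-matrix prioritization (hypothetical data).
from statistics import mean

CRITERIA = ("cost", "feasibility", "time")  # each criterion scored 1 (worst) to 3 (best)

# Hypothetical panel scores for candidate home-care strategies.
panel_scores = {
    "home blood-pressure monitoring": {"cost": 3, "feasibility": 3, "time": 2},
    "telephone counselling":          {"cost": 3, "feasibility": 2, "time": 3},
    "weekly midwife home visits":     {"cost": 1, "feasibility": 2, "time": 2},
}

def prioritize(scores):
    """Rank strategies by their mean score across all criteria (highest first)."""
    return sorted(
        ((name, mean(vals[c] for c in CRITERIA)) for name, vals in scores.items()),
        key=lambda item: item[1],
        reverse=True,
    )

for name, avg in prioritize(panel_scores):
    print(f"{name}: mean score {avg:.2f}")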
Third phase: Developing and validating the comprehensive program The study population The target population for this stage is experts (obstetricians, reproductive health professionals, health care providers and maternal health policy makers). Research sample In the third phase, the draft program will be sent to at least 10-15 other experts in the first Delphi round, and the average score for each part will be determined; after the second and third Delphi rounds, the proposed program will be reviewed and evaluated at a meeting attended by the research team and the panel members. The program will be modified and reviewed at each stage based on the expert comments. Eventually, the modified and edited program will be approved and the final comprehensive home-care program will be developed. Discussion Since the main purposes of implementing and optimizing health care programs are the prevention of high-risk pregnancies and the reduction of maternal mortality and morbidity, designing interventions such as home care during pregnancy is important. By implementing this program, mothers can be continuously monitored and can receive the necessary health care at the earliest possible opportunity when faced with any specific problem; this model of health service delivery can also reduce the number of high-risk pregnancies, their subsequent complications, and the cesarean sections resulting from such pregnancies. Furthermore, it provides suitable opportunities for caregivers to carry out their training, interventional, evaluation and support programs during pregnancy [19]. At the same time, these programs are considered logical and efficient solutions for improving pregnancy outcomes [10]. Extending care to the home is an appropriate strategy that improves mother and fetus health through increased awareness of the mother and her family, social support, and the provision of accessible health care [15]. Thus, this study aims to extract the home-care needs of mothers with preeclampsia and present a comprehensive program consistent with the cultural and social requirements of the country. Since needs assessment is an indispensable component of program planning, the needs will be determined and prioritized, and consequently the data required for planning will be extracted [24]. In sum, the program presented in this study is expected to pave a clear path for designing home-care programs for mothers with preeclampsia. Conclusion Home care is a missing link in perinatal care in Iran that can play an important role in improving health status and reducing the morbidity and mortality of mothers and infants, especially in high-risk pregnancies. In order to ensure the design of services that are effective and consistent with the cultural context of the community, and to prevent the waste of resources, it is essential to conduct numerous studies prior to any planning.
2019-03-14T14:25:30.340Z
2019-03-14T00:00:00.000
{ "year": 2019, "sha1": "91cce617ae2b9ccdafb8f966ff524f942deb4c56", "oa_license": "CCBY", "oa_url": "https://reproductive-health-journal.biomedcentral.com/track/pdf/10.1186/s12978-019-0695-8", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bcc86dfee218e57dff2694935cda3669a1f53c01", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
229522282
pes2o/s2orc
v3-fos-license
The digital great leap forward – mapping China's 21st century attempt to create a new growth model The recent successes of the Chinese modernisation strategy are substantiated by an array of indicators showing an impressive improvement. Irrespective of China's current growth deceleration, these indicators suggest a highly effective implementation of an ambitious roadmap that can ultimately help China to catch up and achieve global technological leadership. Still, some scholars point to deep structural deficiencies, and maintain that these indicators – however impressive they are – merely scratch the surface, while much deeper change is required in order to maintain economic growth. Therefore, the purpose of this paper (finalized before the ongoing COVID-19 crisis) is to contribute to this burgeoning literature – documenting the outcome and analysing the implications of China's efforts to embrace a new growth model – and to analyse the chances of the Chinese digital great leap forward, that is, the radical transformation of its prior modernisation trajectory. Drawing on a systematic review of the literature, the author maps, presents and analyses existing indicators quantifying China's progress in shifting to this new development trajectory, also identifying the gaps in the conventional measurement approaches. According to the findings of this paper, there are several easy-to-measure indicators, often used in international comparisons, that indeed confirm the optimistic scenario of China's development prospects in the near future. On the other hand, some hard-to-quantify factors, such as the localization of knowledge and the spreading of innovation, also need to be considered. These latter show a closer association with countries' development level as well as their development potential. With regard to these latter particularities, China still has a long way to go. INTRODUCTION AND RESEARCH QUESTIONS Since the early 2000s, international organizations' medium- and long-term forecasts have projected a global economic slowdown. There is a debate among leading economists about the role of various factors in explaining such slowdowns, as well as about further growth prospects (Adler et al. 2017; Baily – Manyika 2013; Remes et al. 2018). Some economists (e.g., Eo – Morley 2018; Gordon 2014; Syverson 2017; Dufrénot – Ghouzlane 2018) mention secular stagnation and a sharp slowdown in economic and productivity growth. Potential reasons for this slowdown are economic policy shortcomings, investment stagnation, political uncertainties, the protracted effects of the 2008 global crisis, measurement problems and demographic causes, as well as the ongoing structural transformations. However, when taking a closer look at Figure 1, it seems clear that the current (and expected future) unfavourable development of global growth performance can also be explained by the rather moderate growth of the BRIICS countries (Guillemette – Turner 2018). Nevertheless, the contribution of this group of countries – China and India in particular – to world GDP is steadily increasing: according to the OECD's forecast, China's share will be around 27% in the mid-2030s (Figure 2; the forecast was finalized before the ongoing COVID-19 crisis). (Figure source: author's own compilation based on data collected from Guillemette – Turner 2018.) China maintained double-digit GDP growth for decades during the period of the Chinese economic miracle, which lasted from the announcement of the 1978 Reform and
Opening Policy to 2008, the year of the global economic and financial crisis. The 10.4% increase measured in 2010 fell to 6.7% by 2016, corresponding to a 3% deceleration rate (Tian 2019). The decline was unevenly distributed across sectors: growth in the primary sector decreased from 4.3 to 3.3%, in the secondary sector from 12.7 to 6.1%, and in the tertiary sector from 9.7 to 7.8% (Xu 2019). Economists (e.g. OECD 2019a, 2019b; Xu 2019) agree that among the primary demand components of growth, the decline in investment had the strongest effect, while consumption and exports also declined. Consequently, the Chinese government is no longer forecasting double-digit growth: growth of around 6% has been declared to become the new normal, while structural transformation (i.e. changes in the economic sectors' and industries' share of GDP) has been continuously shaping and modernising the Chinese economy over the past decade and a half. To secure long-term and sustainable growth, China has been strengthening the new drivers of economic growth and seeks to create the conditions for changing its modernisation trajectory by developing local innovation capacities, investing in education and R&D, systematically developing emerging industries and technologies, and accelerating digital transformation in particular (Figure 3). The effectiveness of the Chinese modernisation strategy is substantiated by an array of indicators showing an impressive improvement. Irrespective of China's current growth deceleration, these indicators suggest a highly effective implementation of an ambitious roadmap that can ultimately help China to catch up and achieve global technological leadership. By contrast, one can find deep structural deficiencies in the Chinese economy, implying that these indicators – however impressive they are – would merely scratch the surface, while much deeper change is required in order to maintain economic growth. Therefore, the purpose of this paper – finalized before the ongoing COVID-19 crisis – was to analyse the chances of the Chinese digital great leap forward, that is, the radical transformation of its prior modernisation trajectory. I examine whether the systematic development of emerging industries and technologies is suitable to accelerate China's economic growth and achieve the desired shift to the new modernisation trajectory. Drawing on a systematic review of the literature, I map, present and analyse the existing indicators quantifying China's progress in its modernisation endeavour, such as industrial and technological specialisation, the sophistication of exports, the reduction of technological dependence, innovation and digitalisation, as well as the international success of domestically owned innovative companies. I also analyse the digitisation-related aspects of the Chinese modernisation, including the "Made in China 2025" program as well as the role of the Chinese companies in digital transformation. MAPPING THE TRADITIONAL INDICATORS OF CHINA'S MODERNISATION EFFORTS Innovation is certainly one of the key drivers of economic growth (Fernandes et al. 2018; Xiong et al. 2020), especially for developing countries trying to avoid the middle-income trap (Eichengreen et al. 2013; Jayasooriya 2017; Glave – Wagner 2019). Zhuang et al. (2012) also concluded that continuous industrial development requires innovation, thus ensuring the transition from a low-cost to a high value-added economy, for which a favourable macroeconomic and market environment is just as important as incentives for innovation.
In Japan and South Korea, for instance, when they became high-income economies, productivity growth was primarily driven by innovation and new technologies, as structural transformation had already been carried out. The Chinese economic miracle of the 80s, 90s and early 2000s was driven mainly by structural transformation, the redistribution of labour and capital, i.e. their transfer from low-productivity to high-productivity sectors and/or from state-owned to privately owned companies. While structural transformation – transforming the economy from an export- and investment-driven to a domestic consumption-driven economy that is based more on the tertiary rather than the secondary sector – is still on the Chinese agenda to address economic challenges, innovation-driven development has become the new strategy to guide China's economic rise. A broader measure of technological development and innovation used in the literature is the growth of total factor productivity (TFP). Eichengreen et al. (2013) explained about 85% of the growth slowdowns in the countries they analysed by the decline in TFP growth rates, while the decline in labour and/or capital played only a relatively minor role. Jitsuchon (2012) and Bulman et al. (2014) also found that the countries that successfully avoided the middle-income trap consequently showed relatively high TFP growth. TFP-driven growth – rather than input-driven growth – could be one of the cornerstones of economic growth in the developing countries (Tho 2013). The TFP data calculated – and collected up to 2017 – by the University of Groningen support the above conclusions, as Chinese TFP has indeed been growing steadily in recent decades (Figure 4). (It should be noted, however, that the TFP database compiled by the University of Groningen is one of the most optimistic databases, especially when it comes to Chinese data, so this result is more of an assumption than a clear conclusion.) Chinese TFP growth exceeded that of its competitors till 2010, when the pace of TFP growth started to slow down sharply in all countries. However, as far as the level of TFP is concerned, Figure 5 shows that despite this relatively rapid growth, catching up according to this metric has not started yet: China still lags far behind both its US and East Asian competitors, although the latter – i.e. Japan and South Korea – have not caught up either. However, lagging behind is not necessarily a disadvantage. Lin (2019), for example, emphasised the benefits of such backwardness by arguing that, due to the productivity gap, China's economic as well as productivity growth potential remains significant. China is not yet at the technological forefront; therefore, its productivity growth is generated by several other factors, not only by innovation efforts. According to Lin, human capital accumulation, robotization, and the adaptation of technologies developed elsewhere – even in the absence of independent innovations – will have a significant impact on China's productivity growth in the near future. Consequently, the targeted technology and industrial policy components of the modernisation shift, discussed in detail in the next chapter, have a distinctive role.
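For readers unfamiliar with how TFP growth figures such as those discussed above are typically derived, a minimal growth-accounting sketch follows. The Cobb-Douglas form, the capital share of 0.4 and the illustrative growth rates are assumptions made here purely for exposition; they are not the exact methodology or data of the Groningen (Penn World Table) database.

% Minimal growth-accounting sketch, assuming a Cobb--Douglas technology Y = A K^{\alpha} L^{1-\alpha}.
% TFP growth is then measured as the Solow residual:
\[
  \frac{\Delta A}{A} \;=\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L}.
\]
% Purely hypothetical numbers: output growth of 7%, capital growth of 9%,
% labour growth of 1% and a capital share of \alpha = 0.4 give
\[
  \frac{\Delta A}{A} \;=\; 0.07 \;-\; 0.4 \times 0.09 \;-\; 0.6 \times 0.01 \;=\; 0.028,
\]
% i.e. TFP growth of about 2.8% per year under these assumed inputs.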
R&D expenditures, which have a strong effect on TFP growth, are also significant elements in the literature mapping the results of economic development: this variable is mentioned both in connection with and independently of TFP (Figure 6). Although China's performance is still lagging behind, for example, Japan and South Korea, where R&D spending is traditionally high, China's R&D expenditure as a percentage of GDP is close to the levels of the European Union, Australia or Singapore. Although differences among government policies and domestic regulatory environments make it difficult to compare patent applications and subsidies across countries, it is worth noting that in 2016, according to the World Intellectual Property Organization (WIPO), China's State Intellectual Property Office (SIPO) processed 42.8% of the global patent applications. With more than 1.3 million registrations, China processed more than twice as many registrations as the United States, four times as many as Japan, and six times as many as South Korea. (It should be noted, however, that based on discussions with Chinese scholars, the Chinese system is far from perfect: one of the main reasons for the many patent registrations is the registration of foreign patents, and another is that doctoral students in technical disciplines can only obtain a PhD degree if they "invent" something – even if it is a useless or merely plagiarized "invention".) It is also worth examining the development of China's high-tech exports (Figure 7). According to Felipe et al. (2012), the more diversified a country's exports and the more capable it is of producing and exporting sophisticated products, the more likely it is that this country will be able to develop, compared to countries that are successful in a single sector. A positive example is Korea, which became a successful exporter in several sectors, unlike, for example, the Philippines or Malaysia, which have only been successful in certain segments of electronics. Eichengreen et al. (2013) concluded that the chances of a growth slowdown are lower among countries producing high-tech products, while Felipe et al. (2012) also found that the countries that managed to avoid the middle-income trap are characterised by a relatively more diversified and sophisticated export basket (Figure 8). In terms of trade structure, China remains competitive in the production of several low-cost and labour-intensive products, and consequently these products represent a significant part of Chinese exports. As a result – although China's major export products are electrical machinery and equipment – metals, furniture and various textile products continue to be substantial items in China's export basket, making the export basket relatively diversified. As far as the share of sophisticated products is concerned, according to the latest available data, nearly 24% of manufacturing exports already came from high-tech products. This ratio is well above the world average (16%) and also exceeds that of the developed countries (around 13-14% in the USA, the EU and Japan). Figure 7 should be treated with caution, since Chinese domestic value added in high-tech industries is relatively low and, in many cases, high-tech exports come from foreign-owned companies located in China (see Figure 9 for more details). A good example of this is a study by Dedrick et al. (2010) on selected high-tech export products assembled in China, such as Apple's iPod. Based on company balance sheet data, annual reports, industry databases, analytics and trade data, they calculated how the iPod value chain is structured, how much the built-in spare parts cost, where they are manufactured, what the trade margins are, the shipping cost per product, etc.
It was found that within the retail price of the iPod, the assembly costs, along with the related quality control costs, represent just over 1%, and on a value-added basis China's share is not more than 2-3%. Although the country's share might be somewhat more favourable now than it was in 2009, the local value added remains extremely low compared to the value of high-tech exports. CENTRALISED ECONOMIC PLANNING DELIVERING INNOVATION? For decades, China has been regarded as one of the main perpetrators of imitation and copying, either through disrespect for intellectual property rights or through industrial espionage. While its capacity for high-volume and low-cost production has never been questioned, the conclusion was that the country's competitiveness in innovation is minimal. However, as mentioned above, China has also made substantial progress in the field of innovation in recent years and aims at becoming a new digital superpower. It is also known that the Chinese government promotes the concept of "independent innovation". An important cornerstone of the process of becoming a digital superpower is "Made in China 2025" (MiC25), a 10-year, grandiose industrial development plan that aims to shift from labour-intensive to knowledge-intensive manufacturing. It is the first step in a larger, 3-phase development plan that intends to transform China from a global assembly plant into an autonomous – i.e. not requiring foreign supply chains and technology – manufacturing power that uses innovative production technologies. The program focuses on improving the quality of the products made in China and encourages the creation of own brands through the development of stable production capabilities and state-of-the-art technologies. The first phase covers the 10-year period between 2015 and 2025, in which the primary goal for China is to become one of the largest global manufacturing powers. The second phase is the period from 2026 to 2035, when China will catch up to the top manufacturing powers in terms of performance. By the end of the third phase, i.e. the period from 2036 to 2049, when the People's Republic of China celebrates its 100th anniversary, China would like to be the world's leading manufacturing power (Li 2018). The digital transition of MiC25 will be implemented in parallel, on three different – industrial, technology and regional – levels. The first level is the development of selected strategic industries. MiC25 highlights and prioritizes 10 industries: next-generation information technology, high-end computerised machinery and robots, aviation and space equipment, maritime engineering equipment and high-tech ships, advanced railway equipment, energy-saving and new energy vehicles, new materials, biomedicines and high-performance medical equipment, energy equipment, and agricultural equipment (Li 2018). The comprehensive industrial development plan is complemented and concretised by a number of industry- and technology-level development plans and scenarios (Figure 8). Technology programs (second level) are closely intertwined with the industry programs. The development of advanced industrial technologies (intelligent manufacturing) and artificial intelligence are among the flagship programmes in this regard, as these technologies are present in all industries, strongly influencing productivity (Szalavetz 2017, 2019).
With its focus on strategic sectors, MiC25 follows the development paths of Japan, South Korea, Singapore and Taiwan, as these East Asian countries have already successfully emerged from the trap of low-tech, labour-intensive manufacturing (assembly) with an industrial policy based on strategic sectors. One of the technology programs is the "Internet Plus" program, announced in 2015 on the personal initiative of Premier Li Keqiang. It is, in effect, a modern five-year plan to integrate cloud computing, big data and the Internet of Things with a variety of industries, from manufacturing and commerce to Internet banking, and from government to healthcare or even agriculture; the program aims to connect China's growing economy with the power of Internet services (Zhao 2019: 180). As regards the volume of developments, by the end of 2018, 530 industrial parks with intelligent manufacturing technology had been established (Zenglein – Holzmann 2019); most of them focus on processing big data, but new materials and cloud computing also play a prominent role (Woo 2017). In 2017, the State Council announced a program called the "Next Generation Artificial Intelligence Development Plan" (NGAIDP), highlighting that, in addition to international competition, national security challenges also require formulating artificial intelligence (AI) as a national strategy. According to the NGAIDP, the comprehensive development of AI – through theoretical modelling, technological innovations, software and hardware upgrades, etc. – will trigger a chain reaction that accelerates economic and social development. The NGAIDP sets out a three-step policy for AI development, focussing on industry, technology and applications. For industry, it focuses primarily on machine learning, smart chips and cloud-based storage; in the field of technology, the focus is on the Internet of Things, big data, AI and intelligent manufacturing; while applications include geographic information systems, smart grids, smart agriculture, information security and precision medicine. The NGAIDP also assigns an important role to AI in university and post-graduate education: it mentions AI disciplines and AI majors, and even calls for the establishment of colleges specialising in AI (CISTP 2018). The third level of the digital transition of MiC25 is the regional level. During the implementation of the program, the Chinese leadership also draws on previous experience: even when the Reform and Opening was announced in 1978, the coherence between free market mechanisms and public planning policy was tested in so-called special economic zones. Similarly, the Chinese leadership has designated cities and priority areas to test the efficiency of MiC25. In addition to the designated cities (the first of which was Ningbo, a port city in southeast China), and in line with the country-level strategy, major local governments created their own provincial and city-level technology development programs and roadmaps focussing on industries and AI (similar to the European 'smart specialisation strategy'), which also became an instrument for local leaderships to compete for central resources. For instance, Beijing is planning a 2 billion USD AI development park capable of accommodating 400 AI businesses, but Shanghai and Tianjin are also among the leading cities in terms of AI development. The former fishing village of Shenzhen is now referred to as China's Silicon Valley.
It is home to companies such as BYD, which specialises in IT, cars and renewable energy, Huawei and ZTE, the two telecommunications giants, as well as Tencent, an Internet services provider, and the Beijing Genomics Institute, which specialises in genome sequencing. Shenzhen and its surroundings, the Pearl River Delta, will be transformed into a megapolis under the name Greater Bay Area, which is becoming a key player in China's National Strategic Development Plan (Ketchum – Cheng 2018). Beyond that, Hangzhou (the capital of Zhejiang Province), the headquarters of the Alibaba Group, is also worth mentioning. The share of the service sector in Hangzhou was already above 60% in 2016, the ICT industries are also producing double-digit growth, and the implementation of the Internet Plus program mentioned above is also the most successful in this city, ahead of Beijing and Shanghai. Two kilometres from Hangzhou is a place known as "Dream Town," where start-up Internet businesses are provided with free office space and infrastructure for at least three years. The initiative was launched in 2015, and more than 7,000 companies worked on more than 700 projects here in the first two years (Zhao 2019: 182). (The NGAIDP also lists those AI industrial applications that need to be developed independently (intelligent connected vehicles, robotics, video surveillance systems, smart home products), the industrial equipment and logistics solutions integrating artificial intelligence required for their production, and the infrastructure that supports these developments (5G systems, cybersecurity solutions).) In line with the recent Chinese strategies, the program is not simply a set of one or more specific instructions, but rather a guideline capable of continuously adapting to emerging challenges while maintaining the main objectives, i.e. modernising national technological capabilities and creating opportunities for a technological leap. And, as demonstrated above, MiC25 is no longer just a plan, as the implementation of the program began five years ago, with billions of dollars in state funding. The amount of funding is, however, difficult to estimate accurately since, besides direct allocations, companies can access grants through several other channels. For example, they can receive support in the form of tax incentives or from various development funds (provided by either state, provincial or state-owned companies/banks), but also through direct state funding of pilot and demonstration programs, priority projects, or industrial and technology parks and special zones. The state provides indirect support through small and medium-sized enterprise financing programs and through central or provincial support for venture capital infusion (Zenglein – Holzmann 2019). Although conditions were not necessarily ideal – the slowdown in economic growth has been a constant challenge, and the Sino-US trade war did not create favourable conditions either – the implementation of the program gained momentum over the past few years. But, as Zenglein – Holzmann (2019) pointed out, the extent of the progress is difficult to measure due to the comprehensive and adaptive nature of MiC25. Progress in the 10 priority industries listed above is far from balanced. China has already made spectacular progress in areas such as the development of 5G networks, high-speed railways and ultra-high-voltage electricity transmission systems, and the robotization of production.
As for the latter, China not only outperforms the world average in many industries, but already uses at least as many robots as its competitors. China already uses more robots in the automotive industry than Japan or South Korea, while in the field of electronics it has more robots than the United States or Germany. In 2017, more than 40% of all newly installed robots went to China. The growth of robot production in China is even more striking. In 2012, 5,800 robots were produced; in 2017 the figure was already 131,000 units, nearly 30% of which were produced by local companies (Cheng et al. 2019). The number of companies engaged in the manufacturing of robots or in robotics research is also growing rapidly: according to the State Administration for Industry and Commerce (SAIC), in the early 2000s only a few hundred companies dealt with robots, but today this number is close to 7,000. This development is also reflected in the increase in the number of robotics patents: SIPO, which deals with the registration of Chinese intellectual property rights, issued only 54 innovation patents in the field of robotics in 2000, compared to 319 in 2010 and 1,1145 in 2015. The development of strategic industries based on domestic innovation is facilitated by the huge internal market: the Chinese government encourages the development of future technologies not only through financial support but also by artificially creating demand, including through favourable regulations or tax incentives (Shi-Kupfer – Ohlberg 2019). (According to Zenglein – Holzmann (2019), in 2018 more than 1,800 public industrial investment funds operated in China with a total of 3,000 billion RMB.) Still, China's strategy to use innovation and digitalisation for the sake of the country's development faces a number of internal and external challenges. Conflicting goals and conflicts of interest among decision-makers cause tensions in the domestic arena, while increased state control over private enterprises and the inefficient allocation of capital are also among the risk factors (Shi-Kupfer – Ohlberg 2019). As for external challenges, China will definitely remain dependent on foreign core technologies for some time, which can pose serious risks, as illustrated by the US attacks against ZTE and later Huawei as a consequence of the Sino-US trade war. ZTE almost went bankrupt after the US threatened to ban the sale of microchips to the company, while Huawei was left almost without an operating system due to the US sanctions imposed on the company. In addition, China is still dependent on the US for semiconductors. However, China has recognised the challenges of such dependencies and has been trying to take the appropriate steps to overcome them. China plans to halve imports of semiconductors in the coming years, eliminating them altogether in the long run, while Huawei has been developing its own operating system, called Harmony, for several years. As can be seen from the above, digitalisation has become a top priority for Chinese leaders in just a few years. China is aiming to become a world leader in many fields, such as AI, spending 150 billion USD on the industry by 2030, pioneered mainly by the private sector (Horowitz et al. 2018). It should be noted, however, that China's digital ambition goes beyond economic ambitions: effective governance and the control of companies and the population are also important aspects, for which the Chinese leadership has the right tools at its disposal.
Here I just briefly refer to the "Social Credit System" that monitors the behaviour of individuals and companies, providing information about their trustworthiness, compliance with laws and legal violations. In addition to its economic and social uses, AI can also be used for military purposes. WHO'S IN CHARGE: STATE-OWNED OR PRIVATE COMPANIES? Alibaba leader Jack Ma has set up the Luohan Academy, an organisation to promote research on the impact of the digital economy. The Academy's 2019 report "Digital Technology and Inclusive Growth" (Luohan 2019) well reflects China's position on digitisation. One of the main messages of the report is that digital technology can be an important driver of inclusive growth, and that the spread of digitalisation and the increase in its efficiency require a close partnership between the public and private sectors. As one of the characteristics of the Chinese economy is the presence of state-owned enterprises (SOEs), it is worth examining the role of these firms – as well as their private counterparts – in the use of R&D resources and in the digital transformation. When evaluating MiC25 and other Chinese digitalisation programs, most analysts emphasize that the implementation of these programs is not only driven by the will of the state, but that the private sector is also deeply involved. Moreover, development plans usually designate a specific role for innovative private firms and technology companies. The indicators of the Global Entrepreneurship and Development Index (GEDI) well illustrate the development of business performance. According to the 2012 GEDI index (based on 2010-2011 data), China was ranked 58th in the global ranking, between Venezuela and Algeria. Among the pillars summarising institutional and individual variables, it showed the largest lag in the technology sector (Acs – Szerb 2012). This pillar falls into the "Entrepreneurial abilities" (ABT) sub-index, one of the three sub-indices of the composite GEDI index: these include pillars such as the strength of competition (degree of market dominance), the quality of human resources, the importance of the technology sector, and so-called opportunity-driven business start-ups. The other two sub-indices are called "entrepreneurial attitudes" (ATT) and "entrepreneurial aspirations" (ASP) (Komlósi et al. 2014). The 2018 GEDI index (based on 2015-2016 data) already ranks China 43rd, placing it between Italy and Latvia. The 25 best-performing countries are listed for each sub-index: in 2018, China was already at the forefront of the ASP sub-index (24th place). ASP measures, among other things, the product and process innovation and internationalization capacity of enterprises as well as venture capital financing. The development of innovative private enterprises is also reflected by the increase in the number and importance of Chinese brands in the world market. Case-by-case examples can illustrate this increase, while indirect data can help to estimate their increasing share of total exports. This is confirmed by the 2019 Fortune Global 500, which lists 119 Chinese companies, including local own-brand industries such as computers, telecommunications equipment, industrial equipment, textiles and vehicles, as well as technology and pharmaceutical companies. Another indicator that is often used to support the claim that the achievements in digital transformation can be traced mainly to the Chinese private companies is the number of Chinese unicorns, i.e.
the number of fast-growing Chinese technology companies worth more than 1 billion USD. In May 2020, 472 unicorns were registered in the world (their number is growing month by month). Most of these companies are American, but the number of Chinese companies has grown at such a rapid rate that China has been second on the list for years (in May 2020, 226 American and 121 Chinese unicorns were on the list). Funding for these startups has also increased remarkably in recent years in China. In 2017, 48% of global equity financing for start-up AI businesses was already concentrated in China, while only 38% went to the US. This is a particularly significant increase compared to 2016, when China accounted for only 11% of global AI funding (Mitchell 2019). When assessing the role of the corporate sector, one could assume that the mechanism of economic coordination is purely top-down in the case of China. In reality, Chinese corporate decision-making is characterised by a mixture of top-down statism with a strong bottom-up element, resulting in the simultaneous presence of multiple business systems. The bottom-up elements are provided by local variations of central institutions – or even informal institutions – which often supersede formal institutions (Witt – Redding 2013), making the whole system more flexible, as successful institutional innovations diffuse across different localities and inform the national level about institutional changes (Xu 2011). Informal relations, that is, the so-called guanxi – the network of mutually beneficial relationships that can be used for personal and business purposes – also play a unique role in Chinese corporate as well as political relationships. (The full list of unicorns is provided by CBInsights: https://www.cbinsights.com/research-unicorn-companies. With 75 billion USD, the Chinese AI company Toutiao is right at the top of the list; Toutiao offers personalised press releases to social media users based on their areas of interest and browsing habits. Didi Chuxing Technology Co. comes second on the list with 56 billion USD; the company provides application-based transportation services (taxi service, car, motorcycle and bike sharing, etc.) globally. The total of 19 decacorns – technology companies worth more than 10 billion USD – includes four additional Chinese companies, such as, for example, Bitmain Technologies, which designs application-specific integrated circuit (ASIC) chips for bitcoin mining.) In addition to the often non-competitive and indebted SOEs, there are profit-oriented and competition-driven state-controlled enterprises (such as China Mobile) as well as private firms (Huawei, Lenovo or Geely) that have also been able to become successful in the Chinese market as well as globally. Moreover, such non-state national firms are considered 'national champions' in China (Naughton 2007; Ten Brink 2013). Apart from the IT sector, which is deeply integrated into global production networks, most industries in China are dominated by national (state-owned, state-controlled or Chinese private) capital and not by foreign multinationals. Chinese firms primarily use domestic funds and bank credit for their operations, partly because the major banks are also not privately owned but state-owned. As a result, global capital markets play a minor role in funding new investments (Nölke et al. 2015).
The Chinese leadership – and President Xi Jinping personally – identified China's digital strategy as a central concern: the National Informatization Strategy (2016-2020) calls on the Chinese Internet companies to support the creation of a "digital silk road" in foreign markets, while both MiC25 and the "Internet Plus" program were launched in 2015 to stimulate domestic industrial as well as digital innovation. The relationship between the public and private sectors is truly unique in the field of information and communication technologies (ICT). China's national IT champion companies – such as Baidu, Alibaba, Tencent, Jindong and NetEase – were able to thrive under laboratory conditions because the Chinese leadership not only blocked foreign competitors but also supported their international expansion and foreign access to capital through listings on overseas stock exchanges. (It is indeed difficult for foreign companies to survive in China. When eBay appeared in China in 2002, it quickly gained a 70% market share; however, five years later, its market share fell below 10%. In 2004, Amazon acquired a Chinese online bookseller; its market share was 15% in 2008, while now it is below 1%. In 2005, Microsoft's MSN service entered the Chinese market and gained a 53% market share among Chinese business users; it decided to leave the market in October 2014, when its share had fallen to less than 5% due to the competition created by Tencent QQ and WeChat. Uber appeared in China in 2014, spending billions to gain market share from the Chinese competitors; finally, in 2016, it sold its Chinese subsidiary to a local company. In 2015, Airbnb also arrived in China, but was never able to really compete with the Chinese competitors: in 2017, Airbnb offered 150,000 rooms for rent, while market leader Tujia.com offered 650,000 (Li et al. 2018).) In the case of the state-owned telecommunications giant ZTE, state paternalism is even more obvious due to direct government funding and preferential procurement. Following the security scandal that started in 2018, Huawei is often claimed to be in the same category, but the privately owned company denies that it owes its success to government subsidies. Nevertheless, it seems almost impossible to trace the influence of the party or the Chinese state, the state control mechanisms and the international relations that surround national champion companies – or innovative start-ups (Shi-Kupfer – Ohlberg 2019). What one can conclude with certainty, however, is that Chinese SOEs will continue to play an important role in developing strategic industries that are directly related either to MiC25 or to other digitization programmes. Industries that the Chinese government declares to be a "key industry" (e.g., shipbuilding, aviation, high-speed railways) or a "pillar industry" (e.g., electronics, mechanical engineering, automotive) will continue to be dominated by SOEs. According to MERICS's report (Zenglein – Holzmann 2019: 45), SOEs' share in the revenues of listed companies in these two categories has declined only slightly since 2013 (from 90 to 83% in "key industries" and from 53 to 45% in "pillar industries"). However, areas related to other MiC25 priorities – next-generation ICT, robotics, … MAPPING THE FEATURES OF ADVANCED, KNOWLEDGE-BASED ECONOMIES IN CHINA When trying to respond to the research question posed at the beginning of the paper – i.e.
is the systematic development of emerging industries and technologies suitable to accelerate China's economic growth and achieve the desired shift in its modernisation trajectory?), one shall consider some of the features of the more advanced knowledge-based economies (just as the Chinese government technocrats do when creating long-term strategic development plans). These features can be divided into two groups. The first group includes easy-to-measure indicators that can easily be used for international economic comparisons, such as specialisation in technology- or research-intensive or emerging industries, advanced infrastructure, high-tech production equipment, widespread use of state-of-the-art products and technologies, and a significant global market share in future-oriented industries. Chinese strategic programs, as well as the scientific publications demonstrating their effectiveness (e.g., Song et al. 2017; Zhao 2019), usually focus on this type of indicator. They refer to the evolution of the share of emerging industries (such as those listed in MiC25) in GDP and exports, China's share of the world market within each "future industry," the number of patents, R&D spending (and its GDP share), the number of Chinese companies among the world's largest (technology) companies, the number of technology-oriented start-ups, including unicorns, and some further traditional input and output indicators of innovation. 15 These indicators have been analysed in the previous pages. In terms of the progress in digital transformation, precisely quantifiable indicators (measuring the spread of technology) are dominant, such as mobile payments and e-commerce, industrial robots, self-driving technology or digital infrastructure. Although I have already expressed doubts about the usefulness of these indicators above, 16 it has to be reaffirmed that while these indicators do suggest that the Chinese economy is catching up with the more advanced knowledge-based economies in some respects, they alone are only superficial results. One can get a more realistic picture of the shift in the modernisation trajectory and of the characteristics of innovation-driven development when examining the progress that has been made on the specificities of the other group of features of knowledge-based economies. This other group includes difficult-to-quantify factors that are often described only in general terms; however, these characteristics are more closely related to the innovation-driven development trajectory of developed economies than the indicators of the former group. One of these features is that the knowledge required for development and the know-how required for operation are mostly created and continuously developed locally (although, of course, such economies also import knowledge developed elsewhere). This feature of advanced knowledge-based economies can be measured with a variety of indicators; one of these is the decline in the share of foreign value added.

15 These include, for example, the number of scientific publications, the number of researchers and research institutions and the volume of direct investment in research and development.

16 This bias is caused, among other things, by the fact that these indicators do not clearly reflect the effectiveness of the shift in modernisation trajectory, as there may be large differences in the efficiency of some input indicators, such as R&D expenditure, and in the importance and economic impact of some output indicators, such as patents (see, for example, Szalavetz 2011).
The Chinese data are presented in Figure 9, showing a significant decline in the share of foreign value added in Chinese exports over the past decade and a half. The progress made in the field of localisation of knowledge can also be measured by the increase in the number and importance of Chinese brands globally. Moreover, this is also reflected in the number of technology-oriented start-ups and unicorns as well as in the number and performance of the fast-growing regional start-up hubs that are constantly launching fast-growing start-ups specialising in new technology. 17 Another characteristic of the more advanced economies is that innovation is less concentrated in specific industries, regions and/or specific groups of companies, i.e. not only the large and foreign-owned companies possess it. The number and results of technology-oriented start-ups and other Chinese achievements in related areas (start-up hubs, venture capital infusion) and the overall improvement in business performance presented above reflect not only the localisation of knowledge, but also the spreading of innovation, i.e. the decline in its concentration. These phenomena point to deeper changes than those presented by the more superficial indicators, reflecting the first real results of the shift in the Chinese modernisation trajectory. It has to be added, however, that these are just the first visible signs of the transformation. For instance, Ma and his co-authors (2019) concluded, based on Chinese databases as well as data from the balance sheets of the 500 largest listed Chinese companies, that in a geographical or regional sense there is no dispersion yet: China's innovation performance is concentrated in the eastern coastal areas, and, as Lu and Cao (2019) pointed out, the development of the key city hubs does not spill over to the surrounding cities. Similarly, the decline in the concentration of industry innovation has just begun: the vast majority of innovations are still connected to high-tech manufacturing and knowledge-intensive services. A further feature of the more advanced, knowledge-based economies is the significant share of intangible capital in total investments. In order to quantify the intangible capital stock, a number of international statistical methodologies have been developed (e.g., Corrado et al. 2005; Ilmakunnas–Piekkola 2014), and econometric calculations have consistently shown a strong and growing role for intangible capital formation among the growth drivers in the developed countries (e.g., Fukao et al. 2009). Of course, China has also been involved in the country-level investigations (e.g., Hulten–Hao 2012; Yang et al. 2018; Li–Hou 2019). Their findings suggest that: (1) although China still lags behind the developed countries, all corporate components of intangible capital, i.e. corporate capital (including company-specific human capital, brand value, business model and market position), IT capital (such as software and other digitized information, and databases), and technology capital, have grown spectacularly over the past decade; (2) the growth of intangible capital creates a good basis for further productivity development in the long run, while these investments will not have a strong, direct positive effect on economic growth in the short run.
17 According to the Startup Genome 2019 report, there are three Silicon Valley-like start-up hubs in China (Beijing, Shanghai and Hong Kong), while Hangzhou and Shenzhen are also on their way to entering the top 30 start-up hubs in the world.

CONCLUSIONS

Based on the analysis of the literature and statistics detailed above, I found that China has achieved significant results with its economic development strategy based on innovation and digitalisation, named the digital great leap forward in this article. The (superficial) indicators of the first group reflecting China's economic development have improved remarkably within a short period of time. This is impressive by all means, even if most analysts have rightly pointed out that these results could have been achieved more efficiently, at a slower pace and/or with fewer resources (Hong et al. 2016; Howell 2017; Wei et al. 2017; the opposite view is expressed by Hu–Yongxu 2019). Nevertheless, it is not just superficial indicators that prove that the Chinese economy is moving closer to the more advanced knowledge-based economies: in the past few years, it has become increasingly apparent how the extensive efforts are beginning to reap results in intensive development. The localisation and diversification of knowledge has accelerated, the concentration of innovations has decreased, and the stock of intangible capital is accumulating and becoming an increasingly significant growth driver. However, these phenomena only show that China is moving forward on the path to catching up with the more advanced economies. The road ahead, despite the spectacular results so far, is still long, and closing the gap with the more advanced economies will not necessarily become easier, as these countries are also making serious innovation efforts. I can conclude that the development of domestic innovation capabilities (based on the large internal market, and accompanied by the development of human capital and business incentives) is indeed suitable for transforming the Chinese modernisation trajectory. However, it is not necessarily accelerating China's economic growth. So far, the Chinese economy's performance indicators are determined by traditional drivers (such as infrastructure investment or new export-oriented production capacities) rather than by the new growth drivers. In addition, the shift to a more resource-efficient, higher value-added production will result in significant structural losses, which could adversely affect growth rates. Is the digital great leap forward (the systematic development of emerging industries and technologies) capable of accelerating China's economic growth and achieving the desired shift in its modernisation trajectory? The answer, in short, is that this strategy and its systematic implementation are essential, but not necessarily sufficient, to achieve these goals. China could soon become a high-income economy based on the gross national income per capita levels defined by the World Bank. But for China to become an advanced, knowledge-based economy, further systematic industrial and technology policy efforts are needed, which may take many decades.
Effect of Selective Coatings on Solar Absorber for Parabolic Dish Collector

Introduction

A solar parabolic dish concentrator is a collector system designed to collect thermal energy by concentrating direct sunlight. In a solar concentrating system, the absorber plays an important role in collecting the heat energy. During the operation of solar collectors, the solar absorber is heated by the incident solar radiation and transfers the heat to the Heat Transfer Fluid (HTF). The different types of solar thermal collectors, together with their optical and thermodynamic performance, were reviewed in [1]. Modeling and experimental studies on solar collectors were carried out in [2-4]. Energy conservation measures for solar and wind energy are discussed in [5-7]. The effect of temperature distribution on a surface absorption receiver in a parabolic dish solar concentrator, with and without PCM, was studied in [8-10]. The improvement in the efficiency of solar cells is demonstrated in [11]. Black coatings for solar applications have been studied by several authors. The objective of the present work is to explore and investigate the performance of a parabolic dish collector (PDC) with a coated absorber. Different absorber coatings were investigated for the PDC, and the results of tests on regular sunny days are reported.

Methods and Materials

An experimental set-up is used to analyze the thermal performance of the solar PDC with coated absorbers. The experimental set-up consists of a 16 m² PDC, an absorber, a storage tank, a pump, a flow meter and a flow control valve. The schematic diagram of the experimental set-up is shown in Figure 1. The incident solar radiation on the PDC is concentrated onto the absorber. The water storage tank holds 110 liters and is filled entirely with soft water, and the connected pump is used for circulation. The flow is controlled by a control valve and measured with a rotameter. The receiver is 400 mm in diameter and 100 mm deep. The metal used is 5 mm thick. The MS (mild steel) receiver is a hollow circular vessel with 1-inch pipes coming out of its top and bottom. It has a provision for changing its covering receiver plate by removing the eight bolts that hold the plate to the hollow receiver. The inlet and outlet pipes coming out of the receiver are welded onto it.

The absorber has an inlet and an outlet port. K-type thermocouples are inserted into the entry and exit of the receiver. The entire system is insulated with glass wool of 30 mm thickness. All three plates are made from MS plate, each 5 mm thick. MS was chosen for the relative ease of availability of the metal, its cost effectiveness and, most importantly, the physical properties essential for its performance as a good absorber plate. MS has a melting point of 1450 °C, a density of 7.85 g/cm³ and an emissivity of 0.16; a low emissivity value leads to low emissive heat loss. The plates of the receiver are changed to compare the efficiency of the receiver, and plates with and without coating are tested in the experiment. Ni has high absorptivity and low emissivity. A low emissivity value means that only a small fraction of the incident radiation is re-emitted by the plate, so most of the heat is retained. The MS plate electroplated with Ni has an emissivity of 0.08 and an absorptivity of 0.92. The properties of Cr, though not as good as those of Ni, are still good enough for it to be tried as an alternative: the MS plate electroplated with Cr has an emissivity of 0.11 and an absorptivity of 0.88. The plates are placed on the receiver and fixed with bolts. The experiments were conducted as per ASHRAE standards [12].
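As a quick numerical aside (not from the paper), the reported optical properties can be compared through the selectivity ratio alpha/epsilon commonly used to rank solar-selective surfaces. In the minimal Python sketch below, the absorptivity of the bare MS plate is an assumed placeholder, since the text above reports only its emissivity (0.16):

# Coating optical properties: (absorptivity alpha, emissivity epsilon).
coatings = {
    "MS (bare)": (0.85, 0.16),  # alpha assumed for illustration; eps from the text
    "MS + Ni":   (0.92, 0.08),  # reported values
    "MS + Cr":   (0.88, 0.11),  # reported values
}

for name, (alpha, eps) in coatings.items():
    # A higher alpha/eps ratio indicates a better solar-selective surface.
    print(f"{name:10s} alpha={alpha:.2f} eps={eps:.2f} alpha/eps={alpha / eps:.1f}")

Run as written, the sketch ranks the Ni plating first (alpha/eps of 11.5), consistent with the choice of Ni as the preferred coating below.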
Performance Analysis

Heat losses occur from the receiver due to the temperature difference between the receiver and its surroundings. The total heat loss from the receiver is the sum of conductive, convective and radiative losses. The geometry of the set-up is given in Table 1. Treating the reflector and receiver as two co-axial discs of radii $r_1$ and $r_2$ separated by a distance $L$, the shape factor between the surfaces can be found from the standard co-axial disc relation

$$F_{12} = \frac{1}{2}\left[S - \sqrt{S^{2} - 4\left(\frac{R_{2}}{R_{1}}\right)^{2}}\right], \qquad R_{i} = \frac{r_{i}}{L}, \qquad S = 1 + \frac{1 + R_{2}^{2}}{R_{1}^{2}},$$

by substituting the values of $r_1$, $r_2$ and $L$; the remaining shape factors follow from the summation rule and the reciprocity relation $A_{1}F_{12} = A_{2}F_{21}$. For enclosures with specified surface temperatures, the radiosities $J_i$ of the surfaces (receiver, reflector and side walls) are obtained from

$$J_{i} = \varepsilon_{i}\sigma T_{i}^{4} + (1 - \varepsilon_{i})\sum_{j} F_{ij} J_{j}.$$

Convective heat loss from the receiver to the surroundings can be expressed as

$$Q_{conv} = U_{L} A_{r} (T_{av} - T_{amb}),$$

where $U_L$ is the heat loss coefficient in W/m²K, $A_r$ is the effective receiver area in square meters, $T_{av}$ is the average of $T_{in}$ and $T_{out}$, and $T_{amb}$ is the ambient temperature in K. The total radiative heat loss due to emission follows the general equation

$$Q_{rad} = \varepsilon \sigma A_{r} \left(T_{s}^{4} - T_{amb}^{4}\right).$$

Thermal efficiency is obtained by measuring the temperature rise of the working fluid across the receiver, together with the fluid properties, the mass flow rate and the direct incident solar radiation. The thermal energy efficiency of the collector, based on the heat gained by the working fluid, is expressed as

$$\eta = \frac{\dot{m} C_{p} (T_{out} - T_{in})}{A_{p} I_{b}},$$

where $\dot{m}$ is the mass flow rate of the HTF, $C_p$ is the specific heat of the HTF, $A_p$ is the absorber plate area, $I_b$ is the incident beam radiation, and $T_{out} - T_{in}$ is the temperature difference of the water between the absorber inlet and outlet.
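The loss and efficiency relations above are straightforward to evaluate numerically. The following Python sketch implements them directly; the operating numbers in the example call are illustrative assumptions, not the paper's measured data, and the example takes the 16 m² dish aperture as the collection area in $A_{p} I_{b}$ (the paper defines $A_p$ as the absorber plate area, so the reference-area choice changes the numerical value of the efficiency):

import math

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def convective_loss(u_l, a_r, t_av, t_amb):
    # Q_conv = U_L * A_r * (T_av - T_amb), temperatures in K
    return u_l * a_r * (t_av - t_amb)

def radiative_loss(eps, a_r, t_s, t_amb):
    # Q_rad = eps * sigma * A_r * (T_s^4 - T_amb^4)
    return eps * SIGMA * a_r * (t_s**4 - t_amb**4)

def thermal_efficiency(m_dot, c_p, t_in, t_out, a_p, i_b):
    # eta = m_dot * C_p * (T_out - T_in) / (A_p * I_b)
    return m_dot * c_p * (t_out - t_in) / (a_p * i_b)

# Illustrative numbers (assumptions, not measured data):
a_r = math.pi * 0.2**2  # 400 mm diameter receiver face, ~0.126 m^2
print(radiative_loss(0.08, a_r, 373.0, 303.0))   # Ni-plated receiver near 100 C
print(convective_loss(15.0, a_r, 373.0, 303.0))  # assumed U_L of 15 W/m^2K
print(thermal_efficiency(0.02, 4186.0, 303.0, 333.0, 16.0, 800.0))  # aperture as A_p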
Results and discussion

The experimental tests were carried out to investigate the performance of the PDC with differently coated absorbers. The performance of the dish solar concentrator can be characterized by an estimation of the stagnation temperature and by performance tests at constant solar energy input over the same period. The direct solar radiation, wind speed and ambient temperature were observed using a pyranometer, an anemometer and K-type thermocouples, respectively. The experiments were conducted during April 2016 in the Chennai climate. Only periods with average solar radiation of around 760-820 W/m² were considered in the heat loss and performance calculations. The stagnation test is often the preliminary test used to compare the characteristics of the different plates. The temperature of a collector system measured over a period of time under no-flow conditions is referred to as the stagnation temperature. It is considered when designing the solar collector and absorber, and when selecting the absorber material, the working fluid, hot water tanks and other auxiliary equipment. The useful heat gain of the absorber plate is evaluated with the equation $Q_u = S - Q_l$, where $S$ is the incident solar flux and $Q_l$ is the total heat loss; at the stagnation condition, $Q_u = 0$. The stagnation test results are shown in Figure 2. The HTF temperatures at the inlet and outlet of the absorber plate and the ambient temperature were recorded at five-minute intervals. The tests were restricted to collector operating temperatures within 100 °C. The variation of the temperature difference between the inlet and outlet HTF for the different coated absorber plates is shown in Figure 3. Figure 4 indicates that the absorber temperature obtained with the Ni-coated plate is higher than that of the other two plates, thereby providing a larger amount of energy for the heat exchanger to absorb. Thus, among the three plates, the MS plate with the Ni coating provides the highest collector performance. Figure 5 (variation in efficiency for the different coated absorber plates) shows that the efficiency of the collector system is highest when the MS absorber plate is electroplated with Ni. This means that the heat available for useful work is highest with the Ni-plated absorber, justifying the hypothesis and the preliminary tests which suggested that MS electroplated with Ni is the best choice for the PDC.

Conclusion

Experimental investigations of the solar PDC with and without coatings on the absorber plates were carried out to determine the efficiency of the receiver. The MS plate with the Ni coating has the highest stagnation temperature, about 531 °C, among the three plates, and the heat available for useful work is highest with the Ni-plated absorber. The radiative heat loss is highest for the uncoated MS absorber, at about 94.24 W; Ni and Cr exhibited low heat losses under similar radiation conditions. The convective heat loss was found to be lowest for the Cr-plated absorber, at about 148.6 W, which indicates that the heat loss due to convection is largely controlled by the electroplating of Cr on MS. The Ni coating on the MS absorber produces effective heat absorption.
Back to the Future? History, Material Culture and New Materialism

The study of history currently witnesses two markedly different material turns. Some historians are using material artefacts as alternatives to textual sources. Others draw on 'new materialism', a new tradition in thought that originated in the field of gender studies. Both groups are trying to move beyond the cultural turn, which has dominated the study of history since the 1980s. However, the first group merely extends the programme of the cultural turn into new domains without rejecting its methods or epistemological foundations. The latter group, on the other hand, provides a new cultural theory. This article demonstrates that the 'new' in new materialism is not so much an increased engagement with the material world, but rather a new conceptualization of developing theory and reading texts, which cuts through established dichotomies between matter and meaning or culture and the social. In doing so, a new materialist history can solve some of the problems associated with the cultural turn and the turn to material artefacts.

Introduction

The cultural turn revolutionized many disciplines in the humanities and the social sciences. 1 In the case of history, after heated debates between converts and adversaries in the 1980s, it dramatically changed the way historians study the past. As this article will show, the 'new' in new materialism is not so much an increased engagement with the material world, but rather a new conceptualization of developing theory and reading texts, which cuts through established dichotomies between nature and culture, matter and meaning. These insights can help historians overcome some of the problems associated with the cultural turn. They also reshape the cultural historian's ideas about objects and material culture.

The Material Turn in Cultural History

In a recent article, Harvey Green detects a 'material turn' in cultural history. 5 This turn, according to Green, consists of an increasing interest among historians in material culture, and draws from material culture studies, an interdisciplinary research programme rooted in archaeology, art history and anthropology. Although individual historians, especially medievalists and others closely tied to archaeology, have always been interested in material artefacts, coordinated efforts to establish a material culture research programme only started in the late 1970s. 6 While proponents of material culture studies organized international conferences and published in their own Journal of Material Culture, the majority of historians ignored material objects in favour of written sources. 'The scholarship nobody knows' only gained recognition when socially engaged historians started to search for traces of marginalized people whose voices had escaped the official historical records in the archives. 7 At first, Marxist historians with a keen eye for material conditions in the past tried to reconstruct the history of the lower classes. Feminist and postcolonial scholars then turned to the lives of women and colonial 'others'. 8 Peasants, workers, women and slaves, it turned out, had left all kinds of everyday objects. As a result, the work of material culture scholars, who specialized in the material culture of the everyday, suddenly became conventional. This development culminated in what Green called a 'material turn' in cultural history. What does this material turn look like?
Material culture studies focuses on stuff, whose value to historians is reflected in a classical definition by one of the founders of the field: 'objects made or modified by humans, consciously or unconsciously, directly or indirectly, reflect the belief patterns of individuals who made, commissioned, purchased, or used them, and by extension, the belief patterns of the larger society of which they are a part'. 9 In History and Material Culture (2009), Karen Harvey and contributors demonstrate the potential of this alternative source. 10 Case studies, ranging from furniture and clothes to buildings and landscapes, offer a passionate plea in favour of 'artefacts as evidence', a type of source that easily measures up to written material if one knows how to handle it. After all, objects, like texts, carry meaning. To understand them, Harvey suggests, historians need to 'read'. Take, for example, gardens. According to Marina Moskowitz, one of the contributors to Harvey's book, gardens or 'domestic landscapes', often modified by ordinary people over longer periods of time, are mirrors of political, economic and cultural ideas. 11 Following Harvey, Moskowitz shows how these landscapes, like other material artefacts, can be read as texts. By carefully monitoring human alterations, such as fences, pathways and buildings, domestic landscapes reveal the choices, and the underlying ideas and political agendas, of the people who designed and used them. At the end of the nineteenth century, for instance, planners in the United States divided freshly colonized landscapes into zones for different purposes. They reserved the best parcels for structures that supported family life, such as single-family houses, churches, and schools. Thus, as Moskowitz demonstrates, the dominant idea at the time of the nuclear family as the core unit of society is reflected in the landscape. Interestingly, the case studies in History and Material Culture only illustrate how material culture can be used by historians to reconstruct the mental world of people. The contributors draw heavily on interpretative tools associated with the new cultural history, applying its idea of 'reading culture as a text' to material culture. Objects as 'texts' provide information about humans; their materiality, however, remains undiscussed. Artefacts are cultural stuff, that is, passive transporters of human ideas. The way people handle them creates meaning. In other words: while humans are seen as agents who actively move in, and give meaning to, the passive world around them, objects do not carry any significance of their own. 'New materialism' provides an entirely different notion of matter.

The 'New' in New Materialism

Unlike the material turn in cultural history, new materialism does not continue the programme of the cultural turn. Rather, it offers an alternative to the epistemological and metaphysical foundations that informed the cultural turn and (post)modern thought in general. Thus, while the material turn in cultural history is characterized by an increased engagement with objects as 'new' historical sources, new materialism is a new way of developing theory. Paradoxically, for a movement that rejects the episteme of the cultural turn, this 'new metaphysics' is, like the analytical toolkit of the new cultural history, based on the metaphor of 'reading'. New materialist readings, however, result in surprising and challenging conceptualizations of matter and the agency of objects.
New materialism originated in the field of Gender Studies, where Rosi Braidotti coined the term in the early 1990s. 12 Drawing from the philosophy of Gilles Deleuze, Henri Bergson and Spinoza (thinkers who attempted to undo the dualisms that have constituted Western thought since Descartes), Braidotti and other feminist scholars like Donna Haraway, Karen Barad and Vicki Kirby started to critically question the cultural turn's one-sided focus on culture. The movement recently gained prominence with the publication of three programmatic companions that explicate its aims and methods. 13 The volumes make clear that new materialism is, first and foremost, a commentary on, or a critical rethinking of, the cultural turn. Their authors are keen to show, however, that new materialism does not simply aim to reject the work of a previous generation. 14 Rather, they want to draw it into conversation with earlier paradigms as well as with ideas from the natural sciences. In New Materialism: Interviews & Cartographies (2012), for instance, Rick Dolphijn and Iris van der Tuin explain that new materialism is 'transversal' rather than 'dialectic'. Previous traditions in thought, they argue, have always positioned themselves dialectically against their predecessors. In doing so, they created negative relations between terms by structuring theoretical approaches and paradigms as dual opposites (e.g., new cultural history versus social history, postmodernism versus modernism). As a result, postmodern theory, underlying the cultural turn, may claim to have deconstructed the dualisms of modern thought (i.e., culture-nature, male-female, mind-body), but in practice only reinforced dualist thinking:

New materialism is a cultural theory for the twenty-first century that attempts to show how postmodern cultural theory, even while claiming otherwise, has made use of a conceptualization of 'post-' that is dualistic. Postmodern cultural theory re-confirmed modern cultural theory, thus allowing transcendental and humanist traditions to haunt cultural theory after the Crisis of Reason. New materialist cultural theory shifts (post-)modern cultural theory, and provides an immanent answer to transcendental humanism. 15

In an effort to break through the hierarchical dialectics of (post)modern thinking, new materialism attempts to establish 'transversal cartographies', that is, affirmative relations between seemingly opposing theoretical traditions, which are 'structured by positivity rather than negativity'. 16 Its main tool in achieving this ambitious aim is a conceptualization of reading as 're-reading'. Thus, new materialists reread classical and marginal texts from different paradigms and (inter)disciplines through one another. In doing so, they look for 'sharing characteristics' and 'unexpected theorizations' between, for instance, the structuralism and Marxist materialism of the 1970s (a scholarly tradition that the cultural turn so forcefully rejected) and recent ideas from the field of Science and Technology Studies (STS), including Latour's Actor-Network Theory. 17 In the words of Dolphijn and Van der Tuin: 'New materialism says "yes, and" to all intellectual traditions, traversing them all, creating strings of thought that, in turn, create a remarkably powerful and fresh "rhythm" in academia today'. 18
Theoretical particle physicist and feminist theorist Karen Barad, for example, who is one of the most prominent new materialist scholars, brings insights and approaches from physics, including recent discoveries in quantum mechanics, and the cultural and social theories of Michel Foucault, Judith Butler and Bruno Latour into conversation with one another. 19 The result is what she calls a 'posthumanist performative account' that breaks through established dualisms and shows how humans and nonhumans, and matter and meaning, are co-constitutive. Central to Barad's argument is the notion that humans and culture are not outside of nature. Rather, humans are nature; they are 'of the world'. Nature and culture, according to Barad, are both performative. In other words, nature is not a passive stage on which humans perform; nature shapes culture as culture shapes nature. In a similar vein, Donna Haraway, whose early training as a biologist introduced her to the self-regulating power of the natural world, coined the term 'naturecultures', i.e., the idea that 'bodies and meanings coshape one another'. 20 Such ideas direct attention to the body as a biological entity, as living matter, next to the cultural turn's insight that bodies, as empty containers, only acquire meaning in discursive practices. 21 The transversal readings of Barad and Haraway bring 'nature' into cultural theory without giving preference to either one. Indeed, one of the central insights of new materialism is that nature and culture are two sides of the same coin, only taken apart by the academic world, whose internal dynamics have distributed labour to separate science and humanities departments.

How Matter Comes to Matter 22

From these new materialist rereadings, and their 'unexpected theorizations' about the co-constitution of nature and culture, emerge new notions of matter and the agency of objects that differ considerably from the passive objects in cultural history. New materialists talk about 'matter' in at least four different ways. Firstly, materiality is seen as a dynamic and self-organizing process. According to this notion, which signifies the new materialists' attempt to bring ideas from the natural sciences into the humanities, matter is a productive and agentive force. 'Matter is neither fixed nor given nor the mere end result of different processes. Matter is produced and productive, generated and generative. Matter is agentive, not fixed essence or property of things'. 23 Secondly, and closely related, is the notion that nonhuman agents co-shape social worlds. If materiality is agentive, then objects have a life of their own; they actively interact with, resist and co-shape other entities, including humans. Drawing on STS scholars like Bruno Latour, new materialists argue that objects are 'actants', that is, objects are part of networks of relations and play an active role in establishing, maintaining or dissolving these networks. 24 Thirdly, new materialists are engaged with 'material realism'. 25 The cultural constructivist and deconstructionist approaches of the cultural turn reduced the material world to discursive representations. New materialist scholars, on the other hand, want to engage with and theorize about non-discursive aspects of reality by taking lived experience, corporeal practice and biological substance into consideration. Finally, new materialists refer to matter in the sense that some things 'matter' because they are a cause of great concern.
This last notion indicates a turn to ethics and highlights the political programme of the movement. After the cultural turn's political correctness and moral impartiality, new materialists want to take a position with regard to debates about climate change and biotechnical engineering, among other issues. 26 In order to do so, they need to take the nonhuman world, traditionally the domain of the natural sciences, seriously. Consider Nancy Tuana's reading of Hurricane Katrina, one of the case studies in Material Feminisms. 27 On August 29, 2005, Katrina played havoc in New Orleans. At first sight, hurricanes are natural forces. However, according to Tuana, it is impossible to separate the natural from the social. Katrina, for instance, only became such an explosive natural force because of global warming, which is the result of complex interactions between chemical processes and human activities fuelled by cultural beliefs of consumerism and the social structures of the free-market economy. What is more, American politicians trivialized the dangers of climate change and refused to heighten New Orleans' levees. Katrina also interacts with other matters of concern such as poverty, racism and ignorance. And all these have material dimensions too. 'Grow up without proper nutrition', Tuana states, 'and physiological development will be affected. Grow up without educational resources, and cognitive development will be affected. Grow up living the effects of institutionalized racism, and trust in those institutions will be affected'. 28 By looking at Katrina, and the complex web of relations of which it is part, Tuana shows that the boundaries between natural and social phenomena are 'porous'. Human and nonhuman entities, she argues, interact and are dynamically related. In sum: new materialism, in an effort to traverse the dualisms of (post)modern thought, creates diffractive cartographies by reading different disciplines and paradigms through one another. These result in fresh theorizations about, and empirical engagements with, the material world and matters of concern.

Towards a New Materialist History

Is new materialism applicable to the study of the past? Is there a new materialist history? Notions of matter as a generative force and of nonhuman agency may seem foreign to many historians who, as we saw above, treat objects as cultural artefacts made and modified by humans. Some historians, however, have recently discovered Latour and Barad. It is in their work that the contours of a new materialist history are starting to emerge. 29 Rather than simply applying Latour to historical case studies, they bring his work into conversation with the existing historiographical tradition. These conversations (or transversal readings) help overcome some of the problems associated with the cultural turn. They show, for example, that culture and nature, and matter and meaning, are interrelated. In doing so, these transversal readings problematize and deepen the cultural historian's ideas about the role of objects and material culture in the past. Interestingly, among the historians who flirt with new materialism are some 'old' social historians who, during the turbulent times of the cultural turn, stayed true to Marxist-inspired materialisms. As I explained earlier, socially engaged Marxist historians were among the first in the profession to take material culture seriously as a source with which to reconstruct the history of the lower classes.
Transversal readings can result in exciting conversations between seemingly incompatible intellectual traditions and disciplines. Recently, archaeologists, anthropologists, geographers and STS scholars started such a conversation in The Oxford Handbook of Material Culture Studies (2010). 30 Unlike Harvey's History and Material Culture, which focuses on one intellectual tradition, The Oxford Handbook provides a 'dialogue' between cultural turn-inspired approaches in material culture studies and work from other disciplines on the agential power of nonhumans. In what they call a 'reactionary view', the contributors deliberately refuse to position themselves vis-à-vis previous generations of scholars. Rather, they claim to 'represent a series of crossroads rather than a new series of "turns"'. 31 In doing so, the work provides a transversal cartography and breaks through established dualisms, e.g., between humans and nonhumans. Drawing on Barad and other new materialists, the editors of The Oxford Handbook conclude that researcher and research object co-shape one another: 'The studies collected in this volume lead towards an appreciation not only of the effects of things, but also of things as the effects of material practices'. 32 In similar fashion, Material Powers (2010), edited by social historian Patrick Joyce and sociologist Tony Bennett, assembles a group of scholars whose empirical work is inspired by theories from different paradigms and disciplines. 33 By bringing the ideas about power and matter of, among others, Michel Foucault, Latour and Gilles Deleuze into the study of history, the book shows that culture, economy and the social are always intertwined and co-constituted in material-discursive networks of relations. Chris Otter's chapter on urban history provides a particularly good example of such a new materialist history. 34 Otter rereads old texts (Heidegger, Braudel) and new ones from disciplines such as STS and the environmental sciences and applies them to the history of the city. From these 'conversations' he concludes that 'old analytic binaries (natural-social, urban-nonurban) no longer have much analytical purpose'. And like Barad, he argues that matter is a 'dynamic … and interactive force'. 35 In an urban environment such a force can be traced by looking at the 'metabolism of the city', that is, the circulation of particular substances, such as water and meat. 'Analyzing the flows themselves brings scholarship closer to the material transformations which have really defined the past 200 years: the dramatic exploitation of resources, the inefficiency of their use, the development of synthetics, the widening inequality of access to resources, and the remarkable lack of concern for "externalities" like air quality'. 36 Substances that flow through a city, Otter argues, are at once material, political and environmental. The presence of clean drinking water, for example, is the result of human infrastructures and political decisions. Water, however, is part of an assemblage (Deleuze's term) that, next to humans, includes a 'chain of material agents'. Thus, although people try to control water supplies, non-humans can frustrate these efforts in unexpected ways. At the end of the nineteenth century, for instance, water supply systems in the United States relied on lead pipes that polluted the environment and poisoned people instead of making their lives more comfortable.
The same goes for meat, which, according to Otter, 'has a material history which is simultaneously technical, political, environmental and physiological'. 37 People domesticated animals for meat, but now these same animals produce methane, a substance that contributes to environmental problems like climate change.

Back to the Future

Examples like Otter's metabolism of the city demonstrate that new materialist approaches can lead to new insights into the past. However, to many historians new materialism is problematic for the simple reason that they are interested in people as cultural beings. Historians turned to objects in order to find traces of marginalized groups who had escaped the written record. They were not interested in the objects themselves. The idea of non-human agency, therefore, remained trivial. What is more, most historians lack the feminists' ethical imperative to engage with contemporary matters of concern in which materiality plays an important role. One could easily look at, say, the 1755 Lisbon earthquake from Tuana's interactionist point of view, but what most historians want to know is how the disaster functioned as a metaphor in philosophical treatises. There are no lives at stake anymore. Yet, on a theoretical level a new materialist history might help historians overcome some of the problems of the cultural turn. The great strength of new materialism is that it tries to incorporate fresh ideas from other disciplines, including the natural sciences, into the humanities, without rejecting the work of previous generations of humanities scholars. By focusing on flows and networks of relations that are at the same time cultural, social, political and natural, a new materialist history breaks through the cultural turn's hegemony of culture and language. In doing so, it does not deny the importance of language and discursive practices, and other key insights from the cultural turn. Rather, a new materialist history shows that meaning and matter are mutually productive. In addition, transversal cartographies can help re-appreciate the valuable work done by social historians in the 1970s, and by earlier generations, which new cultural historians rejected a bit too fast in the heat of the cultural turn. The study of history has a very rich past full of beautiful texts (classics and obscure ones) that deserve rereading. What unexpected future theorizations may emerge if we bring, say, Emmanuel Le Roy Ladurie's work on climate into conversation with the cultural turn and present developments in the natural sciences? 38 It is about time to go back to the future.

Notes

1 I would like to thank the anonymous reviewers for their helpful comments and suggestions to improve this article.
Lyophilization and stability of antibody-conjugated mesoporous silica nanoparticle with cationic polymer and PEG for siRNA delivery

Introduction Long-term stability of therapeutic candidates is necessary toward their clinical applications. For most nanoparticle systems formulated in aqueous solutions, lyophilization or freeze-drying is a common method to ensure long-term stability. While lyophilization of lipid, polymeric, or inorganic nanoparticles has been studied, little has been reported on the lyophilization and stability of hybrid nanoparticle systems consisting of polymers, inorganic particles, and antibody. Lyophilization of complex nanoparticle systems can be challenging with respect to preserving the physicochemical properties and biological activities of the materials. We recently reported an effective small-interfering RNA (siRNA) nanoparticle carrier consisting of 50-nm mesoporous silica nanoparticles decorated with a copolymer of polyethylenimine and polyethylene glycol, and antibody. Materials and methods Toward future personalized medicine, the nanoparticle carriers were lyophilized alone and loaded with siRNA upon reconstitution by a few minutes of simple mixing in phosphate-buffered saline. Herein, we optimize the lyophilization of the nanoparticles in terms of buffers, lyoprotectants, reconstitution, and the time and temperature of the freezing and drying steps, and monitor the physical and chemical properties (reconstitution, hydrodynamic size, charge, and siRNA loading) and biological activities (gene silencing, cancer cell killing) of the materials after storage at various temperatures and times. Results The material was best formulated in Tris-HCl buffer with 5% w/w trehalose. The freezing step was performed at −55°C for 3 h, followed by a primary drying step at −40°C (100 µbar) for 24 h and a secondary drying step at 20°C (20 µbar) for 12 h. The lyophilized material can be stored stably for 2 months at 4°C and at least 6 months at −20°C. Conclusion We successfully developed a lyophilization process that should be applicable to other similar nanoparticle systems consisting of inorganic nanoparticle cores modified with cationic polymers, PEG, and antibodies.

Introduction

In the last decade, nanoparticles have been widely developed as carriers for the delivery of antibodies, oligonucleotides, and drugs. Nanoparticles protect cargos against enzymatic degradation, prevent rapid clearance of small compounds by the kidneys, and prolong the blood circulation half-life of the cargos. 1 Typically, nanoparticles are formulated in solution as colloidal systems, which cannot be stored long term due to physical instability (aggregation) and chemical instability. 2 To facilitate long-term storage, all traces of water must be removed by a process of freeze-drying or lyophilization. 3,4 However, lyophilization of nanoparticles is more challenging than that of traditional chemical compounds since the process may affect both the physical (eg, size) and chemical properties of the nanoparticles. This is especially true when the nanoparticle consists of many components. Therefore, optimization of the lyophilization process and stability assessment are clearly needed. We have recently developed cationic polymer-modified mesoporous silica nanoparticles (MSNPs) as a promising small-interfering RNA (siRNA) carrier for breast cancer treatment. 5
As shown in Figure 1, the MSNP of 50 nm in size was surface modified with cross-linked polyethylenimine (PEI) and polyethylene glycol (PEG). The cross-linked PEI allows the loading of negatively charged siRNA and promotes endosomal escape via the proton sponge effect, while PEG provides a steric effect that protects siRNA from enzymatic degradation and the nanoparticles from aggregation and phagocytosis. The PEI was cross-linked to increase the buffering capacity and thereby enhance the endosomal escape of siRNA according to the proton sponge principle. The modified nanoparticles are then conjugated with antibodies for targeting the cancer cells of interest. Specifically, trastuzumab (Herceptin, Genentech), a humanized monoclonal HER2 antibody, was used as a homing agent for HER2-positive cancer. The siRNA nanoconstruct has been shown to overcome drug resistance in two HER2-positive cancer mouse models. 5,6 Toward its clinical evaluation, material stability over a long period of time is needed. Lyophilization of polymeric nanoparticle systems, such as poly(lactic-co-glycolic acid), 7,8 polycaprolactone, 9 or PEI, 10 and of lipid nanoparticle systems has been explored. 11-14 Likewise, lyophilization of inorganic nanoparticle systems, such as silica 15 or gold 16 nanoparticles, has also been attempted. However, little has been reported on the lyophilization and stability of hybrid nanoparticle systems consisting of polymers, inorganic particles, and antibodies. These hybrid systems have been studied intensively for siRNA and drug delivery in the past several years due to the recognition that lipid or polymeric systems alone have not reached the desired clinical outcomes, and hybrid systems, or at least targeting agents (eg, antibody), may be needed. Amine-modified silica nanoparticles were successfully lyophilized in the presence of trehalose (TL) as a lyoprotectant; 15 however, the long lyophilization time of over 6 days is not highly economical. Our aim was to achieve stable long-term storage of the antibody (trastuzumab)-conjugated, PEG-PEI-modified MSNP for siRNA delivery. Ideally, the material should be kept stable for over 6 months without requiring an expensive −80°C freezer. It must be easily reconstituted and must maintain the properties and performance of the material for siRNA delivery. Trastuzumab has been lyophilized with TL to sustain protein structure and activity during long-term storage, 17 but its lyophilization when conjugated on nanoparticles has not been reported. We adopt a two-vial strategy for our material (nanoparticles and siRNA are lyophilized separately) for its future use in personalized medicine, in which different siRNAs can be loaded on the nanoparticle carriers once the oncogenes of a tumor are identified. This strategy exploits the rapid and simple loading of siRNA on our nanoparticle, which will be demonstrated.

Nanoparticle synthesis

Nanoparticles were synthesized following our published protocol. 5 Briefly, CTAC surfactant (0.15 M) was mixed with TEA (7 mL) in 2.5 L of water at 95°C. TEOS (60 mL) was then slowly added to the mixture under vigorous stirring for 1 h. The nanoparticles were then recovered by centrifugation, washed twice with ethanol, and dried overnight. The dried nanoparticles were resuspended and refluxed in acidic methanol (0.6 M HCl), recovered, washed with ethanol, and dried in a desiccator to obtain MSNPs. The dry MSNPs, 50 nm in size, were then mixed with branched PEI in absolute ethanol at a mass ratio of 4:1 of MSNP to PEI.
The mixture was shaken continuously for 3 h at room temperature, centrifuged, and resuspended in an ethanol solution containing PEI and 0.2 mg DSP as a cross-linker. The mixture was shaken for 40 min and then washed twice to remove excess PEI and DSP. Maleimide-PEG (5 kDa)-NHS was conjugated to MSNP-PEI at a mass ratio of 1:1 in PBS pH 7.2 under shaking for 2 h. The MSNP-PEI-PEG was washed twice with PBS, resuspended, and kept in PBS. The antibody, trastuzumab (T), was conjugated to MSNP-PEI-PEG via a thiol-maleimide reaction following our published recipe. 5 First, trastuzumab was thiolated with Traut's reagent in phosphate buffer pH 8.0 with a 50-fold molar excess of reagent for 2 h and purified on a Zeba spin column. Thiolated trastuzumab was then mixed with MSNP-PEI-PEG at the mass ratio given in the published recipe. 5 The loading of siRNA was achieved by mixing T-NP and siRNA (at a nanoparticle/siRNA mass ratio of 50) in PBS solution under rigorous shaking at 250 rpm for 2.5-30 min at room temperature. The transmission electron microscopy (TEM) image of the MSNP cores and the schematic of the surface modification are presented in Figure 1. The material contained 65.3 wt.% MSNP, 13.5 wt.% PEI, and 18.2 wt.% PEG (all by TGA analysis), and 3 wt.% trastuzumab by BCA analysis, as reported previously. 5 The hydrodynamic size in PBS of the final construct with 2 wt.% siRNA was 113 ± 2.2 nm with a narrow size distribution (PDI of 0.2), and the zeta potential was 9.56 ± 0.13 mV in 10 mM NaCl, which is considered in the neutral range according to the NCI's Nanotechnology Characterization Lab (NCL). The hydrodynamic size of about 110 nm for the final nanoparticles (after the polymer coating, antibody attachment, and siRNA loading on the 50 nm [TEM size] MSNP core) indicated that the particles were not aggregated but remained as individual particles owing to the dense layer of PEG. If aggregated, the size would increase far more significantly (eg, from 32 nm [TEM size] of an MSNP core to 305 nm [DLS size in water] of a PEI-coated MSNP). 18

Characterization of size and zeta potential by DLS

Hydrodynamic diameter and zeta potential (charge) evaluations were performed with a Zetasizer Nano ZS (Malvern Instruments, Westborough, MA, USA). For size measurements, 100 µg/mL of the material in 100 mM Tris-HCl buffer or PBS was used. The charge was measured in 10 mM NaCl under the same suspension conditions. The samples were loaded in appropriate cuvettes/capillary cells and equilibrated to 25°C before a minimum of 3 measurements were made.

Freeze-thaw of nanoparticles (T-NP)

Nanoparticles at 10 mg/mL in 100 mM Tris-HCl buffer (pH 7.4) or PBS with 0%-10% TL (% w/w of TL per nanoparticle) as a lyoprotectant were slowly frozen at −1°C/min from room temperature to −80°C using a CoolCell LX Freezing Container (BioCision, San Rafael, CA, USA). The frozen materials were slowly thawed on ice to room temperature before use.

Lyophilization of nanoparticles (T-NP)

Nanoparticles (T-NP) were first lyophilized without siRNA, as shown in Figure 1C. T-NP was lyophilized at 10 mg/mL in 100 mM Tris-HCl buffer pH 7.4 or PBS with 0%-25% TL (% w/w) as the lyoprotectant. Then 500 µL of each nanoparticle formula was lyophilized in a 2-mL glass vial in a BenchTop Freeze Dryer (SP Scientific VirTis AdVantage 2.0, Warminster, PA, USA). The "initial" lyophilization conditions were adapted from the work by Sameti et al on amine-modified silica nanoparticles, which took 6 days. 15 Specifically, the formulation was slowly frozen at a shelf temperature of −55°C for 6 h. Primary drying was performed at a shelf temperature of −55°C and a pressure of 100 µbar for 24 h. The shelf temperature was then gradually increased to −40°C, and the pressure was reduced to 20 µbar for secondary drying for 54 h. After completion, vials were capped under vacuum with a built-in stoppering function. This process worked well at preserving the material but still took 4 days. To reduce the lyophilization time further, the conditions were optimized as follows: the freezing time was reduced from 6 to 3 h (still at −55°C), the primary drying temperature was increased to −40°C (100 µbar), and the secondary drying temperature was increased to 20°C (20 µbar) while the time was shortened to 12 h (the "optimized" condition). Thermocouples were inserted into representative vials to monitor the product temperature throughout the lyophilization process.
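To make the two shelf programs easy to compare, the sketch below (not from the paper; a plain restating of the parameters above) encodes each cycle as a list of steps and totals the run times:

# Each step: (name, shelf temperature in deg C, chamber pressure in microbar
# or None for ambient pressure during freezing, duration in hours).
INITIAL_CYCLE = [
    ("freeze",           -55, None,  6),
    ("primary drying",   -55,  100, 24),
    ("secondary drying", -40,   20, 54),
]

OPTIMIZED_CYCLE = [
    ("freeze",           -55, None,  3),
    ("primary drying",   -40,  100, 24),
    ("secondary drying",  20,   20, 12),
]

def total_hours(cycle):
    return sum(step[3] for step in cycle)

print(total_hours(INITIAL_CYCLE), "h initial vs", total_hours(OPTIMIZED_CYCLE), "h optimized")
# 84 h vs 39 h of shelf time, consistent with shortening the process from
# roughly 4 days to well under 2.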
Storage, reconstitution, and siRNA loading

Lyophilized nanoparticles were used immediately or stored at 4 different temperatures (−20°C, 4°C, 20°C, and 37°C) for a specified period of time (up to 6 months) prior to characterization and performance evaluation. Prior to use, the lyophilized material was reconstituted with 500 µL of RNase-free water to 10 mg/mL. The suspension was sonicated for 1 min. Size and charge were measured as previously described. For siRNA loading, the reconstituted nanoparticles were mixed with siRNA in PBS to achieve an NP/siRNA mass ratio of 50. The mixture was shaken at 250 rpm at room temperature for 2.5-30 min, and the material was then ready for transfection in cells. The size and charge of the nanoconstruct (post siRNA binding) were also measured. To characterize the siRNA loading, (Dy677)siSCR was loaded on the nanoparticles as described above. The suspension was then centrifuged at 21,130×g for 30 min, and the fluorescence signal of (Dy677)siSCR in the supernatant was measured on a Tecan Infinite M200; negligible signal was found for all materials, indicating complete siRNA binding.
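As a back-of-the-envelope illustration of this loading step (a sketch, not a protocol from the paper), the amounts needed per well follow directly from the NP/siRNA mass ratio of 50; the siRNA molecular weight used below is an assumed typical value for a 21-mer duplex, not a number from the study:

SIRNA_MW = 13300.0    # g/mol, assumed typical 21-mer duplex siRNA
NP_TO_SIRNA = 50.0    # nanoparticle/siRNA mass ratio used throughout this study

def loading_amounts(sirna_conc_nM, well_volume_uL):
    """Return (siRNA in ng, T-NP in ng) for one well at the given dose."""
    moles = sirna_conc_nM * 1e-9 * well_volume_uL * 1e-6   # mol of siRNA
    sirna_ng = moles * SIRNA_MW * 1e9                      # grams -> ng
    return sirna_ng, sirna_ng * NP_TO_SIRNA

sirna_ng, np_ng = loading_amounts(30, 100)   # eg, a 30 nM dose in a 100 uL well
print(f"{sirna_ng:.1f} ng siRNA with {np_ng:.0f} ng T-NP")   # ~39.9 ng and ~1995 ng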
Figure S1 shows a significant cellular uptake of the nanoparticles within 1 h of exposure, and a significant siRNA transport within the cells at 6 and 16 h (higher signal intensity indicated endosomal escape of siRNA). Five days after treatment with T-siHER2-NP, cells were analyzed for viability using the CellTiter-Glo® Luminescent Assay (Promega, Madison, WI, USA). The value was reported against the scrambled siRNA counterpart (T-siSCR-NP).
Data and statistical analysis
Size and charge measurements were performed in triplicate. siRNA binding was performed in duplicate. Gene silencing and cell viability were performed with 3-4 replicates. All data are reported as mean ± SD. Comparisons of 2 groups were performed with Student's t-tests (assuming normal distribution) using the statistical functions of Excel. A p-value <0.05 was considered to be statistically significant.
Buffer selection
To stabilize the pH, drugs or nanoparticles are prepared in physiological buffer systems such as PBS or Tris-HCl buffer. PBS was evaluated since our nanoparticle was synthesized, bound with siRNA, and initially kept in this buffer. Tris-HCl buffer was used because it was proven to be a biocompatible buffer in lyophilizing silica nanoparticles. 15 Its pH does not shift at freezing temperatures, 20 unlike that of PBS as shown in prior reports. [20][21][22] We investigated the effect of both buffers on freeze-thawed and freeze-dried nanoparticles. The nanoparticles were suspended at 10 mg/mL with a TL content of 0% and 10% (w/w of nanoparticles) in either 1X PBS or 100 mM Tris-HCl (pH 7.4). The formulae were then frozen slowly at a rate of −1°C/min overnight. For the freeze-thaw study, the materials were thawed slowly back to room temperature. For the freeze-dry study, the materials underwent the two drying steps mentioned above. After reconstitution, the hydrodynamic sizes of all freeze-thawed materials remained similar to that of the freshly made material, as shown in Figure 2A. The freeze-thawed materials did not aggregate in either PBS or Tris-HCl buffer, even without TL as a lyoprotectant. The lyophilized materials in PBS, however, aggregated even in the presence of TL (Figure 2B). The sizes increased to 1.5-fold over that of the freshly made counterpart. When lyophilized in Tris-HCl buffer, the particles did not aggregate and retained the original size in the presence of 10% TL (Figure 2B). Furthermore, it was the only formulation in which the nanoconstruct, once loaded with siRNA, retained the size of the freshly made counterpart (Figure 2B). Although a previous report 23 suggested that PBS may affect the freezing step (eg, drastic pH change due to a decrease in PBS solubility as temperature decreases), leading to unwanted particle aggregation, our data indicate that it was the drying step that caused our particle aggregation in the PBS system. In contrast, the low-concentration Tris-HCl buffer, which does not cause a drastic pH change during the freezing step, 20,24 appeared to protect the nanoparticle better in the drying step as well. Thus, Tris-HCl was selected as the lyophilization buffer in all subsequent experiments.
lyoprotectant selection and optimization
A lyoprotectant is a vital component of a lyophilized nanoparticle formula. It protects nanoparticles from the stresses generated during the freezing and drying steps of lyophilization. 4 These stresses, along with concentration changes encountered during the freezing step, could cause particle aggregation, irreversible fusion, or destabilization. 25
The most common lyoprotectants are sugars and other polyol compounds, such as TL, sucrose, glucose, sorbitol, and glycerol. We screened for the best lyoprotectant by performing a freeze-thaw experiment on our nanoparticles using all five lyoprotectants and found TL and sucrose to be the best at preserving the size and charge of our nanoparticles (not shown). However, since our nanoparticles are developed as siRNA carriers for cancer therapeutics, we chose TL over sucrose because there is a report that sucrose can accelerate tumor growth in tumor-bearing mice. 26 TL was also previously reported as an effective lyoprotectant for silica nanoparticles, 15 trastuzumab antibody, 17 and siRNA on lipid nanoparticles. 14 Next, we investigated the concentration of TL required for lyophilization of our nanoparticles. We rejected the 0% TL condition since the material increased in size (Figure 2B). Nanoparticle suspensions at 10 mg/mL in 100 mM Tris-HCl with a TL content of 5%-25% (%w/w of T-NP) were lyophilized under the aforementioned conditions. All lyophilized T-NP with a TL content of 5%-25% retained the average size and charge of the freshly made material after 1 min of sonication, as shown in Figure 3A and B. However, with 25% TL, the finished product had a partially collapsed cake, while lower TL contents produced perfect cakes (Figure S2). It is possible that the high amount of TL caused a drop in the collapse temperature of the mixture (ie, −30°C for a TL-water binary mixture). 27 When the product temperature during primary drying exceeds the collapse temperature, it causes loss of cake structure. 28 Therefore, we rejected the 25% TL condition. In addition to size and charge, luciferase knockdown and cancer cell killing were used to test the performance of the lyophilized materials upon loading with siLUC and siHER2, respectively, while siSCR was used as a negative control. When delivering siLUC, the lyophilized materials yielded luciferase knockdown efficacy comparable to that of the freshly made material counterparts (from the same batch), with the 5% TL condition yielding the closest outcome (Figure 3C). When delivering siHER2, they also yielded BT474 cell viability comparable to that obtained with freshly made materials (Figure 3D). It is worth noting that the nanoconstruct contains trastuzumab, and thus killed about 50% of BT474, which is a very trastuzumab-sensitive cell line, as shown in our prior publication. 5 We also showed that this killing effect was much lower in a trastuzumab-resistant cell line (BT474 made resistant to trastuzumab). 5 An exception was found with the 0% TL condition, which yielded material that was more toxic to cells than the freshly made material (eg, greater nonspecific cell death with T-siSCR-NP). The twofold increase in size (see Figure 2B) may contribute to higher toxicity due to higher cellular uptake by mass. On the basis of particle size and efficacy, the 5%-10% TL conditions possessed characteristics similar to those of the freshly made counterpart, and the 5% TL condition was slightly better than the 10% TL condition based on silencing efficacy. Since a lower amount of additive is more desirable for human applications, 5% TL was selected for subsequent lyophilization processes and the long-term storage study.
reconstitution
The working drug formula must be reconstituted easily in clinics. We tested the reconstitution of the optimal lyophilized T-NP (with 5% TL). Five hundred microliters of RNase-free water was added slowly to the lyophilized cake.
The suspension was vortexed for a given time and subjected to hydrodynamic size measurement. After 30 seconds of vortexing, the size was about 6 times that of the freshly made material and was only reduced to about 3 times after a prolonged period of 10 min (Figure S3A). However, after only 1 min of sonication, the size and size distribution matched those of the freshly made material (Figure S3). Prolonged vortexing alone did not bring the size down, suggesting particle agglomeration post lyophilization. A short sonication was needed to separate individual particles from each other. We concluded that 1 min of sonication was best for reconstitution of lyophilized T-NP and was used throughout the studies.
Time and temperature optimization
The initial lyophilization employed −55°C for 6 h during the freezing step, followed by 2 drying steps at −55°C for 24 h and −40°C for 54 h. The entire lyophilization process took almost 4 days. We proceeded to optimize the freezing time and drying temperature in order to shorten the entire process while preserving the characteristics and performance of the lyophilized materials. We started with the formulation of 10 mg/mL T-NP in 0.1 M Tris-HCl with 5% w/w TL. The freezing time was reduced from 6 to 3 h. During primary drying, the product temperature must be kept below the collapse temperature to avoid collapse of the lyophilized cake. 4 The primary drying temperature was increased to −40°C with a chamber pressure of 100 µbar. At this condition, the product temperature, monitored with a thermocouple, stabilized at −20°C for at least 10 h (overnight) before primary drying was stopped, to ensure complete ice sublimation. The total primary drying time was 24 h. Trastuzumab antibody was shown to preserve its protein structure after undergoing lyophilization with a secondary drying temperature of 20°C. 17 Thus, we elevated the secondary drying temperature of our material (containing trastuzumab) from −40°C to 20°C under a chamber pressure of 20 µbar, which allowed us to shorten the drying time to 12 h. The lyophilized product under the "optimized" conditions demonstrated a noncollapsed cake-like structure similar to that in Figure S2 (with 5% TL). At these new conditions, the material showed no difference from that lyophilized under the original conditions in terms of size, luciferase silencing, and BT474 killing (Figure 4).
Figure 4 No difference in materials from two lyophilization conditions in terms of size, luciferase silencing, and BT474 killing. Notes: T-NPs were lyophilized at 10 mg/mL in 100 mM Tris-HCl with 5% TL. In the original conditions, samples were slowly frozen at −55°C for 6 h, followed by primary drying at −55°C for 24 h and secondary drying at −40°C for 54 h. In the optimized conditions, the freezing step was reduced to 3 h, while the primary drying step was performed at −40°C for 24 h and the secondary drying step at 20°C for 12 h. Abbreviation: TL, trehalose.
long-term storage of lyophilized nanoparticles (T-NP)
To optimize the storage condition, the nanoparticles lyophilized under the aforementioned conditions were stored at 4 different temperatures: −20°C, 4°C, 20°C, and 37°C. The lyophilized products were evaluated bimonthly for physical appearance, size, charge, siRNA loading, luciferase silencing efficacy, and cancer cell killing efficacy. All materials (stored at 4 temperatures) retained the same cake appearance as the freshly lyophilized product. The materials were reconstituted and measured for hydrodynamic size (relative to freshly made material), as shown in Figure 5A. The material stored at 20°C started to aggregate at 6 weeks, and the one stored at 37°C started to aggregate as soon as 2 weeks (ie, not fully reconstituted even after 7 min of sonication) (Figure 5A). The lyophilized products stored at −20°C and 4°C for up to 8 weeks were reconstituted effectively within 1 min of sonication to
achieve the same size and size distribution as the freshly made material (Figure 5B). At week 12, the material stored at 4°C had a larger size distribution (Figure 5C), and its size had increased by 150% by week 16 of storage (Figure 5A). In contrast, the material stored at −20°C continued to retain both the size and the size distribution for at least 6 months with minimal changes (Figure 5D; longer term was not monitored). On the basis of size, the best two storage temperatures were −20°C and 4°C. Next, we evaluated the charge (zeta potential), siRNA loading, luciferase silencing efficacy, and cancer cell killing efficacy of the lyophilized materials stored at both temperatures. The charge (Figure 6A) and siRNA loading (Figure 6B) of both materials remained similar to those of the freshly made nanoparticles. Luciferase silencing efficacy (Figure 6C; with siLUC) and cancer cell killing efficacy (reported as the viability of BT474 cells, Figure 6D, with siHER2) of both materials were also comparable (or showed only marginal changes) to those of the freshly made materials for up to 8 weeks, while the material stored at 4°C started to deviate from the performance of the freshly made materials at 12 weeks. This was in agreement with the larger size and size distribution of that material (Figure 5C). Larger particle sizes lead to higher silencing efficacy and BT474 cell killing in vitro, but are not desirable for in vivo use. However, the material stored at −20°C retained size (Figure 5A and D) and efficacies comparable to those of the freshly made materials (Figure 6C and D). We conclude that −20°C is the most suitable temperature for long-term storage of the nanoparticles. In short, MSNP nanoparticles coated with PEI and PEG and conjugated with trastuzumab could be lyophilized and stored stably at −20°C for at least 6 months (longer term was not monitored). Lyophilization of siRNA is routinely done by vendors such as QIAGEN (Germantown, MD, USA), and lyophilized siRNA can be kept stably at −20°C for 12 months according to QIAGEN; thus, it was not a subject of this work. A two-vial approach (ie, siRNA and nanoparticle carrier in separate vials) is preferred in a personalized medicine setting since it permits interchangeable siRNAs for targeting different genes using the same nanoparticle formulation. This is especially true for nanoparticle systems like ours, which allow easy loading of siRNA that can be done in clinics. Figure 7 shows that after just 2.5 min of binding in PBS, the T-NP and siRNA construct yielded hydrodynamic size and luciferase silencing efficacy comparable to those after 5, 10, and 30 min of binding. More conveniently, we can also add powders of siRNA and nanoparticle lyophilized separately into the same vial, and let them bind together upon reconstitution. This should help simplify the expensive stability studies of our materials during the IND-enabling studies. We foresee no issue with this approach since both components are highly soluble in saline and achieve complete binding within a few minutes (Figure 7).
Conclusion
In summary, we successfully developed the lyophilization process for a hybrid polymer−inorganic nanoparticle system using a much shorter time than that reported for amine-modified MSNP (2 vs 6 days). 15
The antibody-conjugated PEG-PEI-silica nanoparticles were lyophilized in Tris buffer with 5% TL as the lyoprotectant. The optimized conditions produced lyophilized material with a cake-like structure that retained the hydrodynamic size, charge (zeta potential), siRNA loading ability, silencing efficacy, and cancer cell killing efficacy of the freshly made material. The freeze-dried nanoparticles can be stored at −20°C for at least 6 months. Longer-term stability (up to 2 years) will be evaluated by a Good Manufacturing Practice-certified contract research organization as we prepare the material for clinical trials using the lyophilization and storage conditions optimized herein. The sol-gel MSNP synthesis and the layer-by-layer modification of PEI, PEG, antibody, and siRNA on MSNPs offer good synthesis reproducibility and scalability. We have scaled up the synthesis protocol to yield 6 g of MSNP, which is 50-fold higher than our small-scale synthesis (the human dose is anticipated to be 140-350 mg NP per dose). MSNPs have the same size and morphology for both small- and large-scale syntheses (Figure S4). We have also reported the outstanding reproducibility of the nanoconstruct synthesis in terms of size (relative standard deviation [RSD; deviation from the mean] of 3.2% for 6 synthesis batches) and silencing efficacy (RSD of 2.3% from 6 batches). 5 The lyophilization conditions, storage conditions, and material evaluation should be applicable to other similar nanoparticle systems consisting of inorganic nanoparticle cores that are surface modified with cationic polymers and PEG and conjugated with biomolecules like antibodies.
Supplementary materials
Figure S1 Cellular uptake of T-siRNA-NP into HER2+ BT474 cells. Notes: BT474 cells were seeded in a 96-well plate at a density of 10,000 cells per well. The next day, cells were transfected with T-NP loaded with 2 wt.% of nontargeting scrambled siRNA tagged with DY677 at a concentration of 60 nM siRNA. One hour after transfection, cells were washed with D-PBS and incubated for 6 and 16 h for further intracellular transport of the siRNA. Images were taken on an EVOS FL Auto fluorescence microscope (Life Technologies) at a magnification of 400×. Abbreviation: PBS, phosphate-buffered saline.
2018-08-06T12:46:11.431Z
2018-07-01T00:00:00.000
{ "year": 2018, "sha1": "eac15e418f5dcccad43cf71343d41a4de802f0ff", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=43063", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9650ca96d8bb6592e090b83034e556437f9cee88", "s2fieldsofstudy": [ "Biology" ], "extfieldsofstudy": [ "Chemistry", "Medicine" ] }
3292519
pes2o/s2orc
v3-fos-license
UGT1A1 polymorphisms in cancer: impact on irinotecan treatment Mutations in the UGT1A1 gene have been implicated in Gilbert syndrome, which shows mild hyperbilirubinemia, and a more aggressive childhood subtype, Crigler–Najjar syndrome. To date, more than 100 variants have been found in the UGT1A1 gene. Among them, UGT1A1*28 and UGT1A1*6 have been reported to be associated with severe toxicities in patients treated with irinotecan-based chemotherapy by increasing the dose of SN-38 (7-ethyl-10-hydroxycamptothecin), an active form of irinotecan. Many association studies and meta-analyses have demonstrated the contribution of UGT1A1*28 and UGT1A1*6 polymorphisms to the toxicities caused by irinotecan-based therapy. The aim of this review was to evaluate the impact of these variants upon the toxicities and the efficacy of irinotecan-based chemotherapy. Introduction Irinotecan hydrochloride, inhibiting topoisomerase I, is one of the key anticancer drugs in chemotherapy for several cancers such as colorectal cancer, lung cancer, gastric cancer, and gynecologic cancers. [1][2][3][4] The patients treated with irinotecan occasionally experience severe neutropenia and delayed diarrhea; however, the occurrence of these adverse reactions has been unpredictable and largely unexplained. 5 An active metabolite of irinotecan, SN-38 (7-ethyl-10-hydroxycamptothecin), is glucuronidated by uridine diphosphate glucuronosyltransferase 1As (UGT1As), such as UGT1A1, and is inactivated by forming the SN-38 glucuronide (SN-38G). Among these UGT1A enzymes, UGT1A1 protein has the highest ability to glucuronidate SN-38. 6 Various studies have demonstrated a relationship between UGT1A1 genotypes affecting SN-38 pharmacokinetics and the experienced toxicity. 7 The transport pathway of irinotecan is shown in Figure 1. In addition to UGT1A1 polymorphism, polymorphisms of carboxylesterase (CES) and ATP-binding cassette (ABC) genes have been reported to affect the metabolism of irinotecan. 8,9 In this review, the impact of UGT1A1 genotypes on irinotecan treatment will be discussed. UGT1A1 polymorphisms and disease susceptibility Mutations in the UGT1A1 gene have been implicated in Gilbert's syndrome, which shows mild hyperbilirubinemia, and a more aggressive childhood subtype, Crigler-Najjar syndrome. 10,11 A common cause of decreased UGT1A1 activity is the insertion of a TA in the TATA box at the promoter region of the UGT1A1 gene, which was named as UGT1A1*28. 10 Individuals with homozygous UGT1A1*28 had higher levels of serum bilirubin compared with those with heterozygous UGT1A1*28 or the wild-type allele. 10 Gilbert's syndrome, also known as constitutional hepatic dysfunction or familial nonhemolytic jaundice, is an inherited disorder of the liver resulting in an overabundance of bilirubin. Most of the patients with Gilbert's syndrome are asymptomatic; however, they sometimes present with episodes of mild intermittent jaundice due to predominantly unconjugated hyperbilirubinemia. Crigler-Najjar syndrome is a rare, but more severe, disorder of bilirubin metabolism and is divided into two distinct forms (types I and II) based upon the severity of the disease. Gilbert's syndrome is part of a continuous spectrum of altered glucuronidation that extends to the fatal Crigler-Najjar disease. Gilbert's syndrome is primarily linked to UGT1A1*28 variants, but other variants in the promoter and coding regions are also involved in the predisposition of the disease. 
12 To date, more than 100 variants have been identified in the UGT1A1 gene. 13 Among these polymorphisms, the clinically important variants are listed in Table 1. [14][15][16][17][18][19] Recently, a large population-based cohort study, the Rotterdam Study, 20 investigated the association between UGT1A1 genotype and the incidence of coronary heart disease (CHD). However, in this study, neither bilirubin nor UGT1A1*28 genotype was associated with the development of CHD. Another large trial, evaluating 1,780 unrelated individuals aged more than 24 years, suggested that homozygous UGT1A1*28 alleles and a higher serum level of bilirubin were related to a lower risk of cardiovascular disease (CVD). 21 Serum bilirubin has a protective effect on CVD and CVD-related disease. It seems that individuals with Gilbert syndrome who carry the UGT1A1*28 allele and have a moderate elevation of serum bilirubin could have a lower risk of CHD and CVD.
UGT1A1*28 allele and efficacy of irinotecan-based therapy
Emerging data on the role of genetic variants in the UGT1A1 gene confirm that the UGT1A1*28 allele is associated with severe toxicities in irinotecan-based chemotherapy. 22 Additionally, it seems that patients with the allele also had better outcomes, despite severe toxicities. 22 A study by Toffoli et al, 22 conducted in 238 patients with metastatic colorectal cancers, showed that *28/*28 cases had a better response rate and progression-free survival compared with *1/*1 cases. However, most of the other studies evaluating survival according to UGT1A1 genotypes failed to show the significance of UGT1A1 variants in terms of survival. A meta-analysis by Dias et al, 23,24 evaluating 10 studies using irinotecan-based chemotherapy, revealed no significant difference in efficacy in terms of response rate, progression-free survival, or overall survival. Additionally, another meta-analysis, by Liu et al, 25 also confirmed that the UGT1A1 genotype could not be a predictor of response rate or survival. These results might reflect a lower dose intensity of irinotecan in patients with *28/*28 or *1/*28 alleles, due to severe toxicities. Representative studies evaluated in these meta-analyses are listed in Table 2. 22,[26][27][28][29][30][31][32][33][34][35][36]
Current recommendation for UGT1A1 genotyping in daily practice
The US Food and Drug Administration recommends on the irinotecan drug label that patients with the *28/*28 genotype should receive a lower starting dose of irinotecan. 57 Additionally, the recommendation also noted that "the precise dose reduction in this patient population is not known, and subsequent dose modifications should be considered based on individual patient tolerance to treatment". 57 According to European Society for Medical Oncology (ESMO) guidelines, testing for UGT1A1 polymorphisms should be considered only if severe toxicity potentially related to treatment with irinotecan occurs. The ESMO guideline noted that testing for UGT1A1 is particularly important when irinotecan is used at high doses (300-350 mg/m²) but of less importance when it is administered at lower doses (125-180 mg/m²).
58 According to the Japanese Society for Cancer of the Colon and Rectum (JSCCR) guidelines, it is especially desirable to test for a UGT1A1 genetic polymorphism before administering irinotecan to patients with a high serum bilirubin level, elderly patients, patients whose general condition is poor (eg, performance status 2 [PS2]), and patients in whom severe toxicity (especially neutropenia) developed after a previous administration of irinotecan. 59 The guidelines also noted that "irinotecan toxicity cannot be predicted with certainty on the basis of the presence of a UGT1A1 genetic polymorphism alone", and that "it is essential to monitor patients' general condition during treatment and to manage adverse drug reactions carefully, irrespective of whether a genetic polymorphism is detected". In the USA, single-agent irinotecan (350 mg/m², triweekly, monotherapy) is usually used as one of the "irinotecan-based therapies", so the doses of irinotecan are usually higher than in Europe (180 mg/m², biweekly, combination) or Japan (150 mg/m², biweekly, combination). Although the recommendations for UGT1A1 genotyping differ according to the doses of irinotecan commonly used in daily practice, the clinical usefulness of genotyping should always be considered in all patients who receive irinotecan-based therapy.
Conclusion
Emerging data have confirmed an increased risk of severe toxicities, such as neutropenia, in patients with the UGT1A1*28 and/or UGT1A1*6 genotype when the patients received irinotecan-based chemotherapy. Homozygous variants and double heterozygous variants showed a higher risk of severe toxicities compared with single heterozygous variants. However, genotype-based studies suggest that the maximum tolerated dose (MTD) is clearly lower in patients with heterozygous UGT1A1 variants compared with those with wild-type alleles. Further clinical studies that include heterozygous UGT1A1 variants, in addition to homozygous variants, are needed to evaluate the clinical utility of UGT1A1 genotyping in patients treated with irinotecan-based therapy. On the other hand, although severe toxicities were clearly evident when the dose of irinotecan was high or intermediate, the incidence of these toxicities was also significantly higher in variant carriers even when the dose of irinotecan was lower. Furthermore, no clinical significance in terms of tumor response or survival was found according to UGT1A1 genotypes. Further investigations, such as genotype-based therapy, are needed to increase the efficacy and decrease the toxicities for patients receiving irinotecan-based therapy.
Disclosure
The authors report no conflicts of interest in this work.
2018-04-03T04:37:28.955Z
2017-02-28T00:00:00.000
{ "year": 2017, "sha1": "0e58825473507e8825210d9433ab145499b56173", "oa_license": "CCBYNC", "oa_url": "https://www.dovepress.com/getfile.php?fileID=35221", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0e58825473507e8825210d9433ab145499b56173", "s2fieldsofstudy": [ "Biology", "Computer Science" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
229507935
pes2o/s2orc
v3-fos-license
Dynamical systems, celestial mechanics, and music: Pythagoras revisited
Gioseffo Zarlino reintroduced the Pythagorean paradigm into Renaissance musical theory. In a similar fashion, Nicolaus Copernicus, Galileo Galilei, Johannes Kepler, and Isaac Newton reinvigorated Pythagorean ideas in celestial mechanics; Kepler and Newton explicitly invoked musical principles. Today, the theory of dynamical systems allows us to describe very different applications of physics, from the orbits of asteroids in the Solar System to the pitch of complex sounds. Our aim in this text is to review the overarching aims of our research in this field over the past quarter of a century. We demonstrate with a combination of dynamical systems theory and music theory the thread running from Pythagoras to Zarlino that allowed the latter to construct musical scales using the ideas of proportion known to the former, and we discuss how the modern theory of dynamical systems, with the study of resonances in nonlinear systems, returns to Pythagorean ideas of a Musica Universalis.
Nature most excellent and compleat
To the Pythagoreans, music represented the paradigm of order emerging from the primordial chaos, that is, the root of the cosmos. Moreover, this order could be understood through number, the tool used by God, the Great Geometer, for creating the universe. For, if music were governed by number - Pythagorean proportion - the same was true of the celestial bodies that play a heavenly symphony in the sky: the music of the spheres. At the same time, on an intermediate level, the temple represented the trait d'union between man and the heavens, the micro- and macro-cosmos. Within this metaphysical framework it is not at all strange that, as discussed by Alberti [1], the architecture of the temple should also follow the laws of numbers, that is, the particular proportions giving a sense of harmony and beauty through all the levels of creation. The Pythagorean approach lay almost dormant during mediaeval times. However, beginning in the High Middle Ages, a revival began to take shape. At the beginning of the fourteenth century, Dante [2] wrote that regarding the three principles of natural things, namely matter, privation, and form: Number exists not only in all of them together, but also, upon careful reflection, in each one individually; for this reason Pythagoras, as Aristotle says in the first book of the Physics, laid down even and odd as the principles of natural things, considering all things to have numerical aspect.
With the advent of the Renaissance, Pythagorean ideas spread with new vigour. Many of the translations of Arabic books that had preserved classical Greek knowledge through mediaeval times took place in Venice where, thanks to maritime commerce, translators from the Arabic and from the Greek were available for producing Latin and vulgar versions of the texts. It is natural, then, that the reintroduction of Pythagorean ideas in musical theory and practice found full expression in Venice. The main protagonist was Gioseffo Zarlino, who consolidated the earlier work of Franchino Gaffurio and Francesco Maurolico. Zarlino, a Franciscan friar, was organist in Chioggia and then choirmaster at St. Mark's in Venice from 1565 to 1590. In architecture, its main standard-bearer was Leon Battista Alberti, quoted above, whose work was further developed by many of the great eclectic minds of the Renaissance, including Francesco de Giorgio Martini, Sandro Botticelli, Leonardo da Vinci, Francesco Giorgi and Andrea Palladio. With the nascence of modern experimental science following the work of Galileo Galilei - intellectual descendant of Nicolaus Copernicus, contemporary of Johannes Kepler, and academic forefather of Isaac Newton - there began a gradual process of bifurcation between arts and sciences. On one hand, there has been a progressive demystification of the role played by proportion in the arts, including in music, architecture, and painting. On the other hand, this has been accompanied by an increasing neglect by science of the Pythagorean ideas relating art, music, and natural philosophy. However, recent research in dynamical systems theory should change these views. Dynamical resonances predicted by theory describe very different types of behaviour, from the pitch of complex sounds to the orbits of celestial bodies in the Solar System. These achievements of modern science bring together in a surprising fashion physiological behaviour, astronomy, and the theory of numbers; that is, the micro-cosmos, the macro-cosmos, and Pythagorean number. In this text we discuss how the mathematics of proportion and aesthetics that developed from Pythagoras to the Renaissance has re-emerged in the science of dynamical systems. We highlight how modern theory allows new insight into the Pythagorean scientific conception reintroduced into music by Zarlino and others in Renaissance Venice.
Pythagoras, musical intervals, and the Music of the Spheres
For the Pythagorean school [52,65], the Cosmos, that is, our universe, was nothing but the result of the order imposed by the Demiurge, the Great Geometer, on the primitive chaos. The identification of the god of creation with a great surveyor shows clearly the concepts that guided Pythagorean thought. The Pythagoreans saw that there is a unique tool to find order in the universe: mathematics, which may occur in its twin aspects of geometry and arithmetic. These two, together with music and astronomy, became the four liberal arts of the quadrivium. The quadrivium brought about the gradual introduction of classical thought into mediaeval instruction. Its consolidation is attributed to Boethius in the 6th century, a main mediaeval intermediary from the Pythagoreans and classical antiquity to the neo-Platonists of the Renaissance.
His near contemporary Proclus wrote [48]: The Pythagoreans considered all mathematical science to be divided into four parts: one half they marked off as concerned with quantity, the other half with magnitude; and each of these they posited as twofold. A quantity can be considered in regard to its character by itself or in its relation to another quantity, magnitudes as either stationary or in motion. Arithmetic, then, studies quantities as such, music the relations between quantities, geometry magnitude at rest, spherics [i.e., astronomy] magnitude inherently moving.
The properties of numbers were the most important subject of study since these properties could be observed at all levels of Creation, from the movement of the stars in the firmament - the macrocosm - down to man himself - the microcosm. It is within this context that the scientific study of music began. In the famous legend of the forge [3], first related by Nicomachus [44], illustrated in Figure 1, the discovery of harmonic musical intervals is attributed to Pythagoras himself as he listened to the sounds produced by hammers of different sizes that struck a large piece of red-hot iron on an anvil. Although this tradition is based on ancient myths and legends - which notably associate the musical sound of blacksmiths at work and the invention of two sciences: acoustics and metallurgy - most likely the Pythagoreans studied musical intervals not with hammers, but with the monochord and other stringed instruments, as we note in Fig. 1.
Fig. 1 Pythagoras and the discovery of musical intervals in a blacksmith's forge illustrated in a mediaeval woodcut (Bayerische Staatsbibliothek). Today, Verdi's Il Trovatore and Wagner's Das Rheingold and Siegfried, among other works, all use the sounds of hammers on anvils. But real hammers on real anvils are notoriously nonmusical, so productions of these operas often use fabricated percussion instruments made to look anvil-like that do emit musical tones. As was already pointed out by scholars of musical history some centuries ago "upon examination and experiment it appears, that hammers of different size and weight will no more produce different tones upon the same anvil, than bows or clappers of different sizes will from the same string or bell. Indeed, both the hammers and anvils of antiquity must have been of a construction very different from those of our degenerate days, if they produced any tones that were strictly musical" [8]. Jones [31] (page 344) makes the suggestion that ancient Greeks might have used convex shield-like pieces of metal, which might possibly have been "sonorous anvils", and goes on to propose how this might resolve the question, but we are not convinced.
Pythagoras' discovery is probably the first in human history that can be qualified as a scientific theory; i.e., a description of a natural phenomenon in mathematical terms.
Table 1 The consonant intervals - ratios of frequencies between two tones - of modern Western music and the principal intervals of the so-called just scales are characterized by rational numbers. For example, when the frequency of a C is multiplied by 3, one gets a (just) G one octave higher. To get the interval C-G within an octave, one must divide the larger one by 2, that is 3/2. Likewise, dividing the frequency of C by 3, one obtains the (just) F two octaves lower, and one must multiply that frequency by 4 to obtain the F in the octave above the C, giving the interval 4/3.
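The arithmetic described in the Table 1 caption (multiplying or dividing a frequency by 3 and then folding the result back into a single octave by factors of 2) is easy to make concrete; the following is a minimal sketch, not taken from the paper:

```python
from fractions import Fraction

def fold_into_octave(r: Fraction) -> Fraction:
    """Multiply or divide by 2 until the ratio lies within one octave [1, 2)."""
    while r >= 2:
        r /= 2
    while r < 1:
        r *= 2
    return r

# A just G is 3 times the frequency of C; folded into the octave it gives the fifth.
fifth = fold_into_octave(Fraction(3, 1))    # -> 3/2
# A just F is 1/3 of the frequency of C; folded into the octave it gives the fourth.
fourth = fold_into_octave(Fraction(1, 3))   # -> 4/3
print(fifth, fourth)                        # 3/2 4/3
```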
With the finding that certain small integer relationships between the lengths of strings (see Table 1) produce harmonious sounds, the Pythagoreans put music at the centre of the intellectual effort of their school. In particular this was due to Archytas, "of all the Pythagoreans the most devoted to the study of music", who "tried to preserve what follows the principles of reason not only in the concords but also in the divisions of the tetrachords", according to Ptolemy [50,7]. This view of music as a paradigm of cosmic order crystallized completely with the music of the spheres: the idea that the planets through their movement in the sky produce subtle harmonies that only the ears of initiates - such as Pythagoras - could hear. Pythagorean thought in its esoteric key was greatly demystified by the birth of modern experimental science with Galileo Galilei. However, the concepts developed by the adepts of the Pythagorean school clearly represent an encrypted or at least an analogous version of the reality they were trying to describe.
Zarlino and the reintroduction of Pythagorean concepts in the music of Renaissance Venice
Gioseffo Zarlino is recognized as the main protagonist in the reintroduction of classical concepts into the theory of Western music. Zarlino produced important theoretical papers that were compiled in several volumes: Le Istitutioni Harmoniche of 1558, Dimonstrationi Harmoniche of 1571, and Sopplimenti Musicali of 1588, all published in Venice; they were collected in his Complete Works [63]. We must note here the disputes in the field of music between Zarlino and his pupil, Vincenzo Galilei, musician and the father of Galileo Galilei [20,16,60], which sound like a current-day bad-tempered argument between a theoretician and an experimentalist. Zarlino widened the range of Pythagorean harmonic intervals so as to define the major and minor scales that are today known as just; that is, scales in which all the intervals are defined by rational numbers. The inclusion of the third and sixth intervals, both major and minor, defined by rational relationships like the Pythagoreans' fourth and fifth but extended to the senarius - the group of numbers 1-6 - permitted the generation of the major and minor scales. His Sintono Diatonico, which is not his original invention, but in fact dates back to Ptolemy in the second century CE, represents a major step towards modern polyphonic music in anticipating the theory of harmonics of Sauveur [55] of the eighteenth century. The influence of the work of Zarlino spread throughout Europe in the succeeding centuries, and led to the diffusion of concepts like just scales, developed rigorously from Helmholtz [23] in the nineteenth century to current microtonal music. It is a mistake to regard microtonality as modern, however. Microtonality - that is, music implemented with musical scales using intervals smaller than a semitone - has historically characterized the music of a number of Asian cultures, but there have also been many examples in the European tradition. Already in Le Istitutioni Harmoniche Zarlino had represented a harpsichord with 19 notes per octave [62]. This proposal of Zarlino, and others, such as 24 or 31 notes per octave, highlights the convergence with microtonal subdivisions suggested by recent theoretical results that we shall discuss below. From the theoretical point of view, Zarlino extended the set of the harmonic intervals using the same approach developed by the Pythagorean school: that of proportional division.
The intervals found by Pythagoras are those of the octave, fifth, fourth, and whole tone, which are summarized in Table 1. All these intervals can be obtained by division of the octave using the arithmetic and harmonic means. The ancient Greeks knew of the division of an interval into three different types of means between two quantities, a and b:
1. the arithmetic mean, or simply the mean, defined as (a + b)/2;
2. the harmonic mean, equal to the inverse of the arithmetic mean of the inverses, 2ab/(a + b);
3. and the geometric mean, √(ab), which coincides with the square root of the product of the arithmetic mean and the harmonic mean.
In other words, the geometric mean of the interval is the geometric mean of the arithmetic and harmonic means of the same interval, a result attributed to Archytas. Applying the arithmetic mean and the harmonic mean to the octave interval [1/1, 2/1] one obtains the values 3/2 and 4/3, corresponding to the fifth and the fourth, while the ratio between these two produces the interval corresponding to the whole tone, 9/8. The geometric mean was not used for the calculation of harmonic intervals, since its application usually produces irrational numbers, and so it was 'imperfect' from the perspective of Pythagorean doctrine. From a theoretical point of view, Zarlino's intervals can be obtained by calculating the arithmetic and harmonic means of fifths appearing in Pythagorean subdivisions. This construction, together with the scale it generates, is represented in Figure 2.
Fig. 2 a) The fifth and the fourth can be obtained by calculating, respectively, the arithmetic and harmonic means of the octave (double arrows indicate the interval between notes (fifth or fourth), dotted double arrows indicate that the corresponding interval is formed with a note outside the octave); b) Zarlino major scale: adding to the notes represented in the previous figure the arithmetic means of the fifths represented in the same figure, and the whole tone 9/8, which is obtained as the ratio between the fifth and fourth, one obtains the Zarlino major scale (dotted arrows indicate arithmetic mean); c) Zarlino minor scale: although there are several variants to obtain a chromatic scale starting from basic intervals (for example the variant of Delezenne or the use of a minor semitone 25/24 to obtain the altered notes ([34], p. 24)), a Zarlino minor scale can be obtained in a single step by calculating the harmonic means of the fifths shown in a) (dotted arrows indicate harmonic mean); d) Convergence with the golden scale of 12 notes: as mentioned in the text, to get all the possible fifths starting from the Pythagorean fifths and fourths in the octave it is necessary to include the note 8/9 (a fifth below 4/3). If one calculates the arithmetic and harmonic means of this fifth interval one obtains the notes 16/15 and 10/9. If they are put into place, we obtain all the rational intervals of the 12-note golden scale (a dotted arrow indicates harmonic mean, a plain arrow indicates arithmetic mean) [14,21].
The construction begins with the recognized Pythagorean intervals that are the octave and fifth. We may start with an octave of C and assume that in addition we know only of fifths. In other words, we have a C and another C an octave above (1/1; 2/1). Add a fifth to the first C. It is the note G (3/2), but now we can insert a fifth under the upper C, which is the note F (4/3). This note in relation to the lower C gives a fourth. In this way we generate F and G that can be defined as intervals of fifths with the two Cs forming the octave.
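As a quick numerical check of these relations (a sketch not taken from the paper), the three means of the octave can be computed directly; the step-by-step construction of the Zarlino scales then continues in the next paragraph.

```python
from fractions import Fraction
from math import sqrt, isclose

a, b = Fraction(1), Fraction(2)               # the octave [1/1, 2/1]

arithmetic = (a + b) / 2                      # 3/2, the fifth
harmonic = 2 * a * b / (a + b)                # 4/3, the fourth
geometric = sqrt(a * b)                       # sqrt(2), irrational ('imperfect')

print(arithmetic, harmonic, arithmetic / harmonic)   # 3/2  4/3  9/8 (whole tone)

# Archytas' result: the geometric mean of the interval equals the geometric
# mean of its arithmetic and harmonic means.
assert isclose(geometric, sqrt(arithmetic * harmonic))
```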
As we use only fifths, the note G can generate a fifth above, which is 9/4, and the F a fifth below, which is 8/9. Let us now remain with the initial notes C to C (1/1, 2/1), and their upper and lower fifths, F, G, D and B♭: 4/3, 3/2, 9/4, and 8/9. While the initial Pythagorean scheme uses only the numbers from 1 to 4, or their powers, Zarlino extends this to include 5 and 6; in this way one can include the intervals of sixths and thirds. These numbers appear naturally as different means of the existing intervals; for example (Figure 2b), including the arithmetic means gives the major scale of Zarlino (obtained without performing the arithmetic mean with the lower fifth of the F (8/9); it is necessary to include as a note the naturally generated interval between the Pythagorean F and G, which is a whole tone, (3/2)/(4/3) = 9/8). It is enough now to include the harmonic means to obtain the Zarlino minor scale (again the fifth lower than the F is not necessary); Figure 2c. Thus, starting only with the notes of the Pythagorean intervals, the octave and fifth, and the fourth and the whole tone as a consequence, and adding the arithmetic and harmonic means, the major scale and the minor scale of Zarlino are obtained.
Science's return to Pythagoras through celestial mechanics
The Pythagorean Musica Universalis or music of the spheres is today a metaphor, but the historical connections between music, mathematics, and astronomy have had a profound impact upon all these disciplines [5,49]. In the preface to his book De revolutionibus orbium coelestium (On the Revolutions of the Heavenly Spheres, [15]), Nicolaus Copernicus cites Pythagoreans as the most important influences on the development of his heliocentric model of the universe. In the century following his death the Inquisition banned De revolutionibus and all books advocating the Copernican system, which it called "the false Pythagorean doctrine, altogether contrary to Holy Scripture." This ban was flouted by Galileo Galilei in his Dialogo sopra i due massimi sistemi del mondo (Dialogue Concerning the Two Chief World Systems, [19]) with its account of conversations between a Copernican scientist, Salviati, who represents the author, a witty scholar, Sagredo, and a plodding Aristotelian, Simplicio. Galileo has Salviati say: That the Pythagoreans held the science of numbers in high esteem, and that Plato himself admired the human understanding and believed it to partake of divinity simply because it understood the nature of numbers, I know very well; nor am I far from being of the same opinion. But that these mysteries which caused Pythagoras and his sect to have such veneration for the science of numbers are the follies that abound in the sayings and writings of the vulgar, I do not believe at all. Rather I know that, in order to prevent the things they admired from being exposed to the slander and scorn of the common people, the Pythagoreans condemned as sacrilegious the publication of the most hidden properties of numbers or of the incommensurable and irrational quantities which they investigated. They taught that anyone who had revealed them was tormented in the other world. Therefore I believe that some one of them, just to satisfy the common sort and free himself from their inquisitiveness, gave it out that the mysteries of numbers were those trifles which later spread among the vulgar.
The work begins: Several years ago there was published in Rome a salutary edict which, in order to obviate the dangerous tendencies of our present age, imposed a seasonable silence upon the Pythagorean opinion that the Earth moves. There were those who impudently asserted that this decree had its origin not in judicious inquiry, but in passion none too well informed. Complaints were to be heard that advisers who were totally unskilled at astronomical observations ought not to clip the wings of reflective intellects by means of rash prohibitions. It was when the inevitable occurred and Galileo was convicted by the Inquisition that he is famously held to have muttered "eppur si muove" ("yet it does move"). His contemporary Johannes Kepler's search for harmonic proportions in the Solar System led to his discovery of the laws of planetary motion. Kepler's initial studies in the subject consisted in attempting to reconcile planetary orbits with the geometries of the five Platonic solids; thus he placed an octahedron between the orbits of Mercury and Venus, an icosahedron between Venus and Earth, a dodecahedron between Earth and Mars, a tetrahedron between Mars and Jupiter, and a cube between Jupiter and Saturn. This first result he published in Mysterium Cosmographicum (Cosmographic Mystery, [32]). He entitled his later book on the subject Harmonices Mundi (Harmonics of the World, [33]), after the Pythagorean teaching that had inspired him. Kepler describes himself falling asleep to the sound of the heavenly music, "warmed by having drunk a generous draught ... from the cup of Pythagoras". Kepler asked whether the greatest and least distances between a planet and the Sun (aphelion and perihelion) might approximate any of the harmonic ratios, but found they did not. He then looked at the relative maximum and minimum angular velocities of the planets (at perihelion and aphelion) measured from the Sun, and found that planets did seem to approximate harmonic proportions with respect to their own orbits, allowing them to be allotted musical intervals. He found that the maximum and minimum speeds of Saturn differed by an almost perfect 5/4 ratio, a major third. The extreme motions of Jupiter differed by 6/5, a minor third. The extremal speeds of Mars, Earth, and Venus approximated 3/2, a fifth; 16/15, a major semitone; and 25/24, a minor semitone, respectively. Kepler further examined the ratios between the fastest and slowest speeds of a planet and those of its neighbours, which he called converging and diverging motions. His finding that harmonic relationships structure the characteristics of the planetary orbits individually, and their relationships to one another, led to his three laws of planetary motion.
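Kepler's extreme-motion ratios can be checked, at least roughly, from orbital eccentricities: conservation of angular momentum gives omega_perihelion/omega_aphelion = ((1 + e)/(1 - e))^2. The sketch below uses approximate modern eccentricities (assumed values, not Kepler's own data), so the agreement with his ratios is only approximate.

```python
from fractions import Fraction

ECCENTRICITY = {            # approximate modern values, assumed for illustration
    "Saturn": 0.0565,
    "Jupiter": 0.0489,
    "Mars": 0.0934,
    "Earth": 0.0167,
    "Venus": 0.0068,
}

KEPLER_RATIO = {            # the intervals Kepler assigned (see text)
    "Saturn": Fraction(5, 4),
    "Jupiter": Fraction(6, 5),
    "Mars": Fraction(3, 2),
    "Earth": Fraction(16, 15),
    "Venus": Fraction(25, 24),
}

for planet, e in ECCENTRICITY.items():
    # omega ~ 1/r^2 at the apsides, with r = a(1 -/+ e)
    ratio = ((1 + e) / (1 - e)) ** 2
    print(f"{planet:8s} omega_peri/omega_aph = {ratio:.3f} (Kepler: {float(KEPLER_RATIO[planet]):.3f})")
```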
It is beautiful, as we have seen pointed out very recently [39], that Keplerian planetary orbits display all three means discussed above (Fig. 3). The semimajor axis a is the arithmetic mean of the perihelion r_min and the aphelion r_max, their geometric mean is the semiminor axis b, and l, the semilatus rectum, is the harmonic mean. It is the semimajor axis a whose cube is proportional to the square of the orbital period in Kepler's third law. Isaac Newton, the discoverer of the secrets of gravity, characteristically did not acknowledge his debt to his immediate predecessors but saw in the Pythagorean music of the spheres a prior description of his own law of universal gravitation. A visitor wrote that "Mr. Newton believes that he has discovered quite clearly that the ancients like Pythagoras, Plato etc. had all the demonstrations that he has given on the true System of the World" [41]. To give the same note (pitch) in the vibration of two strings of different lengths and held by different masses but otherwise similar, according to the law discovered by Mersenne [42] it is necessary to vary the masses hanging from them in direct relation to the square of their lengths. If we now think of two bodies at different distances from a centre of gravitational attraction, like two planets revolving around the Sun, to achieve the same force of attraction it is necessary for the masses to vary following this same law. That is, they should grow as the square of the distance to the centre of attraction. The subject has been discussed by several authors (see, for example, [22], p. 120), but erroneously invoking a dependence on the inverse square of the distance, which defines, instead, the variation of the gravitational force according to Newton's laws. It was Newton's pupil MacLaurin [38] who expressed the matter clearly: In general, that any musical chord may become unison to a lesser chord of the same kind, its tension must be increased in the same proportion as the square of its length is greater; and that the gravity of a planet may become equal to the gravity of another planet nearer to the sun, it must be increased in proportion as the square of its distance from the sun is greater. If therefore we should suppose musical chords extended from the sun to each planet, that all these chords might become unison, it would be requisite to increase or diminish their tensions in the same proportions as would be sufficient to render the gravities of the planets equal. And from the similitude of those proportions the celebrated doctrine of the harmony of the spheres is supposed to have been derived. Probably it is because this was enunciated clearly in print by the disciple, not by Newton himself, that little attention has been paid to this equivalence between gravitation and music. Recent results of scientific research in fields from particle physics to cosmology to nonlinear dynamics return us in curious and interesting ways to Pythagorean ideas. String theory, postulated as a unified description of gravity and particle physics, in which matter is hypothesized to consist at its lowest level of immensely tiny vibrating strings, takes us straight back to Pythagoras.
The big-bang theory for the origin of the universe does not differ greatly from the Pythagorean myth of creation: as for the Pythagoreans, for modern physics too there exists something before the Cosmos, the so-called vacuum fluctuations, which are the equivalent of the Pythagorean primordial Chaos, and it is this chaos that is ordered, giving rise to the creation of matter in the universe. In addition, the distribution of the background radiation in the universe, which reflects the structure of the early universe, seems to be due to the propagation of a primordial sound in a manner Pythagoras might well have appreciated [25], and a Platonic geometry of the universe has even been suggested [37]. As we shall describe in more detail below, both celestial mechanics and music theory have a mathematical basis in terms of dynamical systems. The existence of a special relationship defined by whole numbers and harmonic intervals as small as those of music can be associated with the properties of nonlinear dynamical systems. In particular, harmonic musical intervals, seen as nonlinear responses of our sense of hearing [11,13,21], and the synchronized movements of the celestial bodies of our planetary system [17,58] may be described by resonances of two or more frequencies [12,9,10] in much the same way as first suggested by Pythagoras some 2500 years ago.
The optimal number of notes in a scale
Today most musical scales are equitempered, not just, which means that the frequency interval between every pair of adjacent notes has the same ratio. The long historical battle between just and equal temperament, many of whose protagonists form part of our present story, is nonetheless one that we shall not enter here, because it takes us away from our purpose. We shall simply comment that, in mathematical terms, the problem is that there can be no one perfect musical scale because the equation (3/2)^n = 2^m, representing the circle of fifths, has no nonzero integer solution. We may consider the numbers of notes that correspond to the optimal division of an equitempered scale into an arbitrary number of notes in such a way as best to approximate the harmonic musical intervals established by Zarlino; see Table 1 and Figure 4.
Fig. 4 Measure of the quality of the approximation of all Zarlinian intervals, major and minor, provided by an equitempered scale with an arbitrary number of notes. In the abscissa is represented the number of notes of the equitempered scale and in the ordinate a global parameter σ describing the quality of the approximation to all consonant Zarlinian just intervals. σ is calculated as the summation over all just harmonic intervals of the quadratic difference between a just harmonic interval and the nearest interval produced by the corresponding equitempered scale. The minima noted in the abscissa correspond to proposed musical scales [14].
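In the spirit of the σ parameter of Figure 4, the sketch below scores each equitempered division by summing squared deviations (here measured on a logarithmic, octave-based axis) between a set of just intervals and the nearest equitempered step. The exact interval set and units used for σ in [14] may differ, so this is only an illustration; the divisions discussed in the text (12, 19, 31, 41, 53) should nonetheless stand out as comparatively good.

```python
from math import log2

# Consonant just intervals (frequency ratios); the precise list used in the
# paper is not reproduced here, so this set is an assumption.
JUST_INTERVALS = [16/15, 10/9, 9/8, 6/5, 5/4, 4/3, 3/2, 8/5, 5/3, 9/5, 15/8, 2.0]

def sigma(n_notes: int) -> float:
    """Sum of squared deviations between just intervals and the nearest n-TET step."""
    total = 0.0
    for ratio in JUST_INTERVALS:
        target = log2(ratio)                           # position within the octave
        nearest = round(target * n_notes) / n_notes    # nearest equitempered interval
        total += (target - nearest) ** 2
    return total

if __name__ == "__main__":
    for n in range(5, 60):
        mark = "  <-- discussed in the text" if n in (12, 19, 31, 41, 53) else ""
        print(f"{n:2d}  sigma = {sigma(n):.6f}{mark}")
```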
If any interval is divided by its arithmetic and harmonic means, the relationships formed between these means and the bounds of the interval are the same:

a / H(a, b) = A(a, b) / b,

where A(a, b) = (a + b)/2 and H(a, b) = 2ab/(a + b) denote the arithmetic and harmonic means of the bounds a and b. For example, the Pythagorean intervals within the octave satisfy this property: the fifth is equivalent to the fourth if reversed with respect to the octave, and vice versa. The golden section, ϕ² = ϕ + 1, also satisfies this property: in a series of four terms generated by the golden ratio, the two intermediate or central terms are given by the harmonic and arithmetic means of the extremes. If the geometric series of four terms is given by the proportion of the golden ratio ϕ, namely ϕ⁻¹; 1; ϕ; ϕ², the arithmetic and harmonic means of the extremes are

A(ϕ⁻¹, ϕ²) = (ϕ⁻¹ + ϕ²)/2 = ϕ and H(ϕ⁻¹, ϕ²) = 2ϕ⁻¹ϕ²/(ϕ⁻¹ + ϕ²) = 1,

which therefore coincide with the central terms. We must return to Kepler once more to note the relation to a Kepler triangle, 1² + (√ϕ)² = ϕ², formed by 3 squares in geometric progression 1; ϕ; ϕ² (Fig. 5): for two positive real numbers, their arithmetic mean, geometric mean, and harmonic mean are the lengths of the sides of a right triangle if and only if that triangle is a Kepler triangle [24].

Fig. 6 Golden scale of 12 notes; this scale is obtained on the basis of the continued fraction expansion of the golden section until the fifth convergent 8/5, which is the quotient of two consecutive Fibonacci numbers, and of palindromic symmetry properties. Compare, for example, the solution of Newton [22]. The golden mean, ϕ, has the continued-fraction expansion ϕ = (√5 + 1)/2 = 1 + 1/(1 + 1/(1 + 1/(1 + ...))), and the best rational approximations to ϕ are given by the convergents of this infinite continued fraction, arrived at by cutting it off at different levels in the expansion: 1/1, 2/1, 3/2, 5/3, 8/5, 13/8, and so on; the convergents of the golden mean are ratios of successive Fibonacci numbers [14].

Some years ago, we took advantage of this equivalence we have highlighted above, and of the known properties of the golden section in relation to nonlinear dynamics, to put forward a set of scales that we thought might be musically interesting, the golden scales [14]. The construction of golden scales is based solely on the properties of the golden section, the convergents of whose continued fraction expansion coincide with the ratios of successive Fibonacci numbers, and on palindromic symmetry. Figure 6 shows the construction of the 12-note golden scale. The notes reproduce all the intervals of the major and minor Zarlinian scales. From a mathematical point of view one can say that the correspondence between the Zarlinian and golden scale intervals is due to the inherent properties of palindromy of divisions in the arithmetic and harmonic means. The only difference is given by the augmented fourth, which in the case of the golden scale corresponds to the geometric mean of the range of an octave. The difficulties of dealing with this interval in musical terms - the difficulty in classifying it as either consonant or dissonant - are clearly evidenced by the name assigned to it: the Diabolus in Musica or tritone. In the golden scale of 12 notes it is the only irrational interval. This division is necessary in the construction of a chromatic scale given that otherwise there would remain too large an interval, compared to the major semitone, between the fourth and fifth.

Fig. 7 a) The simplest dynamical system: the simple pendulum, here represented by a swing forced with periodic impulses. b) Dynamics of a periodically forced swing: the regions of synchronization, known as Arnol'd tongues. Each region is born at a rational number on the horizontal axis, on which the period of the impulses that act on the swing is represented. The denominator is the number of complete oscillations of the swing that are performed during a number of impulses coincident with the numerator of this rational number. The vertical axis represents the intensity of the impulses applied. c) A cut along a horizontal line produces the so-called devil's staircase.

The importance of results that show the equivalence between the approach of proportions and the golden section is owing to the fundamental role that the latter plays, as the most irrational number, in the description of the behaviour of nonlinear dynamical systems [45]. There is a hierarchy among irrational numbers according to how difficult they are to approximate with rationals; it is in this sense that one irrational is more irrational than another, and this difficulty of approximation, which is related to the continued fraction expansion mentioned in Figure 6, is given by Hurwitz's theorem [26].

Nonlinear dynamics, resonance, and synchronization

In order to explore the connection of the golden scales with nonlinear dynamics, we introduce as an example one of the simplest dynamical systems: the forced pendulum, which is equivalent to a child's swing (Figure 7). A phenomenon familiar to everyone who has ever played on a swing is that of synchronization: for each complete oscillation of the swing it receives a single impulse, from our legs, or from the arms of a pusher. Since we have one oscillation for each impulse, the frequencies of the swing and the impulses are equal, i.e., the ratio of frequencies is 1/1 = 1.

Fig. 8 The rings of Saturn, made up of a myriad of highly-reflective blocks of ice, have resonances (ring gaps), similar to musical intervals, within which material is almost absent. The most prominent is the Cassini division, the thick dark band in this image.

The phenomenon of synchronization, first discovered and analysed by Huygens in pendulum clocks in 1665 [27], is one of the most universal examples of dynamical behaviour and can be observed at all levels of the physical world. For example, in celestial mechanics, we observe always the same face of the Moon from the Earth, as it takes the Moon exactly the same time to turn on itself as to make a revolution around the Earth. In other words, the Moon's rotational and orbital periods are synchronized. Returning to the swing, in Figure 7 we have a graph that represents all the possible dynamical behaviour of a swing when varying the intensity of the impulses (vertical axis) and their frequency (horizontal axis). There are regions of synchronization more complicated than 1/1, for example, 1/2 or 2/3, that have the shape of a tongue and which are, in fact, called Arnol'd tongues in honour of the Russian mathematician who studied them [6]. We may note that at every rational number an Arnol'd tongue is born. Such more complex synchronizations can be seen across the natural world, including in celestial mechanics [17,58]. Examples in our Solar System are Mercury, tidally locked with the Sun in a 3/2 spin-orbit resonance - a species of celestial fifth - the absence of asteroids in the Kirkwood gaps in the asteroid belt owing to resonances with Jupiter, and gaps in Saturn's rings such as the Cassini division (Figure 8), caused by resonances with moons.
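A small sketch of my own (not from the article) generates the convergents mentioned in the Figure 6 caption directly from the continued fraction [1; 1, 1, 1, ...]; as stated there, they come out as ratios of successive Fibonacci numbers.

```python
from fractions import Fraction

def golden_convergents(n):
    """First n convergents of phi = [1; 1, 1, 1, ...]: 1/1, 2/1, 3/2, 5/3, 8/5, ..."""
    convs = []
    frac = Fraction(1)
    for _ in range(n):
        convs.append(frac)
        frac = 1 + 1 / frac        # x -> 1 + 1/x deepens the continued fraction one level
    return convs

print(golden_convergents(7))       # [1, 2, 3/2, 5/3, 8/5, 13/8, 21/13]
```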
From a Pythagorean perspective Figure 7 is very interesting. Here is a simple physical system that can distinguish rational numbers, a seemingly purely mathematical concept, from all others. If one cuts the figure by a horizontal line and represents the intervals in which a given region of synchronization is stable, we get the so-called devil's staircase [10]. This staircase is made up of infinite steps: between two successive steps, there is always another. In addition, the staircase is self-similar, and fractal: if one zooms in to any piece of this staircase one sees that the part is equal to the full staircase (Figure 7c).

Fig. 9 Considering our auditory system as a general nonlinear system, and observing the analogy between the phase diagram (Figure 7) and the consonance diagram of Plomp [47], one might expect that the regions of synchronization can describe the phenomenon of musical consonance. However, a fundamental problem remains: the stability of the regions of synchronization.

The sizes of the various entrainment regions are ordered in a way related to a concept from number theory: Farey sequences. An n-Farey sequence is the increasing succession of rational numbers whose denominators are less than or equal to n. We call two rational numbers 'adjacent' if they are consecutive in the Farey sequence. A necessary and sufficient condition for p/q and r/s to be adjacent is |ps − qr| = 1. A rational number α belonging to the open interval (p/q, r/s), where p/q and r/s are adjacent, will be called the 'mediant' if there is no other rational in the interval having a smaller denominator. It is known that α = (p + r)/(q + s) and that it is unique. Observation of Fig. 7 allows us to guess that the synchronization zone characterized by the mediant of two adjacent rationals is the greatest of all the zones situated between those rationals. In addition it has, obviously, the least period. This property is generic and not specific to the case illustrated in Fig. 7. As a consequence, we have that between two solutions corresponding to successive convergents of the golden section, say n and n + 1 (which are represented by Fibonacci quotients and are also Farey adjacents), the widest region corresponds to the next convergent of the golden section, say n + 2, which coincides also with their Farey sum. This creates a hierarchy in parameter space that follows a Farey tree. This means also that, starting with the first two convergents, 1/1 and 1/2, we can obtain all the convergents of the golden section by the Farey sum operation; any convergent is the widest synchronization region between the two parent regions. Wider stability intervals mean also that such solutions are more robust to parameter perturbations and, thus, that they are more relevant for the generic modelling of the dynamical system under study. As can be seen in the diagram of Figure 9, it might appear that synchronization should explain the harmonic intervals, given also the similarity of the consonance curves with those obtained, for example, through the psychoacoustic theory of Plomp [47], which basically affirms that musical intervals seem consonant if the frequency differences between components exceed a critical bandwidth. However, a fundamental problem presents itself: that of stability. A harmonic interval implies the presence of at least two independent frequencies (those which define the interval), and synchronization (such as that of the Earth-Moon system) becomes unstable in the presence of two incommensurable frequencies.
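A minimal sketch (mine, not the authors') of the two number-theoretic facts used here: the Farey adjacency test |ps − qr| = 1 and the mediant (p + r)/(q + s) of two adjacent fractions.

```python
from fractions import Fraction

def are_farey_adjacent(pq, rs):
    """p/q and r/s are Farey neighbours iff |p*s - q*r| == 1."""
    return abs(pq.numerator * rs.denominator - pq.denominator * rs.numerator) == 1

def mediant(pq, rs):
    """The mediant (p + r)/(q + s); between Farey neighbours it is the unique
    rational with the smallest denominator strictly between them."""
    return Fraction(pq.numerator + rs.numerator, pq.denominator + rs.denominator)

half, one = Fraction(1, 2), Fraction(1, 1)
print(are_farey_adjacent(half, one))   # True, since |1*1 - 2*1| = 1
print(mediant(half, one))              # 2/3: the widest tongue between 1/2 and 1/1,
                                       # and the next convergent of 1/phi after 1/2
```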
This is a problem faced by all theories of music using the Pythagorean or Zarlinian just intervals. If an interval is represented by a rational ratio, like the fifth, a small detuning in one of the frequencies automatically breaks this rational ratio. Paradoxically, the smaller the detuning, the greater the whole numbers needed to describe the new rational ratio. It is clear that this mathematical instability does not correspond with our perception: the feeling of consonance is a maximum for the ratios defined by small numbers, for example 3/2 for the fifth, and degrades progressively when the interval is out of tune to a certain extent. This represents an essential drawback to fitting the existence of harmonic intervals and musical scales into a dynamical-systems paradigm.

Quasiperiodic forcing and three-frequency resonance

Ultimately the very existence of harmonic musical intervals in the brain may be due to the properties of our auditory system, which is a highly nonlinear dynamical system. A harmonic interval is a particular case of a chord, indeed a chord with only two notes. Although a harmonic interval is an example of a very simple chord, the above-described stability problem remains. We need at least two independent parameters for describing a harmonic interval, for example, the two fundamental frequencies of the notes composing the interval. Thus, as we have sketched in Fig. 9, we can consider our auditory system as a dynamical system forced, in this case, with two independent frequencies. Consequently, in order to tackle the problem of harmonic intervals one needs to consider a slightly more complex dynamical system than the periodically forced swing: a quasiperiodically forced swing. The system and its responses are represented in Figure 10. In this case we have regions of generalized synchronization. Within these regions a relation among three resonant frequencies is satisfied. In the case of simple synchronization there is a relationship between two resonant frequencies that can be described mathematically as

p f1 − q f2 = 0,

where p, q are integers, f1 the forcing frequency and f2 the frequency of the response. It is easy to generalize this condition to three frequencies,

p f1 + q f2 + r f3 = 0,

where p, q, r are integers, f1, f2 the forcing frequencies, and f3 the frequency of the response.

Fig. 10 The swing of Figure 7 forced quasiperiodically: the presence of two simultaneous periodic forces pushing with different frequencies destroys the stability of the regions of synchronization. With a real swing, where one generally pushes once per period of the swing, this setup would not be so easy to carry out for an arbitrary forcing frequency. Dynamics of a quasiperiodically forced swing. (a) Three-frequency devil's staircase for forcing frequencies f1 = 1, f2 = 12/7. The generalized regions of synchronization represent resonances at three frequencies. (b) Devil's ramps: the global organization of three-frequency resonances as a function of the external frequency ratio and the intrinsic frequency [12].

As is the case for simple synchronization, a section of the phase portrait, shown in Figure 10a, shows the existence of a three-frequency devil's staircase, shown in Figure 10b. The important aspect to note about these three-frequency resonances is that they too are organized by means of hierarchical rules dictated by number theory, i.e., in a Pythagorean fashion [12].
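To make the resonance condition concrete, here is a small search sketch of my own (not code from the article): it looks for small integer triples (p, q, r) with p f1 + q f2 + r f3 approximately zero for given frequencies.

```python
from itertools import product

def find_three_freq_resonances(f1, f2, f3, nmax=8, tol=1e-9):
    """Return small-integer triples (p, q, r), not all zero, with
    p*f1 + q*f2 + r*f3 approximately equal to zero."""
    hits = []
    for p, q, r in product(range(-nmax, nmax + 1), repeat=3):
        if (p, q, r) == (0, 0, 0):
            continue
        if abs(p * f1 + q * f2 + r * f3) < tol:
            hits.append((p, q, r))
    return hits

# Example: a response locked at f3 = (2*f1 + f2)/3 is a three-frequency resonance.
f1, f2 = 1.0, 12 / 7
f3 = (2 * f1 + f2) / 3
print(find_three_freq_resonances(f1, f2, f3)[:4])   # e.g. (-2, -1, 3) and its multiples
```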
Dynamical systems in man and the heavens: The examples of the auditory system (the microcosmos) and celestial mechanics (the macrocosmos)

Resonances of three frequencies are not simply a mathematical curiosity but rather are a feature found at different levels of natural phenomena. In the Solar System, the behaviour of some bodies in the Kuiper belt and in Saturn's rings is described by three-frequency resonances [43]. In addition, for music the organization of these resonances corresponds with the perception of the missing fundamental and accurately describes the phenomenon of pitch shift, as can be seen in Figure 11 [11,13].

Fig. 11 The solid lines represent the theoretical solutions provided by three-frequency resonances to the key psychophysical phenomenon of pitch shift of the missing fundamental presented in Figure 12. The various symbols represent the results of psychoacoustic experiments with three different human subjects. As can be seen in the figure, the agreement between theory and experiment is rather good [11].

Suppose that a periodic signal is presented to the ear. The pitch of the signal can be quantitatively well described by the frequency of the fundamental, say ω0; see Fig. 12(a). The number of harmonics and their relative amplitudes gives the timbral characteristics to the sound. Now suppose that the fundamental and some of the first few higher harmonics are removed, Fig. 12(b). Although the timbral sensation changes, the pitch of the complex remains unchanged and equal to the missing fundamental. This is termed residue perception. Moreover, psychophysical experiments can be done by shifting the remaining partials, as shown in Fig. 12(c), whereon the perceived pitch also shifts in accordance with three-frequency resonances, as sketched in Fig. 12(d), and shown in psychoacoustic experiments in Figure 11. Thus, the perception of the pitch of complex sounds, and in particular the key phenomenon of perception of the missing fundamental, or residue perception, can be described by means of the dynamical properties of our auditory system [11,13]. This represents a key step in the development of a nonlinear approach for musical perception. It has also been suggested that the residue is one of the fundamental phenomena of musical perception and the basis of the consonant structure of musical intervals [57,53]. Historically, the theories of Rameau of the fundamental sound [51] and that of Tartini based on the third sound [56] represent anticipations of this perceptual and nonlinear origin of musical harmony. In the case of chords, the residue can perhaps be identified with the fundamental sound of Rameau, "always the lowest and deepest part", and, therefore, becomes the basis of the development of harmonious music.

Fig. 12 The phenomenon of the missing fundamental or residue: Fourier spectra and pitches of complex tones. Whereas pure tones have a sinusoidal waveform corresponding to a single frequency, almost all musical sounds are complex tones that consist of a lowest frequency component, or fundamental, together with higher frequency overtones. The fundamental plus overtones are together collectively called partials. a) A harmonic complex tone. The overtones are successive integer multiples k = 2, 3, 4 ... of the fundamental ω0 that determines the pitch. The partials of a harmonic complex tone are termed harmonics. b) Another harmonic complex tone. The fundamental and the first few higher harmonics have been removed. The pitch remains the same and equal to the missing fundamental. This pitch is known as virtual or residue pitch. c) An anharmonic complex tone. The partials, which are no longer harmonics, are obtained by a uniform shift ∆ω of the previous harmonic case (shown dashed). Although the difference combination tones between successive partials, ωC = ω2 − ω1, remain unchanged and equal to the missing fundamental, the pitch shifts by a quantity ∆P that depends linearly on ∆ω. d) Pitch shift. Pitch as a function of the central frequency f = (k + 1)ω0 + ∆ω of a three-component complex tone {kω0 + ∆ω, (k + 1)ω0 + ∆ω, (k + 2)ω0 + ∆ω}. The pitch-shift effect is shown here for k = 6, 7, and 8. Three-component complex tones are often used in pitch experiments because they elicit a clear residue sensation and can easily be obtained by amplitude modulation of a pure tone of frequency f with another pure tone of frequency ω0. When ω0 and f are rationally related, ∆ω = 0, and the three frequencies are successive multiples of some missing fundamental. At this point ∆P = 0, and the pitch is ω0, coincident with the frequency of the missing fundamental [11].

As Rameau put it, the harmony of these consonances can be perfect only if the first sound is found below them, serving as their base and fundamental [51]. Moreover, we may quote from the composer and violinist Tartini's description of the third sound: Let the following intervals be played perfectly by a violinist simultaneously with a strong, sustained bow. A third sound will be heard ... The same will happen if the presented intervals are played by two violin players five or six steps apart, each playing his note at the same time, and always with a strong, sustained bow. A listener in the middle of the two players will hear this third sound much more ... [56] We think that Tartini is describing residue perception and not simply a combination tone here. In the second experimental situation described, the intensity in one ear due to the opposite violin should be - owing to the auditory shadow of the head - very low, and thus not able to produce a detectable combination tone. It is very probable that he is describing the perception of the residue in a dichotic situation, that of a different stimulus in each ear. In this way, the rules of musical composition advanced by Rameau and Tartini find a physical rationale in the theory of dynamical systems.

These results suggest that there is something fundamental in harmonic musical intervals and bring us back to the original musical problem treated by the Pythagoreans: what is the origin of these numerical relations that describe so precisely the basic elements of the aesthetics of auditory perception? Historical and archaeological research likewise leads one to consider that the existence of privileged intervals may be inherent to the physiology of the hearing system rather than being due to culture. For example, analysis of a bone flute found in a Neanderthal site in Slovenia [4,18] and other, still functional, prehistoric flutes from China [64] indicate that these instruments already produced consonant musical intervals. These findings are notable because not only do they highlight that musical behaviour may not be exclusive to Homo sapiens, and thus to human culture, but also that harmonic musical intervals are a phenomenon existing at the base of music and therefore may reflect the existence of universal physiological mechanisms in the auditory system of mammals and even perhaps of all life.
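As a purely illustrative aid (mine, not the authors'), the three-component complexes described in the Figure 12 caption can be written down directly; the snippet below builds the partials {kω0 + ∆ω, (k + 1)ω0 + ∆ω, (k + 2)ω0 + ∆ω} and checks that the difference combination tone between successive partials stays equal to ω0 even when the complex is made anharmonic.

```python
def three_component_complex(omega0, k, delta):
    """Partials {k*w0 + dw, (k+1)*w0 + dw, (k+2)*w0 + dw} used in pitch-shift experiments."""
    return [k * omega0 + delta, (k + 1) * omega0 + delta, (k + 2) * omega0 + delta]

omega0, k, delta = 100.0, 7, 30.0     # Hz; a nonzero delta makes the tone anharmonic
partials = three_component_complex(omega0, k, delta)
diffs = [b - a for a, b in zip(partials, partials[1:])]
print(partials)   # [730.0, 830.0, 930.0]
print(diffs)      # [100.0, 100.0]: the difference combination tone stays at omega0,
                  # while the perceived residue pitch is shifted away from omega0 [11]
```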
Moreover, such a foundation prompts a revisiting of the Pythagorean conception of music as a paradigm of cosmological order and the role of numbers in such a pattern. In this paradigm, there are of course many other possible dynamical responses; for example, very complex or erratic behaviour, similar to what is observed in random or stochastic processes. That phenomenon is known as deterministic chaos and was first identified by the meteorologist Edward Lorenz in a highly simplified climate model [36] that he analysed with one of the first electronic computers. An example of this behaviour in the Solar System is provided by the moon of Saturn, Hyperion, which rotates in just such a chaotic way [61]. Of course this deterministic chaos is not the same thing as Pythagorean chaos; yet the modern term is a deliberate echo of the ancient Greeks [35]. The widespread presence of synchronization and generalized synchronization, as well as chaos, in planetary motions vindicates a key Pythagorean explanation of cosmological order.

Fig. 13 All is number, numeri regunt mundum, was the Pythagorean ethos. The cover of the Voyager golden records encodes the information about where Earth is in the universe, and about how to play the record and extract the information from it, using numbers [54,46]. Image: NASA/JPL.

Musica Universalis?

The only manmade object so far to leave our Solar System and enter interstellar space is Voyager 1, launched in 1977 and now navigating beyond the heliosphere, over 140 times further from the Sun than we are on Earth. Aboard both Voyager 1 and its twin, Voyager 2, which is nearly as far away from us, but headed in a completely different direction in space, there is music. The fortuitously but aptly named Golden Record (Fig. 13) - the designation comes from the record's material composition; it is gold-plated copper - carried by the two Voyager spacecraft was put together by a group of people led by Carl Sagan to be sent into space like a message in a bottle. As Sagan wrote [54]: I was delighted with the suggestion of sending a record ... we could send music. ... Perhaps a sufficiently advanced civilization would have made an inventory of the music of species on many planets and, by comparing our music with such a library, might be able to deduce a great deal about us. ... Because of the relation between music and mathematics, and the anticipated universality of mathematics, it may be that much more than our emotions are conveyed by the musical offering on the Voyager record. Sagan took some of these ideas from the work of von Hoerner [59], who concluded that: Some other civilisations in space may have no music at all, for various biological or mental reasons. Some others may have vastly different things which they call music but which are incomprehensible to us, for similar reasons. But it seems that some of our basic musical principles are universal enough to be expected at a good fraction of other places, too: a chromatic scale of exactly 5, 12, or 31 equal parts, from which rather arbitrary numbers and sequences can be selected for melodic scales. It has long been an idea that music may be a universal language, and the logical extension to that idea is that music is likely to be a common channel of communication with alien intelligences.
Huygens [29] wrote on music and musical scales in extraterrestrial civilizations in his book Cosmotheoros, from which we think it worthwhile to quote his ideas at some length: It's the same with Musick as with Geometry, it's every where immutably the same, and always will be so. For all Harmony consists in Concord, and Concord is all the World over fixt according to the same invariable measure and proportion. So that in all Nations the difference and distance of Notes is the same, whether they be in a continued gradual progression, or the voice makes skips over one to the next. Nay very credible Authors report, that there's a sort of Bird in America, that can plainly sing in order six musical Notes: whence it follows that the Laws of Musick are unchangeably fix'd by Nature, and therefore the same Reason holds valid for their Musick, as we e'en now proposed for their Geometry. For why, supposing other Nations and Creatures, endued with Reason and Sense as well as we, should not they reap the Pleasures arising from these Senses as well as we too? I don't know what effect this Argument, from the immutable nature of these Arts, may have upon the Minds of others; I think it no inconsiderable or contemptible one, but of as great Strength as that which I made use of above to prove that the Planetarians had the sense of Seeing. But if they take delight in Harmony, 'tis twenty to one but that they have invented musical Instruments. For, if nothing else, they could scarce help lighting upon some or other by chance; the sound of a tight String, the noise of the Winds, or the whistling of Reeds, might have given them the hint. From these small beginnings they perhaps, as well as we, have advanced by degrees to the use of the Lute, Harp, Flute, and many string'd Instruments. But altho the Tones are certain and determinate, yet we find among different Nations a quite different manner and rule for Singing; as formerly among the Dorians, Phrygians, and Lydians, and in our time among the French, Italians, and Persians. In like manner it may so happen, that the Musick of the Inhabitants of the Planets may widely differ from all these, and yet be very good. But why we should look upon their Musick to be worse than ours, there's no reason can be given; neither can we well presume that they want the use of half-notes and quarter-notes, seeing the invention of half-notes is so obvious, and the use of 'em so agreeable to nature. Nay, to go a step farther, what if they should excel us in the Theory and practick part of Musick, and outdo us in Consorts of vocal and instrumental Musick, so artificially compos'd, that they shew their Skill by the mixtures of Discords and Concords? and of this last sort 'tis very likely the 5th and 3d in use with them. This is a very bold Assertion, but it may be true for ought we know, and the Inhabitants of the Planets may possibly have a greater insight into the Theory of Musick than has yet bin discover'd amongst us. For just such reasons Sagan and his collaborators sent music to whoever in the future might recover the wandering spacecraft. They sent Bach, Beethoven, and Mozart, but also music from around the world [46]. 
One piece that they included, prefacing a selection of the natural and human sounds of Earth and commissioned just for the Golden Record, explicitly encodes Pythagorean ideas: the giddy whirl of tones reflecting the motions of the Sun's planets in their orbits -a musical readout of Johannes Kepler's Harmonica Mundi, the sixteenth-century mathematical tract whose echoes may still be found in the formulas that make Voyager possible. Kepler's concept was realized on a computer at Bell Telephone Laboratories by composer Laurie Spiegel in collaboration with Yale professors John Rogers and Willie Ruff. Each frequency represents a planet; the highest pitch represents the motion of Mercury around the Sun as seen from Earth; the lowest frequency represents Jupiter's orbital motion. Inner planets circle the Sun more swiftly than the outer planets. The particular segment that appears on the record corresponds to very roughly a century of planetary motion. Kepler was enamored of a literal "music of the spheres," and I think he would have loved their haunting representation here. [54] We have argued here that Pythagorean ideas of a Musica Universalis are correct owing to the universality of resonances of nonlinear dynamical systems in the cosmos. If that conception is true then this Keplerian piece may act for its discoverers as a Rosetta Stone that unlocks for them the rest of Earth music. Coda We have presented here our vision of how current-day research in dynamical systems is a continuation of a millenary tradition linking mathematics, celestial mechanics, and music stretching back to Pythagoras. We continue to work on aspects of dynamical systems applied to music perception. The nervous system is of great complexity, both functional and structural, and in particular our auditory system is wired in an extraordinarily precise and complex way. We are currently working on models that take into account the physiology of the auditory system in order to explain some aspects of musical perception that may reflect the existence of an underlying dynamics. After applying the dynamics of three-frequency resonances to musical pitch and the theory of residue perception, we now seek a nonlinear theory for musical consonance based on the fundamental role of nonlinear dynamics in determining the physiological and psychoacoustic responses of the auditory system to stimulation by complex sounds.
Topological edge properties of C60+12n fullerenes

Summary

A molecular graph M is a simple graph in which atoms and chemical bonds are the vertices and edges of M, respectively. The molecular graph M is called a fullerene graph if M is the molecular graph of a fullerene molecule. It is well-known that fullerene molecules with n carbon atoms exist for n = 20 and for all even integers n ≥ 24. The aim of this paper is to investigate the topological properties of a class of fullerene molecules containing 60 + 12n carbon atoms.

Introduction

Throughout this paper the term "graph" refers to a finite and simple graph. The set of vertices and edges of a graph G are denoted by V(G) and E(G), respectively. Molecular graphs are graphs with vertices representing the atoms and edges representing the bonds. A bi-connected graph is a connected graph that remains connected after the removal of any single vertex. A graph in which all vertices have degree three is called a cubic graph. A fullerene graph is a cubic bi-connected planar graph whose faces are pentagons and hexagons. From Euler's theorem, one can easily see that such a graph on n vertices has exactly 12 pentagonal and (n/2 − 10) hexagonal faces, where n ≥ 20. It is not so difficult to prove that there is no fullerene with exactly 22 carbon atoms. After the discovery of buckminsterfullerene C60 by Kroto and Smalley in 1985 [1,2], some mathematicians spent their time looking at the mathematical properties of these new materials. We refer to [3] for more information on the mathematical properties of fullerene graphs. Suppose G is a graph. A mapping f: G → G is an automorphism if and only if (i) f is one-to-one and (ii) f and its inverse preserve adjacency in G. A property P of G is called a topological property if P is preserved under each automorphism of G. A topological index is a number describing a topological property; it should be applicable in chemistry. The length of a shortest path connecting vertices u and v is called the topological distance between u and v, denoted by d(u,v). A topological index is considered to be distance-based if it can be defined in terms of the distance function d. We refer to [4-6] and references therein for more information on these graph invariants. A modification of the Szeged index was proposed by Milan Randić in [7]. Some mathematical properties of this topological index are investigated in [8,9]. One of the present authors (ARA) [10] proposed an edge version of the revised Szeged index, in which, for an edge e = uv, m_u(e) and m_v(e) denote the numbers of edges closer to u and to v, respectively, and m_0(e) denotes the number of edges equidistant from u and v. In [11], some mathematical properties of this new graph invariant have been investigated. The aim of this paper is to compute the PI, edge Szeged and edge revised Szeged indices of an infinite class F_n of fullerenes with exactly 60 + 12n carbon atoms (Figure 1). We encourage the interested readers to consult [12-14] for some extraordinary works on this topic and [15-19] for background materials and basic computational techniques. Our calculations are done with the aid of HyperChem [20], TopoCluj [21] and GAP [22]. Our notation follows the standard books on graph theory.

Results and Discussion

Khadikar and co-authors [4] were the first scientists to consider the topological edge properties of molecules. In this section, we will compute the PI, edge Szeged and edge revised Szeged indices of F_n. We can associate a 0-1 matrix A = [a_ij] to F_n. The entry a_ij is unity if and only if the vertices i and j are adjacent in F_n.
Since F_n is a cubic cage graph, the number of ones in each row of A is equal to 3. The distance matrix D = [d_ij] is another matrix of the same dimensions associated with F_n. Here, d_ij is the length of a minimal path connecting i and j, for i ≠ j, and zero otherwise. Our algorithm for computing the PI, edge Szeged and edge revised Szeged indices of the fullerene graph F_n is as follows. We first draw the fullerene with HyperChem. Then we upload the hin file of the fullerene into TopoCluj. Having computed the adjacency and distance matrices of F_n with TopoCluj, we calculate the PI, edge Szeged and edge revised Szeged indices of our fullerene graph with some GAP programs. These computer programs are accessible from the authors upon request. In Figure 1 and Figure 2, the 2D and 3D perceptions of F_n are depicted. We apply the method described above for some small values of n. Using our programs, we obtain seven exceptional cases, those of n = 1 to 7. In Table 1, the quantities m_u(e), m_v(e) and m_0(e) = 90 + 18n − m_u(e) − m_v(e) for these exceptional cases are recorded. We notice that there are two cases, namely when n is odd or even. If n is even then we have 12 different types of edges (Figure 3), and if n is odd then there are 13 different types of edges (Figure 4). In Table 2, the PI, edge Szeged and edge revised Szeged indices for the exceptional cases 1 ≤ n ≤ 7 are recorded. In Table 3 and Table 4, the quantities m_u(e), m_v(e) and m_0(e) are calculated. These tables, together with a case-by-case investigation of the molecular graph of F_n, led to the following observation: the PI, edge Szeged and edge revised Szeged indices of C_{60+12n} fullerenes can be computed by closed formulae in n (a polynomial of degree 2 for the edge PI index, and polynomials of degree 3 for the edge Szeged and edge revised Szeged indices; see the Conclusion). It is possible to find a proof for this observation by a tedious calculation on the molecular graph of F_n.

Conclusion

In this paper a computational method for computing the PI, edge Szeged and edge revised Szeged indices of fullerene graphs is presented. In [18,19], the authors considered topological properties of fullerenes given by vertex contributions of their molecular graphs. In this work, the topological properties of a class of fullerenes were given by edge contributions of its molecular graph. Our calculations with this and other classes of fullerenes suggest that the edge PI index can be computed by a polynomial of degree 2, whereas the edge Szeged and edge revised Szeged indices are computed by polynomials of degree 3. It is clear that we cannot characterize fullerenes by one topological index, but we can think about the possibility of characterizing these molecular graphs by a finite set, Ω, of topological indices. We guess that Ω contains at least two topological indices A and B, such that A and B can be computed by edge and vertex contributions, respectively.
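The GAP programs used by the authors are not reproduced here (they are stated to be available on request). As a purely illustrative stand-alone sketch, the edge quantities recorded in Table 1 can be computed from breadth-first-search distances; the convention that the distance from an edge to a vertex is the smaller of its two endpoint distances is an assumption of this sketch.

```python
from collections import deque

def bfs_dist(adj, src):
    """Single-source shortest-path distances in an unweighted graph (dict of adjacency lists)."""
    dist = {src: 0}
    q = deque([src])
    while q:
        x = q.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                q.append(y)
    return dist

def edge_counts(adj):
    """For every edge e = uv return (m_u, m_v, m_0): edges closer to u, closer to v,
    and equidistant, measuring an edge's distance to a vertex as the smaller of its
    two endpoint distances."""
    edges = {tuple(sorted((x, y))) for x in adj for y in adj[x]}
    result = {}
    for u, v in edges:
        du, dv = bfs_dist(adj, u), bfs_dist(adj, v)
        mu = mv = m0 = 0
        for a, b in edges:
            eu, ev = min(du[a], du[b]), min(dv[a], dv[b])
            if eu < ev:
                mu += 1
            elif ev < eu:
                mv += 1
            else:
                m0 += 1
        result[(u, v)] = (mu, mv, m0)
    return result

# Tiny sanity check on the cube graph Q3, which is cubic like a fullerene graph.
cube = {0: [1, 3, 4], 1: [0, 2, 5], 2: [1, 3, 6], 3: [0, 2, 7],
        4: [0, 5, 7], 5: [1, 4, 6], 6: [2, 5, 7], 7: [3, 4, 6]}
print(edge_counts(cube)[(0, 1)])   # (4, 4, 4): the 12 edges split evenly by symmetry
```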
Optimal Search and Rescue Route Design Using an Improved Ant Colony Optimization

This paper presents a search and rescue route design algorithm to improve the efficiency of maritime search and rescue. The algorithm is based on the basic Ant Colony Algorithm. To address the problem that the Ant Colony Algorithm easily falls into local optimal solutions during the search, the pheromone concentration updating strategy of the original Ant Colony Algorithm is improved. Compared with the original algorithm, the probability of finding the optimal solution can be improved by up to about 30%. A path weight based on the time of falling into the water is introduced to make the algorithm more realistic. The simulation results show that the improved algorithm can improve the search efficiency and speed and, owing to the introduction of the weight, can be combined with the actual situation to obtain a better route.

Introduction

In recent years, the number and scale of marine activities have been increasing, and so has the probability of marine accidents. Maritime search and rescue, as the safeguard of everyone at sea, therefore occupies an important position. The route design of vessels is an essential activity in the process of search and rescue [2]. Designing a suitable route can shorten the search and rescue time effectively and improve the search and rescue efficiency. In addition, due to the complex maritime environment, the success of the search and rescue depends on many factors. For instance, the position of each search and rescue target will drift under the influence of wind and waves, which is a great disadvantage for the search. In addition, since the survival rate is greatly improved if a drowning person is rescued within 24 hours of falling into the water, it is necessary to try to ensure that the rescue time for each person is not too long. Most route planning algorithms are usually based on ideal conditions. Therefore, a more suitable algorithm is needed for the design of search and rescue routes.

At present, some research on route design has been put forward. For instance, a new multi-criteria ACO-based algorithm was proposed to solve a path planning problem for ships in an environment with static and dynamic obstacles [9]. Meanwhile, to save searching time, a dynamic optimizing ship routing algorithm was successfully applied to dynamic ship routing in complex navigation conditions [6]. Moreover, a path-planning algorithm based on an evolutionary algorithm was proposed for navigation situations [17,18]. Furthermore, an automatic method using a simulated annealing algorithm was developed to improve searching efficiency [8]. There are many algorithms, such as Dijkstra, A* and so on, to calculate the optimal path. Due to the complexity of the problem, many excellent algorithms, such as Ant Colony Optimization (ACO), the Dragonfly Algorithm and the Polar Bear Algorithm, have appeared in recent years [11,14,16]. Among them, the Ant Colony Algorithm has been widely applied in various fields of life due to its positive feedback, strong robustness and easy integration with other algorithms [3,4]. Inspired by the behavior of ants in finding food, the Italian scholars M. Dorigo, V. Maniezzo and A. Colorni introduced Ant Colony Optimization in the early 1990s. Compared with other heuristic algorithms, the inspiration of this algorithm is unique. The original version of the algorithm is suited to problems of finding optimal solutions.
However, with a lot of modification in this algorithms these days that make it capable of solving a wide range of problems [15]. The algorithm has been applied in the field of analyze 2D input images [12], path planning for mobile robot [1,10,23], plan the 3D measuring path for coordinate measuring machines [5] and multi-path routing in LEO satellite networks [7]. The increasing demand of calculation methods for solving optimization problems causes that parallelization and modification of the existing algorithms are necessary. Because the Ant Colony Algorithm is universal and easy to be integrated with other calcu-lation methods, there are several ways to optimize it. The optimization of the algorithm mainly includes time minimization, search efficiency improvement and parameter improvement [13]. This paper uses an improved ACO to design search and rescue routes. By improving the strategy of pheromone concentration updating in original ACO, the algorithm is more likely to found the optimal solution. This alleviates the defect that the Ant Colony Algorithm is easy to fall into the local optimum effectively. According to the actual situation of maritime search and rescue, the path weight based on the time of falling into the water is introduced into the algorithm. This makes the algorithm take the falling time of each target into consideration in the calculation process, so as to make the calculation of the optimal route more reasonable. The algorithm can timely adjust the search and rescue route with the increase of the falling time, this can ensure the timeliness of the rescue. Experimental results show that the improved algorithm can choose a path more effectively. Combining with the actual situation, the algorithm is introduced with the relevant weight to make the algorithm more meaningful in practical application rescue ships well. Ant Colony Optimization Ant Colony Optimization is a bionic optimization search algorithm, and has obvious advantages to solve some complex problems. It develops based on the behavior of ants in finding their short path to the food. During the process of searching food, each ant releases an amount of pheromone on the path where it passed. Meanwhile the ants always tend to travel towards the trail where the pheromone concentration is high. Therefore, the algorithm shows positive feedback. The higher the pheromone concentration on the path, the more ants will chose it. Eventually, the ants find the shortest path to the food in this way [20,22,24]. In the original Ant Colony Algorithm, the ant k will choose its next grid according to where P ij k (t) is the probability of ant k moving from grid node i to node j at time t; τ ij (t) is the intensity of the pheromone between grid cubes node i and j; η ij (t) represents the heuristic function between grid cubes i and j (In this paper, η ij (t) = 1/d ij , dij is the distance between satellite i P The som as t the opt Opt phe (1) Information Technology and Control 2020/3/49 440 where P ij k (t) is the probability of ant k moving from grid node i to node j at time t; τ ij (t) is the intensity of the pheromone between grid cubes node i and j; η ij (t) represents the heuristic function between grid cubes i and j (In this paper, η ij (t)=1/d ij , d ij is the distance between satellite i and satellite j); α and β are weighting parameters that show the relative influence of the pheromone and distance. 
When all the ants complete the travelling, the pheromone levels on each arc will be updated by volatilizing the old pheromone and adding the pheromones deposited by each ant. The pheromone updating formula is as follows: where P ij k (t) is the probability of ant k moving from grid node i to node j at time t; τ ij (t) is the intensity of the pheromone between grid cubes node i and j; η ij (t) represents the heuristic function between grid cubes i and j (In this paper, η ij (t) = 1/d ij , dij is the distance between satellite i and satellite j); α and β are weighting parameters that show the relative influence of the pheromone and distance. When all the ants complete the travelling, the pheromone levels on each arc will be updated by volatilizing the old pheromone and adding the pheromones deposited by each ant. The pheromone updating formula is as follows: where ρ is pheromone evaporation rate and (1-ρ) represents the pheromone residual rate, 0<ρ<1; ∆τ ij k (t) represents the amount of pheromone left by the ant k at the current iteration. There are three different algorithmic models for ∆τ ij k (t): Ant-Quantity, Ant-Cycle, Ant-Density. They were all proposed by M. Dorigo. Ant-Cycle is often used because of its good performance which can be expressed as follows where Q is a constant and L k represents the length of the path covered by the ant k. 3.Improved Ant Colony Optimization Ant Colony Algorithm shows good performance in solving shortest path, but it also has some shortcomings. In this paper, the Ant Colony Algorithm is optimized by improving the pheromone concentration update strategy, and path weights are introduced to calculate a reasonable route for search and rescue vessels. Improvement of Pheromone Update Rules The primary Ant Colony Algorithm still has some disadvantages during searching, such as the searching time is excessively long and the probabilistic selection may fall into local optimum. The improved Ant Colony Optimization can improve the local pheromone update rule, which makes the result more likely to achieve an optimal solution. The improved Ant Colony Optimization has introduced adaptive dynamic factors σ into pheromone update strategy, which makes the pheromone concentration reflect the path information better. The adaptive dynamic factor σ can make the pheromone concentration get a larger addition in better path by control the updating proportion adaptively of the optimal pheromone concentration in an iteration. The improved pheromone updating formula can be shown as: where σ represents the adaptive dynamic factors and b represents the coefficient of the d adaptive dynamic factors; L min is the shortest length at the current iteration; L � is the average of length for all ants at the current iteration, μ is the coefficient of σ . The value of μ depends on L K . The closer the value of L K is to the value of L min , the larger the value of μ. The adaptive dynamic factor σ above is the inverse tangent function . The output curve of the function is smooth, and the value of the function is in the range (0,1). 
Based on the Equation (8), a conclusion can be drawn that where P ij k (t) is the probability of ant k moving from grid node i to node j at time t; τ ij (t) is the intensity of the pheromone between grid cubes node i and j; η ij (t) represents the heuristic function between grid cubes i and j (In this paper, η ij (t) = 1/d ij , dij is the distance between satellite i and satellite j); α and β are weighting parameters that show the relative influence of the pheromone and distance. When all the ants complete the travelling, the pheromone levels on each arc will be updated by volatilizing the old pheromone and adding the pheromones deposited by each ant. The pheromone updating formula is as follows: where ρ is pheromone evaporation rate and (1-ρ) represents the pheromone residual rate, 0<ρ<1; ∆τ ij k (t) represents the amount of pheromone left by the ant k at the current iteration. There are three different algorithmic models for ∆τ ij k (t): Ant-Quantity, Ant-Cycle, Ant-Density. They were all proposed by M. Dorigo. Ant-Cycle is often used because of its good performance which can be expressed as follows where Q is a constant and L k represents the length of the path covered by the ant k. 3.Improved Ant Colony Optimization Ant Colony Algorithm shows good performance in solving shortest path, but it also has some shortcomings. In this paper, the Ant Colony Algorithm is optimized by improving the pheromone concentration update strategy, and path weights are introduced to calculate a reasonable route for search and rescue vessels. Improvement of Pheromone Update Rules The primary Ant Colony Algorithm still has some disadvantages during searching, such as the searching time is excessively long and the probabilistic selection may fall into local optimum. The improved Ant Colony Optimization can improve the local pheromone update rule, which makes the result more likely to achieve an optimal solution. The improved Ant Colony Optimization has introduced adaptive dynamic factors σ into pheromone update strategy, which makes the pheromone concentration reflect the path information better. The adaptive dynamic factor σ can make the pheromone concentration get a larger addition in better path by control the updating proportion adaptively of the optimal pheromone concentration in an iteration. The improved pheromone updating formula can be shown as: where σ represents the adaptive dynamic factors and b represents the coefficient of the d adaptive dynamic factors; L min is the shortest length at the current iteration; L � is the average of length for all ants at the current iteration, μ is the coefficient of σ . The value of μ depends on L K . The closer the value of L K is to the value of L min , the larger the value of μ. The adaptive dynamic factor σ above is the inverse tangent function where ρ is pheromone evaporation rate and (1-ρ) represents the pheromone residual rate, 0<ρ<1; ∆τ ij k (t) represents the amount of pheromone left by the ant k at the current iteration. There are three different algorithmic models for ∆τ ij k (t): Ant-Quantity, Ant-Cycle, Ant-Density. They were all proposed by M. Dorigo. 
Ant-Cycle is often used because of its good performance which can be expressed as follows where P ij k (t) is the probability of ant k moving from grid node i to node j at time t; τ ij (t) is the intensity of the pheromone between grid cubes node i and j; η ij (t) represents the heuristic function between grid cubes i and j (In this paper, η ij (t) = 1/d ij , dij is the distance between satellite i and satellite j); α and β are weighting parameters that show the relative influence of the pheromone and distance. When all the ants complete the travelling, the pheromone levels on each arc will be updated by volatilizing the old pheromone and adding the pheromones deposited by each ant. The pheromone updating formula is as follows: where ρ is pheromone evaporation rate and (1-ρ) represents the pheromone residual rate, 0<ρ<1; ∆τ ij k (t) represents the amount of pheromone left by the ant k at the current iteration. There are three different algorithmic models for ∆τ ij k (t): Ant-Quantity, Ant-Cycle, Ant-Density. They were all proposed by M. Dorigo. Ant-Cycle is often used because of its good performance which can be expressed as follows where Q is a constant and L k represents the length of the path covered by the ant k. 3.Improved Ant Colony Optimization Ant Colony Algorithm shows good performance in solving shortest path, but it also has some shortcomings. In this paper, the Ant Colony Algorithm is optimized by improving the pheromone concentration update strategy, and path weights are introduced to calculate a reasonable route for search and rescue vessels. Improvement of Pheromone Update Rules The primary Ant Colony Algorithm still has some disadvantages during searching, such as the searching time is excessively long and the probabilistic selection may fall into local optimum. The improved Ant Colony Optimization can improve the local pheromone update rule, which makes the result more likely to achieve an optimal solution. The improved Ant Colony Optimization has introduced adaptive dynamic factors σ into pheromone update strategy, which makes the pheromone concentration reflect the path information better. The adaptive dynamic factor σ can make the pheromone concentration get a larger addition in better path by control the updating proportion adaptively of the optimal pheromone concentration in an iteration. The improved pheromone updating formula can be shown as: where σ represents the adaptive dynamic factors and b represents the coefficient of the d adaptive dynamic factors; L min is the shortest length at the current iteration; L � is the average of length for all ants at the current iteration, μ is the coefficient of σ . The value of μ depends on L K . The closer the value of L K is to the value of L min , the larger the value of μ. The adaptive dynamic factor σ above is the inverse tangent function . The output curve of the function is smooth, and the value of the function is in the range (0,1). Based on the Equation (8), a conclusion can be drawn that (4) where Q is a constant and L k represents the length of the path covered by the ant k. 3.Improved Ant Colony Optimization Ant Colony Algorithm shows good performance in solving shortest path, but it also has some shortcomings. In this paper, the Ant Colony Algorithm is optimized by improving the pheromone concentration update strategy, and path weights are introduced to calculate a reasonable route for search and rescue vessels. 
3. Improved Ant Colony Optimization

The Ant Colony Algorithm shows good performance in solving the shortest-path problem, but it also has some shortcomings. In this paper, the Ant Colony Algorithm is optimized by improving the pheromone concentration update strategy, and path weights are introduced to calculate a reasonable route for search and rescue vessels.

Improvement of Pheromone Update Rules

The primary Ant Colony Algorithm still has some disadvantages during searching: the searching time is excessively long, and the probabilistic selection may fall into a local optimum. The improved Ant Colony Optimization modifies the local pheromone update rule, which makes the result more likely to reach an optimal solution. The improved Ant Colony Optimization introduces an adaptive dynamic factor σ into the pheromone update strategy, which makes the pheromone concentration reflect the path information better. The adaptive dynamic factor σ lets the pheromone concentration receive a larger addition on better paths by adaptively controlling the updating proportion of the optimal pheromone concentration within an iteration. The improved pheromone updating formula is given by Equations (5)-(7), where σ represents the adaptive dynamic factor and b represents the coefficient of the adaptive dynamic factor; L_min is the shortest length at the current iteration; L̄ is the average length over all ants at the current iteration; and μ is the coefficient of σ. The value of μ depends on L_K: the closer the value of L_K is to the value of L_min, the larger the value of μ. The adaptive dynamic factor σ above is the inverse tangent function given in Equation (8). The output curve of the function is smooth, and the value of the function lies in the range (0, 1).
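Equations (5)-(8) are referenced above but not reproduced in this text, so the sketch below should be read as one plausible interpretation rather than the paper's exact formulas: an arctan-based factor σ in (0, 1) that is large for short tours and small for long ones, and a deposit that boosts the basic Ant-Cycle term for ants whose tour length is close to the iteration best. The concrete forms of σ, μ and the boosted deposit are assumptions.

import numpy as np

def dynamic_factor(L_k, L_avg, lam=2.0):
    # Adaptive dynamic factor sigma in (0, 1): close to 1 when the ant's tour
    # length L_k is well below the iteration average L_avg, close to 0 when it
    # is well above it.  The arctan shape and the role of lambda follow the
    # description in the text; the exact expression of Eq. (8) is assumed.
    return 0.5 + np.arctan(lam * (L_avg - L_k) / L_avg) / np.pi

def improved_deposit(L_k, L_min, L_avg, Q=100.0, b=1.0, lam=2.0):
    # Assumed form of the improved increment (Eqs. (5)-(7)): the Ant-Cycle term
    # Q / L_k is boosted by b * mu * sigma, where mu grows as L_k approaches
    # the iteration-best length L_min.
    sigma = dynamic_factor(L_k, L_avg, lam)
    mu = L_min / L_k
    return (Q / L_k) * (1.0 + b * mu * sigma)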
Based on Equation (8), a conclusion can be drawn: when L_K is larger and the search path is longer, the dynamic factor σ is closer to 0; on the contrary, when L_K is smaller and the search path is shorter, the dynamic factor σ is closer to 1. In the basic Ant Colony Algorithm, the increment of pheromone in each cycle depends only on the total distance travelled by each ant. However, the adaptive dynamic factor selected in this paper can adjust the update of pheromone adaptively. With its help, pheromones also obtain an increment according to the optimal solution and the average length of the current iteration. The hyperbolic tangent function with different values of γ is shown in Figure 1 below. It is obvious that the function looks different when the parameter γ is varied: the higher the value of γ, the higher the sensitivity of the function.

Figure 1 Graph of dynamic factor function.

Ant Colony Algorithm Based on Path Weight

The mathematical model of the Ant Colony Algorithm is an ideal assumption. However, there are many external factors in an actual maritime search and rescue operation. In order to increase the practicability of the Ant Colony Algorithm in maritime search and rescue route selection, a path weight matrix is introduced into the algorithm. When choosing search and rescue routes, the optimal path is not necessarily the shortest path; the time each search and rescue target has spent in the water should also be taken into account. Search and rescue vessels should give priority to reaching the target with a longer time in the water if the distance between the two roads is not much different. In this paper, the weight of each path is valued depending on how long the next target point (such as T1, T2, T3, T4) has been in the water. According to the duration of each target point, the weight values of all routes are calculated as Equation (9), where T_j is the falling time of target j and ε is a parameter that controls the degree of impact of the fall time, 0 < ε < 1. The value of the weight changes as time increases. Since an exponential function is included in the formula, the longer the falling time of the target point, the faster the weight value will increase; that is, when the falling time exceeds 20 hours, the weight value will increase sharply. According to the weight value of each path, the corresponding state transition probability can be obtained as Equation (10), where φ_ij is the weight of path (i, j) at time t and γ is the factor of the weight.
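Equations (9) and (10) are likewise not reproduced in the extracted text. The sketch below assumes an exponential weight driven by the fall time of the destination target and a transition rule in which the usual τ^α η^β term is multiplied by φ^γ; both forms are consistent with the behaviour described above (0 < ε < 1, sharp growth beyond roughly 20 hours, priority given to targets that have been in the water longer) but remain assumptions.

import numpy as np

def path_weight(T_j, eps=0.5):
    # Weight attached to arcs leading to target j, growing exponentially with
    # the time T_j (hours) that the target has spent in the water.
    # exp(eps * T_j) is an assumed form of Eq. (9).
    return np.exp(eps * T_j)

def weighted_transition_probabilities(tau, eta, phi, current, allowed,
                                      alpha=0.7, beta=0.9, gamma=1.0):
    # Assumed form of Eq. (10): the weight phi of the destination target scales
    # the usual pheromone/heuristic term, so long-immersed targets are visited
    # earlier even when they are slightly farther away.
    w = np.array([tau[current, j] ** alpha * eta[current, j] ** beta
                  * phi[j] ** gamma for j in allowed])
    return w / w.sum()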
Overall Procedure of Improved Ant Colony Optimization

According to the description of the improvement method above, the procedure for designing search and rescue routes is as follows:

Step 1. Enter the coordinates of each target point received. Set the parameters according to the actual situation.
Step 2. Initialize the algorithm.
Step 3. Each ant starts at the starting point (the initial position of the search vessel) and constructs its route according to Equation (10). Update the value of every ant's tabu table and L_K.
Step 4. When all ants reach the end, record the values of L_K and L̄ in this iteration. The updating rule in Equations (5)-(7) is applied to change the pheromone level.
Step 5. Update the global optimal solution. If the current shortest path length is shorter than the global shortest path length, the global solution is replaced by the current solution.
Step 6. If the largest iteration number is reached, the calculation stops. Otherwise, go to Step 3.

The pseudocode of the improved algorithm is as follows:
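The paper's own pseudocode figure is not reproduced in this text. As a stand-in, the following compact Python sketch runs through Steps 1-6; because Equations (5)-(10) are missing here, the dynamic factor, the boosted deposit and the weighted transition rule reuse the assumed forms sketched above, and the name improved_aco and all parameter defaults are illustrative only.

import numpy as np

def improved_aco(points, fall_times, start=0, n_ants=20, n_iter=200,
                 alpha=0.7, beta=0.9, gamma=1.0, rho=0.3, Q=100.0,
                 b=1.0, lam=2.0, eps=0.5, seed=0):
    # Steps 1-2: read target coordinates and fall times, set parameters,
    # initialise the distance, heuristic, weight and pheromone matrices.
    rng = np.random.default_rng(seed)
    points = np.asarray(points, dtype=float)
    fall_times = np.asarray(fall_times, dtype=float)
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    eta = 1.0 / (d + np.eye(n))              # heuristic = 1 / distance
    phi = np.exp(eps * fall_times)           # per-target weight (assumed Eq. (9))
    tau = np.ones((n, n))
    best_route, best_len = None, np.inf

    for _ in range(n_iter):                                      # Step 6: iterate
        routes, lengths = [], []
        for _ in range(n_ants):                                  # Step 3: build routes
            route, tabu = [start], {start}
            while len(route) < n:
                i = route[-1]
                allowed = [j for j in range(n) if j not in tabu]
                w = np.array([tau[i, j] ** alpha * eta[i, j] ** beta
                              * phi[j] ** gamma for j in allowed])
                j = int(rng.choice(allowed, p=w / w.sum()))      # assumed Eq. (10)
                route.append(j)
                tabu.add(j)
            L = sum(d[a, c] for a, c in zip(route[:-1], route[1:]))
            routes.append(route)
            lengths.append(L)

        L_min, L_avg = min(lengths), float(np.mean(lengths))     # Step 4: record L_K, L-bar
        tau *= (1.0 - rho)
        for route, L_k in zip(routes, lengths):
            sigma = 0.5 + np.arctan(lam * (L_avg - L_k) / L_avg) / np.pi
            dep = (Q / L_k) * (1.0 + b * (L_min / L_k) * sigma)  # assumed Eqs. (5)-(7)
            for a, c in zip(route[:-1], route[1:]):
                tau[a, c] += dep
                tau[c, a] += dep

        if L_min < best_len:                                     # Step 5: global best
            best_len = L_min
            best_route = routes[int(np.argmin(lengths))]
    return best_route, best_len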
Algorithm Simulation

This article uses MATLAB simulation of both the original and the improved algorithm on different grid maps to verify the effectiveness and reliability of the improved ACO. The simulation experiment sets one point as the starting point together with 10 other points; the aim is to find the shortest path passing through the 10 points from the starting point. An experimental method is used to determine the optimal combination of the parameters in the algorithm, and the values of the parameters are shown in Table 1, where α, β and γ represent the weight factors of pheromone, distance and falling time, respectively. Different collocations of these parameters may affect the calculation, and their values are generally between 0.7 and 1. The pheromone recurrence factor ρ determines the residual pheromone after each round of search; a reasonable value of this parameter prevents the pheromone concentration from increasing rapidly and making the algorithm fall into a local optimum. λ determines the shape of the adaptive dynamic factor function, and its value in this paper is between 1 and 2. Moreover, the number of ants (k) is twice the number of target points.

Table 1 The values of parameters
Parameter                          Value
Pheromone factor (α)               0.7
Heuristic factor (β)               0.9
Time factor (γ)                    1
Parameter of the function (λ)      2
Pheromone amount (Q)               100
Maximum iterations (m)             200
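For orientation, a hypothetical run of the improved_aco sketch above with the Table 1 values (α = 0.7, β = 0.9, γ = 1, λ = 2, Q = 100, 200 iterations, and ants equal to twice the number of targets) might look as follows; the coordinates and fall times are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
points = rng.uniform(0, 100, size=(11, 2))        # index 0: start position of the vessel
fall_times = np.r_[0.0, rng.uniform(1, 24, 10)]   # hours each target has been in the water

route, length = improved_aco(points, fall_times, start=0, n_ants=20, n_iter=200,
                             alpha=0.7, beta=0.9, gamma=1.0, Q=100.0, lam=2.0)
print(route, round(length, 2))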
When the path weight is not introduced (φ_ij = 1), the shortest path is the optimal path. Table 2 shows a comparison between the results of the basic ACO, the improved ACO and other modified Ant Colony algorithms mentioned in the literature [22,24]. The experiment simulates 10, 15 and 20 target points, respectively, and was performed 1000 times for each setting. Table 2(a) shows the probability of finding the optimal solution under the different conditions, that is, the probability of finding the optimal solution in the 1000 trials. It is observed that the improved algorithm is more likely to find the optimal solution because the algorithm in this paper has improved the pheromone updating strategy. Due to the introduction of the adaptive dynamic factor in the pheromone updating strategy, the improved algorithm makes the pheromone updating process more reasonable. In this way, the problem of the algorithm easily falling into a local optimal solution is alleviated, and the probability of finding the optimal solution is improved. The average length of all paths calculated by each group is then computed and normalized (the length of the average path divided by the length of the shortest path). The results are shown in Table 2(b). It can be concluded that the normalized average path length of the improved algorithm is smaller; that is, the search results of the algorithm mainly concentrate on the shortest path and some suboptimal paths. The average number of iterations in each experiment is counted in Table 2(c). It can be seen that the algorithm has good convergence and can find the optimal solution quickly.

In order to prove the improvement effect of the algorithm, a non-parametric ranking based Friedman test is performed in Table 3. The ranking is based on the calculation results of each algorithm under three indexes, and on this basis the algorithms are compared.

Table 3 Ranking of the indicators
Index                      original ACO   algorithm in [21]   algorithm in [19]   improved ACO
Average path               4              2                   3                   1
Average iteration times    4              3                   1                   2

The test statistic is calculated as follows:

χ² = 12 / (3 × 4 × 5) × (12² + 8² + 6² + 4²) − 3 × 3 × 5 = 7   (11)

Since χ² is greater than the critical value χ²_0.05, it can be concluded that the calculation results of the four algorithms are significantly different; that is, the improved algorithm is significantly improved compared with the other algorithms.
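The Friedman statistic in Equation (11) can be checked in a few lines. The rank row for the probability-of-search index below is inferred from the column rank sums (12, 8, 6, 4) used in the equation rather than quoted from the table, so it should be read as a reconstruction.

import numpy as np

# Rows: probability of search (inferred), average path, average iteration times.
# Columns: original ACO, algorithm in [21], algorithm in [19], improved ACO.
ranks = np.array([
    [4, 3, 2, 1],
    [4, 2, 3, 1],
    [4, 3, 1, 2],
])
n_blocks, k = ranks.shape              # 3 indexes, 4 algorithms
R = ranks.sum(axis=0)                  # column rank sums: 12, 8, 6, 4
chi2 = 12.0 / (n_blocks * k * (k + 1)) * np.sum(R ** 2) - 3 * n_blocks * (k + 1)
print(chi2)                            # 7.0, matching Eq. (11)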
Figure 2 shows the convergence of the different algorithms. It can be seen more intuitively that the original ACO may stop searching in the early stages of optimization and fall into a local optimum, whereas with the addition of the adaptive dynamic factor the improved algorithm keeps searching to find the optimal solution.

Figure 2 Contrast diagram of convergence.

When the path weight is not introduced, the algorithm is required to find the shortest route, as in many other ACOs. In experiment No. 1 in Table 2, the optimal path optimization result is shown in Figure 3(a); the total distance of the path is 798.4493. When the path weight is introduced as needed, the weight value of each path is related to the corresponding falling time. The optimal path optimization result is then shown in Figure 3(b); the total distance of the path is 817.5914. The comparison results show that when the time factor is introduced as the path weight and the Ant Colony Algorithm is used to design the search and rescue route, the optimal route may become longer, but it passes first through the target points with a longer time of falling into the water. In addition, as the time in the water increases, the route is optimized in real time, as shown in Figure 3(c). If a person has been in the water for too long during the search and rescue process, the route is changed in real time to ensure the survival of the personnel. In this way, everyone's rescue time can be guaranteed so as to ensure the survival rate. This has important practical significance in search and rescue operations.

The Application of Improved ACO on Search and Rescue Based on Multi-Objective

The maritime search and rescue system is mainly divided into two parts, the monitoring center and the user terminal. The working principle of the system is as follows.
When a person falls into the water, the user terminal sends positioning information to the monitoring center, which then designs a reasonable search and rescue route; the ship searches and rescues according to that route. In the process of maritime search and rescue there are usually many targets. The targets may drift under the action of currents and wind, so their positions change; meanwhile, the number of search and rescue vessels is often more than one. Therefore, this algorithm can also be used to design reasonable search and rescue routes for multiple vessels in order to achieve higher search and rescue efficiency. MATLAB is used to simulate the search and rescue route design, and the experiment assumes that there are two vessels. The result of the route design is shown in Figure 4: Figure 4(a) is the route designed with 10 target points, and with 20 target points the route is shown in Figure 4(b). This shows that the algorithm can design reasonable routes for multi-target search and rescue. In the course of route design the falling time factor is taken into account, and the route can be optimized in real time during search and rescue.

The effect of the algorithm in a practical setting is shown in Figures 5-6. There are 10 search and rescue points distributed in one sea area and two search and rescue vessels on the shore. Each search and rescue point sends out its position and time information. Based on the location and time information of each point received, the monitoring center designs a reasonable search and rescue route for the two vessels to achieve better results. The route design results of the two ships are shown in Figure 5(a). In addition, as the search time increases, the route can be adjusted in real time, as shown in Figure 5(b). This avoids the danger of some targets being in the water for too long, so as to ensure the survival rate of the personnel. The experiment proves that the improved ACO can design routes for multiple targets effectively and can be applied to multiple search and rescue vessels. Since the time of falling into the water is taken into account in the algorithm, it is more in line with actual needs. Using this algorithm to design search and rescue routes can effectively improve the timeliness of search and rescue, which has great significance for maritime search and rescue operations.
Conclusion

This article proposes a solution to the problem that the original ACO easily falls into a local optimum during searching. By adding an inverse tangent function as a dynamic factor in the pheromone update, the improved Ant Colony Optimization can adaptively adjust the pheromone update strategy for the optimal solution in each iteration. This effectively alleviates the tendency of the Ant Colony Algorithm to fall into a local optimal solution, and the probability of obtaining the optimal solution increases. At the same time, a path weight is introduced into the algorithm, so that the obtained optimal path better meets the requirements of practical applications. In addition, the improved Ant Colony Optimization can be applied to maritime search and rescue activities. Simulation results show that the algorithm can effectively design reasonable search and rescue routes for multiple vessels. This is of great significance in maritime search and rescue.
2020-09-28T16:28:26.065Z
2020-09-24T00:00:00.000
{ "year": 2020, "sha1": "53257eda0993e792eb82e4a7342b58e766557ff4", "oa_license": null, "oa_url": "https://itc.ktu.lt/index.php/ITC/article/download/25295/14387", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "53257eda0993e792eb82e4a7342b58e766557ff4", "s2fieldsofstudy": [ "Engineering", "Computer Science", "Environmental Science" ], "extfieldsofstudy": [ "Computer Science" ] }
246725927
pes2o/s2orc
v3-fos-license
Juridification of concept "artificial intelligence" and limits of using its technology in litigation

Active development of artificial intelligence (AI) technology raises the problem of integrating this phenomenon into legal reality, of the limits of using this technology in social practices regulated by law and, ultimately, of developing an optimal model for the legal regulation of AI. This article focuses on the problem of developing the legal content of the concept of AI, including some methodological and ontological foundations of such work. The author suggests certain invariant characteristics of AI significant for legal regulation which, if adopted by the legal scientific community, could be used as a scientifically grounded basis for constructing specific options for legal regulation that correspond to the needs of a particular sphere of social practice. The author believes that a scientifically grounded legal concept of AI is largely able to determine the direction and framework of applied legal research on the multifaceted problems of using AI technology in social interactions, including the administration of justice, and to separate the related legal issues and problems from issues of an ethical, philosophical, technological and other nature.

Introduction

The philosophy of artificial intelligence (hereinafter referred to as "AI") implies a distinction between weak AI and strong AI [1][2]. This article focuses on weak AI. Discussion of the legal aspects of the functioning of a strong AI seems premature, since its creation is a scientific and technical problem and the fundamental possibility of its successful solution remains debatable today. In October 2019, the National Strategy for the Development of AI in the Russian Federation for the period up to 2030 was approved [3]. Following it, in August 2020, the Concept for the development of interfacing between AI and robotics technologies until 2024 was approved [4]. The goals of the Concept imply the definition of approaches to the legal regulation of the use of AI in various spheres of social relations and the creation of prerequisites for the foundations of such legal regulation. At the same time, the Concept describes a number of problems of regulating relations in the field of AI, among which there are problems that can be placed in the procedural context of the administration of justice, namely: 1) the issue of the possibility of legal delegation of decision-making to AI systems (a conceptual problem of a legal nature); 2) the use of probabilistic assessments for decision-making by AI systems and the impossibility, in some cases, of a complete explanation of their decisions; 3) maintaining a balance between the requirements for the protection of personal data and the need to use them to train AI systems; 4) the tasks of developing and clarifying terms and definitions in the field of AI technologies. The identified problems can be reduced to issues of a more general theoretical nature: 1) the problem of developing a legal concept of AI (juridification of the AI concept); 2) the problem of determining the limits of the use of AI technology in judicial enforcement.

Juridification of AI concept

The Concept draws attention to the lack of a clear understanding of the term AI and to the fact that this leads to terminological problems in building regulation.
At the same time, when solving this problem, it is proposed: 1) to focus on the variability of specific definitions depending on the industry the AI technology is applied in; 2) if possible, avoid introducing a normative definition of the term being uniform for all sectors into Russian legislation. It seems that the problem is somewhat deeper than formulating legal definitions in normative acts (by the way, such definitions already exist [3,5]). Prior to proposing specific formulations for the definition of the term, it is necessary to determine the very concept of AI possible to be used in law, to understand that there is AI not so much and not only from a technical, but also from a legal perspective, to understand how to write this phenomenon being new for law in the system of generally accepted legal concepts, how to relate to AI from the viewpoint of law and what place should be given to this phenomenon in legal taxonomy. Juridification of the concept of AI means conceptualization of this phenomenon in a system of specific concepts and categories the lawyer's professional consciousness operates with. Specific definitions may indeed be different depending on their purposes. However, before considering the definitions of the term AI, it is necessary to define invariant characteristics since the invariant sets the framework of legal regulation and the permissible limits of definitions variability. Such an invariant must be scientifically grounded since it is the scientific approach enabling to regard the methodological and ontological foundations of the analysis. I.N. Tarasov expressed and argued the opinion that the use of definitions of AI based on the description of technical and technological characteristics is senseless in jurisprudence as it does not influence legal regulation, and incorporation of AI into legal reality requires either an independent term for designating it in the field of law or filling the concept with specific legal content, while the development of a conceptual-categorical apparatus according to the rules and regarding the science of law is a necessary condition for effective and correct legal regulation [6]. We do not completely agree with the statement regarding the use of an independent term (as the choice of the signifier here can be arbitrary) but we fully share the opinion of I.N. Tarasova on the need for theoretical developments aimed at the actual legal understanding of AI and the ways of integrating it into legal reality. Perhaps the statement about the complete uselessness of the definitions of AI based on the description of its technical and technological characteristics is too categorical but in any case such definitions should not be transferred to the sphere of law without a preliminary theoretical analysis of the possibility, methods and limits of their use for the purposes of legal regulation. Methodological bases of AI concept juridization In the literature devoted to the methodology of legal science, it is noted that legal concepts are formed in different ways [7]. First, it can be organic concepts formed as a result of professional reflection on legal practice or its doctrinal design, or arisen as a result of the theoretical organization of legal reality, its conceptualization within the framework of a certain legal theory. In both cases, such concepts are initially a product of legal thinking. 
Secondly, these can be concepts associated with law that arise in non-legal spheres of thought and practice and are involved in legal circulation without changing their volume and content. Such concepts have no proper legal content. Thirdly, these can be concepts consolidated by law that arise in non-legal spheres of knowledge but subsequently adapted to the needs of legal practice; consequently, their content changes and acquires a specific legal character. Thus, the concept is subject to juridization. The concept of legal regulation of relations in the field of AI proceeds from their association of this concept (Section 6): it is proposed to build and harmonize the ontology of the subject area by the efforts of the expert community and specialized technical committees under the Federal Agency for Technical Regulation and Metrology of the Russian Federation; it is proposed, where necessary, to use the definitions contained in the documents on standardization, or to give definitions relevant specifically for this area of regulation for legal regulation. In my opinion, the technical characteristics poorly contribute to organically fitting AI as a phenomenon into legal reality. Although as a palliative solution, this is perfectly acceptable. Perhaps it is worth trying to consolidate the non-legal concept of AI into law, to breathe legal content into it. By the way, the desire of lawyers to consolidate the concept of AI can be traced in the scientific literature. However, such a desire does not exclude cases of abundant borrowing, the inclusion of technical aspects in the content of the developed legal concept [8]. How can the concept of AI be consolidated into law? Two approaches are possible here: essentialist and constructivist ones. The essentialist approach presupposes the desire to identify and fix the essence of the phenomenon under study in the concept, to determine, to grasp the "what" nature of artificial intelligence in the legal concept. This approach implicitly assumes that there is some immutable essence of AI, which can and should be identified by lawyers and recorded in the corresponding legal concept. Essentialist approach describes the properties of artificial intelligence, it is descriptive. The constructivist approach implies the rejection of the desire for analysis, identification and representation in the legal concept of the essence of artificial intelligence. This approach does not describe the entity but ascribes properties, it is ascriptive. The legal concept of artificial intelligence here may not coincide at all with its technological content. This approach seems to me more promising from a practical perspective. The essentialist approach leads to the active inclusion of the concept of "artificial intelligence" proposed by representatives of legal science of various signs of a technological nature in the content of the definitions [8,9], which is more characteristic of the association rather than this concept consolidation. Within the framework of the same approach, a discussion arises about endowing artificial intelligence with a sign of subjectivity (it will be discussed below), when such a possibility is discussed from the standpoint of assessing the cognitive properties of artificial intelligence [10]. At the same time, it seems that we, lawyers, cannot say anything about the essence of artificial intelligence since it is initially not a legal but technological phenomenon. 
However, we can credit artificial intelligence with those legally significant properties that correspond to the tasks of legal regulation. The constructivist approach is utilitarian. If it is necessary to include the technology of artificial intelligence in legal circulation, no identifying of the technological essence of this phenomenon is required. It is enough to credit artificial intelligence with the properties significant from the viewpoint of law that would serve the implementation of certain practically significant tasks. Thus, it seems that the concept of AI can and should be consolidated within the framework of the utilitarianconstructivist approach rather than the essentialist one. 2.2 Ontological foundations of consolidating AI concept into law The issue of subjectivity is a key issue that is explicitly or implicitly raised in the legal literature in relation to the ontological legal status of AI. Often, researchers in the field of jurisprudence try to include the property of subjectivity in the concept of AI. Such proposals are substantiated either through arguments of a utilitarian nature (the need to include AI into legal circulation), or through attempts to reveal the essence of AI that constitutes subjectivity; sometimes such proposals are not substantiated at all but are proposed de lege ferenda (draft law of Grishin) [5,[11][12][13][14][15][16][17][18][19][20]. In order to include a phenomenon in legal circulation, it is not necessary to ascribe the properties of subjectivity; the subject of law and the subject of activity should not be identified. According to the just remark of Professor S.I. Arkhipov, the subject of law does not have to be identified with a participant in legal relations or with the legal role he plays [21]. In this sense, the technology of artificial intelligence, if necessary, can fulfill the legal role assigned to it by a person even without the status of a subject of law. If we associate subjectivity with cognitive properties, then we get into the subject field of the general theory of law and philosophy of AI. From the standpoint of the general theory of law, the question about the features constituting legal subjectivity arises. As applied to a person, subjectivity, as a rule, is substantiated either through the attributive properties of consciousness (will, the ability to think and make autonomous decisions, self-awareness), or through an axiological approach. In the first option, we cannot assert that a weak AI thinks, which has been long discussed in the philosophy of AI [1,2,22,23]. Moreover, some authoritative researchers deny the possibility of a positive answer to the question of consciousness even in the prospect of creating a strong AI [1]. Finally, the very scientific concept of consciousness still remains somewhat vague. According to the apt remark of Prof. E.A. Mamchur, no one is unambiguously able to answer the question of what consciousness is today [24]. In relation to law, this approach cannot be considered universal. For example, a person who does not possess all the fullness of mental properties is anyhow a subject of law (incapable). At the same time, some animals such as the great primates are not endowed with legal subjectivity despite the fact that they demonstrate certain mental properties that are characteristic for a person as well (the ability to experience psychological reactions, self-identification, empathy, etc.) [25]. 
As for the axiological approach, in which arguments of an ethical nature come to the fore and the attitude towards a person as a social value recognized by law is a sufficient and limiting basis for granting him the status of a subject [26,27], this approach is obviously inapplicable to AI, since AI is perceived as a means owing to its service character [28]. In addition, the anthropocentrism of law prevents a serious discussion of the issue of AI subjectivity within this field. Reflecting on the subjectivity of AI means striving to overcome the anthropocentrism of law, which is doubtful in itself (more detailed argumentation is given in [6]). The attitude towards a weak AI is an attitude towards a means, not a value, while the status of a subject of law presupposes an attitude towards it as a self-sufficient value and not as a means.

In contrast to the essentialist approach, the utilitarian-constructivist approach to defining the legal content of the AI concept makes it possible to overcome all the designated controversial aspects. Thus, from the standpoint of utilitarian constructivism, the question of whether artificial intelligence has mental properties that reproduce or imitate individual cognitive abilities of human consciousness is deactivated. Law regulates not thinking but behavior. In this regard, it does not matter whether artificial intelligence thinks or only imitates thinking, how complete this imitation is, whether it is sufficient to raise the question of subjectivity, and so on. What is important is that artificial intelligence is capable of autonomous rational action, specifically, making decisions without operator intervention. Due to the anthropocentrism of law, the issue of the subjectivity of artificial intelligence becomes irrelevant. Artificial intelligence technology, if necessary, can perform the legal role assigned by a person even without possessing the status of a subject of law. Here, too, numerous technological aspects of the concept of AI lose their significance, since they do not matter for the legal regulation of using this technology in legal practice. The technological nature can simply be stated by lawyers without disclosing its essence in the content of the concept, only in order to emphasize the anthropogenic nature of AI. However, even this seems redundant, since the concept of technology itself presupposes and includes this feature. Thus, all these issues not only cease to be debatable, they are simply removed from the current agenda.

When constructing a scientifically grounded invariant of the legal AI concept, it is important not so much to formulate a specific definition as to fix the key characteristics important for legal regulation, since these characteristics will remain unchanged when formulating options for specific regulatory definitions. Taking into account the above argumentation, we believe that such characteristics could be as follows: 1) the capacity of AI technology for autonomous rational action (independent decision-making without operator intervention); 2) the objectivity of AI technology (understood as a fundamental focus on the lack of legal capacity in artificial intelligence); 3) the service nature of artificial intelligence (understood as the limitation of the goals and objectives of AI functioning by human needs).

Limits of using AI technology in judicial enforcement

Is it possible to entrust judicial enforcement to a non-subject of law? Can AI perform law enforcement functions in litigation?
This formulation of the questions avoids the well-known discussion in procedural doctrine about the limits of the meaning of the concept of justice (whether the concept of justice covers judicial enforcement that is not related to the resolution of disputes about law, such as writ proceedings and special proceedings). I believe that the first question can be answered in the affirmative and the second in the negative. The arguments for my position are as follows. Judicial enforcement requires not only perfunctory measures but also meaningful legal thinking, including dealing with evaluative categories. Even in indisputable proceedings, the judge evaluates the written evidence according to his or her inner conviction, exercises judicial discretion, qualifies the legal relationship, assesses it for indisputability, and so on. The principle of free, not formal, assessment of evidence is implemented in the trial. All this requires operating with specific legal meanings. Unlike humans, AI is incapable of operating with meanings. Back in the early 1980s, J. Searle, a well-known expert in the philosophy of AI, conducted the "Chinese room" thought experiment and convincingly demonstrated that AI operates exclusively with syntax, not semantics [1,23]. The AI activity space is a sphere of bare form without any content. In this regard, it is hardly possible to entrust AI with the function of value judgments in the framework of judicial enforcement. In addition, the use of AI in assessing evidence would contradict the principle of free assessment of evidence based on the inner conviction of the court (AI cannot have an inner conviction since AI does not operate on meanings). The limits of using AI technology in litigation are outlined by the concept of "predictive justice" [28]. Within the framework of this concept, AI can only be used in an instrumental sense, as a means of analyzing large amounts of data. At the same time, this issue can be resolved differently in relation to alternative forms of dispute resolution, in particular in arbitration, the competence of which is of a contractual rather than a public-law nature. When referring a dispute to arbitration, the parties must understand the possibilities and limitations of this form of enforcement; their will and the expression of their will to appeal to an arbitrator must coincide and not contain defects. Therefore, if the parties voluntarily and consciously submit the dispute to AI for resolution, understand and accept the impossibility of meaningful enforcement and agree with this, then there is no reason to deprive them of such an opportunity.
Conclusion

The effectiveness of legal regulation of the use of AI technology in various areas of social practice, including law enforcement, largely depends on a scientifically grounded legal concept of AI, that is, on the legal characteristics of this non-legal phenomenon, on the way lawyers perceive and understand it, and on the role of AI in legal taxonomy.

The task of forming the legal concept of AI is not reduced to the formulation of specific legal definitions and cannot be solved at that level. Juridification of the concept of AI requires theoretical legal reflection, thinking through the content of this concept in the context of law and applying the means and methods of legal science; juridification of the AI concept means expansion of its content by defining legally significant characteristics and properties.

The result of the juridification of the AI concept should be a set of invariable (invariant) legal characteristics, while the specific definitions of this term in regulatory acts may differ depending on the needs of the practice of legal regulation. The legal concept of AI will determine the proper legal framework for the study of this multidimensional phenomenon and the promising directions of applied developments in the formation and optimization of the model of legal regulation.

When forming a legally significant concept of AI, it is proposed to abandon the descriptive essentialist approach, aimed at identifying the essence of AI, in favor of the ascriptive constructivist approach, which involves attributing to the content of the concept of AI legal properties that, on the one hand, are significant for the purposes of legal regulation and, on the other hand, restrain the limits of legal regulation.

The following are proposed as invariant legally significant elements of the content of the AI concept: a) objectivity of AI (understood as the refusal of attempts to ascribe any properties of legal subjectivity to AI); b) autonomy (understood as the ability of AI to act, including making decisions, without operator intervention); c) service nature (understood as the limitation of the goals and objectives of AI functioning by human needs).

The limits of using AI in law enforcement depend on the form of law enforcement: in the administration of justice, AI can only be used for the purpose of analyzing large amounts of data, since AI does not operate on meanings but acts at a formal, syntactic level, which excludes the possibility of meaningful work with evaluative legal concepts and also contradicts the principle of free evaluation of evidence; in those areas of law enforcement where the competence is of a contractual rather than a public-law nature, as in arbitration, the parties may, by their voluntary and informed agreement, entrust dispute resolution to AI.
Irreversible electroporation and the pancreas: What we know and where we are going?
Pancreatic adenocarcinoma continues to have a poor prognosis, with 1- and 5-year survival rates of 27% and 6%, respectively. The gold standard of treatment is resection; however, only approximately 10% of patients present with resectable disease. Approximately 40% of patients present with disease that is too locally advanced to resect. There is great interest in improving outcomes in this patient population, and ablation techniques have been investigated as a potential solution. Unfortunately, early investigations into thermal ablation techniques, particularly radiofrequency ablation, resulted in unacceptably high morbidity rates. Irreversible electroporation (IRE) has been introduced and is promising, as it does not rely on thermal energy and has shown an ability to leave structural cells such as blood vessels and bile ducts intact during animal studies. IRE also does not suffer from the heat sink effect, a concern given the large number of blood vessels surrounding the pancreas. IRE showed significant promise during preclinical animal trials and as such has moved on to clinical testing. There are as of yet only a few studies which look at the applications of IRE within humans in the setting of pancreatic adenocarcinoma. This paper reviews the basic principles, techniques, and current clinical data available on IRE.
INTRODUCTION
Pancreatic cancer, despite extensive research, remains one of the most aggressive cancers, having a poor prognosis with 1- and 5-year survival rates of 27% and 6%, respectively [1]. According to the American Cancer Society and World Health Organization, 46420 patients were diagnosed with pancreatic cancer in the United States in 2014 and 338000 in the world in 2012 [1,2]. In the United States 39590 of those patients died in 2014, making it the fourth leading cause of cancer death in both women and men, with the prevalence increasing by 1.3% per year as well [1]. Only approximately 10% of these patients present with local disease, which is considered surgically resectable; however, even in these patients the 5-year survival rate remains low at 24% [1]. Of the remaining 90% of patients, approximately 50% present with metastatic disease, leaving about 40% presenting with localized disease which is considered surgically unresectable, generally secondary to encasement of adjacent vessels such as the portal vein, celiac artery, and superior mesenteric artery [1]. Patients without metastatic disease, but deemed unresectable due to locally advanced disease, are now classified as having locally advanced pancreatic cancer (LAPC). While surgical resection, when a viable option, remains the gold standard, the majority of patients will receive chemotherapy and/or radiation therapy. The mainstay of chemotherapy in pancreatic adenocarcinoma for close to fifty years was 5-fluorouracil (5-FU) monotherapy, despite a mean survival of less than 6 mo [3]. In the late 1990s gemcitabine was introduced and demonstrated a survival benefit as compared with 5-FU and thus replaced it as first line therapy [3,4]. As gemcitabine became firmly established as the first line chemotherapeutic agent, multiple trials looked at combining gemcitabine with a variety of other chemotherapeutic agents; however, only a few demonstrated a survival benefit [3,5].
The combination of gemcitabine with capecitabine showed a trend toward improved survival, with post hoc analysis of two randomized controlled trials showing statistically significant improvement in overall survival in patients with a good performance status [6-8]. In 2011 a new trial found that FOLFIRINOX (5-FU, leucovorin, irinotecan, and oxaliplatin) demonstrated a significant overall survival benefit in chemotherapy naive patients as compared to gemcitabine alone [9]. Lastly, a study in 2013 revealed a survival benefit when nab-paclitaxel was combined with gemcitabine as compared to gemcitabine alone [10]. Improving chemotherapeutic options for pancreatic adenocarcinoma remains an active area of research with multiple ongoing studies. Radiation therapy has been used in the setting of pancreatic adenocarcinoma both in the neoadjuvant setting and in an attempt to reduce local recurrence rates after resection. Attempting to prevent local recurrence after resection seemed like a natural role for radiation therapy; however, to date studies have shown a mixed response [11-13]. This controversial area is the focus of the APACT trial, which will hopefully provide a clearer answer [14]. The role of radiation therapy in the neoadjuvant setting is also as of yet unclear, with a few studies showing some promise [14,15]. This is also an area of active study, with the recent clear definition of borderline resectable disease assisting in making future studies comparable [14,15]. After the introduction of ablation, interest grew in it as a possible way of improving patient outcomes in this difficult disease process. Initial investigations into ablation as a possible therapy centered on thermal techniques, with radiofrequency ablation (RFA) being the most studied modality. The reported morbidity rates were, regrettably, unacceptably high in the majority of these published studies [16-19]. Anatomy at least partially accounts for this elevated morbidity, as the pancreas is surrounded by multiple delicate structures such as the common bile and pancreatic ducts. Several vessels, including the celiac artery, superior mesenteric artery, portal vein, and splenic vein, also surround the pancreas, further complicating and restricting the efficacy of thermal ablation techniques, primarily as a result of the heat sink effect [20,21]. When the heat sink effect, defined as tissue cooling during ablation by adjacent blood vessels, occurs, the temperature surrounding major vessels does not reach levels high enough to cause cell death. Although microwave ablation (MWA) has been shown to be less susceptible to the heat sink effect, it remains vulnerable to the phenomenon [22]. The anatomic difficulties described above also provide a significant obstacle to other thermal ablation techniques, including cryoablation, high intensity focused ultrasonography, and MWA, which to date have not been as well studied as RFA. Irreversible electroporation (IRE) provides a unique alternative, allowing tissue ablation without being reliant on thermal effects. It also has the added ability of maintaining the scaffolding of surrounding tissues, making it of great interest in this anatomically complex area.
IRE TECHNIQUE
Reversible electroporation has been used for many years in the basic science setting to implant foreign molecules into cells [23,24]. Reversible electroporation works by applying an electrical field across the membrane, causing the membrane to become porous through an as yet incompletely understood process [23,25].
This lets the investigator introduce a desired molecule, such as RNA or DNA, into the cell [25,26]. IRE uses this theory but applies a higher voltage, leading to cell death by apoptosis. Although the exact mechanism by which IRE induces apoptosis is not clear, it appears to be via permanent nanopore formation and resultant ion disruption [27]. As previously noted, thermally based techniques struggle with high morbidity when treating pancreatic adenocarcinoma due to the delicate structures in close proximity [28]. IRE, on the other hand, has been shown in animal studies to produce apoptosis of cancer cells while sparing the delicate surrounding scaffolding, including bile ducts and blood vessels [29-31]. This distinctive property makes IRE a desirable modality, particularly given the structurally rich pancreatic region. IRE also provides the benefit of yielding apoptosis rather than liquefactive necrosis as in thermal techniques, and it is not subject to the heat sink phenomenon [29]. While initially IRE was thought to not induce any thermal effects, recent studies have shown that a small area of thermal effect is likely present immediately adjacent to the probe [32]. The unique mechanism of IRE results in a few necessary precautions during its utilization. The high voltages created by IRE produce significant muscular contractions [33]. It is for this reason that the patient must be placed under general anesthesia with full neuromuscular blockade [33]. The blockade is tested with a twitch technique prior to starting. ECG monitoring is also required to monitor for arrhythmias, which are rare and typically transient. The concern of arrhythmia leads some authors to promote the placement and use of arterial lines. Currently there is one commercially available IRE machine, the NanoKnife (AngioDynamics, Queensbury, New York). This device supports either unipolar or bipolar probes. The more commonly used unipolar probes require placement in pairs, which is technically challenging as they must be placed in parallel orientation and spaced no further than 1.5-2.0 cm apart. The probes create a relatively small ablation field (approximately 2-3 cm) [34-36], and therefore it is common for multiple probe pairs to be placed and/or the probes to be repositioned several times during the procedure. Probes can be placed percutaneously, laparoscopically, or using an open surgical approach. When placed intraoperatively, intraoperative ultrasound is used [37-39]. When placed percutaneously, both ultrasound and CT guided placement have been described [40,41]. After probe placement, the ablation device is set to produce high voltages, usually between 1500-3000 V, in pulses of 70-100 microseconds. Typically 90 such pulses are delivered, which takes only a few minutes, after which the ablation is complete. Once the intended ablations have been performed, the patient will typically undergo imaging, either by intraoperative ultrasound, contrast enhanced ultrasound, or CT, to ensure that the lesion has been satisfactorily covered. After finishing the IRE procedure, the patient is observed, with the average length of admission varying significantly in the available studies from same-day discharge to admission for two weeks or more [29,37,39-41].
AVAILABLE DATA
A search of the Pubmed database with the terms "IRE AND pancreatic cancer" yielded 34 results, of which 6 studies were found to be case reports, case series, or prospective trials related to IRE and pancreatic cancer without significant patient overlap. Those studies are reviewed here. The remainder represented review articles (n = 16), animal studies (n = 5), or prior publications on a patient set that was reused as discussed below (n = 4). Two studies were excluded as they were case reports only discussing a complication, and therefore not felt to be relevant to this discussion. A single study was eliminated as it was a review of anesthetic requirements during IRE. Martin and his group have published multiple studies on pancreatic cancer and IRE [37,38,42,43]; because of significant patient overlap, only two of these studies are included and discussed here. Table 1 provides some of the most pertinent data for the 6 studies described below.
In 2013 Martin et al [38] compared a prospectively gathered group of fifty-four IRE patients with pancreatic cancer, retrospectively, to a group of eighty-five patients who received only chemotherapy and/or radiation. All of the patients had LAPC, with none being considered borderline resectable or having metastatic disease. The two groups were matched using propensity scores based on age, size of tumor, performance status, cardiac comorbidities, and pulmonary comorbidities. Of the fifty-four IRE patients, fifty-two (96%) underwent open surgical ablation and two (4%) underwent laparoscopic ablation. Nineteen patients underwent IRE followed by en bloc resection, after surgical restaging. Forty seven of the fifty-four (87%) IRE patients underwent post procedural chemotherapy, while ten (19%) of them underwent post procedural radiation therapy. In a ninety day follow up period, thirty two of the fifty-four (59%) IRE patients had adverse events. The average time from diagnosis to treatment was 5.1 mo, with a range of 1 to 32 mo. The average length of hospital stay was 7 d. When the IRE and chemoradiation only groups were compared, the IRE group had a better overall survival (20.2 mo vs 11 mo, P = 0.03), progression-free survival (14 mo vs 6 mo, P = 0.01), and distant progression-free survival (15 mo vs 9 mo, P = 0.02). However, the survival curves of the two groups appeared to converge back together at twenty months, which the authors postulated to be secondary to rapid progression of distant metastatic disease.
Martin et al [37] also recently published a series of forty eight patients who had borderline resectable or LAPC disease, in which they used IRE in an attempt to obtain a margin free, or R0, resection. Twenty three (48%) of the patients had LAPC while twenty five (52%) had borderline resectable disease. Of note, nineteen of these patients seem to be included in the previously discussed study by Martin et al [38]. Thirty three of the forty eight (69%) had undergone preoperative chemotherapy and thirty one (65%) underwent preoperative radiation therapy [12]. Thirty one of the forty eight (65%) patients underwent R0 resections, with the remaining undergoing R1 resections (35%). Adverse events were recorded for 90 d and developed in eighteen of the forty eight (38%) patients. At twenty four months, twenty eight patients (58%) had developed recurrence, the majority of which involved the liver or peritoneum.
Paiella et al [39] published a prospective study of ten patients who underwent IRE for LAPC utilizing a laparoscopic approach with intraoperative ultrasound (US) guidance. All patients who underwent IRE had previously undergone chemotherapy or chemoradiation therapy. The average length of hospital stay was 9.5 d, with 1 patient (10%) developing a postoperative abscess. One other patient (10%) died of septic shock, which was attributed to complications of ulcerative colitis rather than the procedure. The average time from diagnosis to treatment was 9.2 mo. The average overall survival was 7.5 mo following the procedure, with diagnosis to death time averaging 16.8 mo. Three of the ten (30%) patients received post procedural chemotherapy. After treatment, four (40%) patients showed partial response, three (30%) had stable disease burden, and three (30%) demonstrated progressive disease per RECIST criteria.
Narayanan et al [40] published a series of fourteen patients who underwent percutaneous IRE in 2012. Eleven (79%) of the patients had disease localized to the pancreas, one (7%) had a sub centimeter lung metastasis, one (7%) had a sub centimeter liver metastasis, and one (7%) had a solitary peritoneal metastasis. All of the procedures were performed using CT guidance and patients were discharged either the same or the next day. No grade three toxicities occurred per SIR reporting guidelines. One patient (7%) developed a pneumothorax, while two (14%) others had subclinical complications (a small hematoma seen on follow up imaging and subclinical pancreatitis). Two of the fourteen (14%) patients were able to undergo subsequent resection. The median event free survival (EFS) was 6.7 mo, and at 6 mo 70% of the patient cohort remained alive. Additionally, the projected overall survival was statistically longer for patients with localized disease as compared to those with metastatic disease (P = 0.02). No difference was seen in the overall survival between the patients who did and did not undergo resection, possibly as a result of the few deaths in the resection group.
Månsson et al [41] published a case series of five patients treated with US guided percutaneous IRE ablation. The patients all presented with jaundice and were deemed nonsurgical candidates, presumably from LAPC, although this was not specified. The patients underwent contrast enhanced US to ensure complete ablation. No grade three or higher complications occurred within the first 30 d. One (20%) patient did develop subclinical pancreatitis. Limited follow up data were presented, but 60% of patients were alive at six months, with two (40%) demonstrating no evidence of recurrence.
In 2012 Bagla et al [44] published a case report of a single patient with LAPC who was treated with US guided IRE, followed by a CT to confirm probe placement. This patient underwent two separate ablations two weeks apart due to tumor size. The patient developed liver metastases at the 3 mo follow up exam, which were subsequently treated with RFA. The patient had no evidence of recurrent disease at the 6 mo follow up exam and no significant complications were noted.
DISCUSSION
Pancreatic cancer is the fourth leading cause of cancer related death in the US [1]. Despite considerable and meaningful research into surgical techniques and chemoradiation therapy, survival rates remain poor at 27% and 6% at 1 and 5 years, respectively [1].
The majority of patients with pancreatic cancer present with unresectable disease, either due to LAPC (approximately 40%) or metastases (approximately 50%) [1]. Only approximately 10% of patients are considered surgically resectable at presentation, and unfortunately even in this group survival at 5 years is only 24% [1]. IRE appears to hold great promise for improving survival in nonresectable patients, most clearly in the LAPC group. Animal studies have shown IRE has the ability to destroy cancer cells while leaving crucial underlying anatomic scaffolding such as blood vessels and bile ducts intact [29]. This is of paramount importance given the location of the pancreas and the resultant high morbidity seen when thermal ablation techniques have been employed [19]. Human data are limited, with only 6 relatively small case series published to date. The most promising data come from the largest series by Martin et al [38], which revealed improved overall survival, progression-free survival, and distant progression-free survival when comparing patients who underwent IRE with those who underwent chemotherapy and/or radiation therapy alone. In this study the overall survival showed significant improvement, rising from 11 to 20.2 mo. This improvement of 9 mo is particularly encouraging given the notably poor prognosis of pancreatic cancer and the continued difficulty in attaining improved survival with various other novel treatment methodologies such as new chemotherapeutic agents. With early data demonstrating the possibility of prolonging overall survival by more than 6 mo, it appears that adding IRE may be of great value for patients without hope for cure. In this particular setting minimizing morbidity is the primary objective; however, as clearly demonstrated by several authors, IRE can on occasion be used to downstage patients, giving them a chance at curative therapy. The use of IRE to provide definitive therapy has also been investigated by Martin et al [38] in their attempts to expand the population of patients able to undergo R0 resections. These advances are vastly promising with regard to the treatment of pancreatic adenocarcinoma, yet they also raise several poignant questions. Currently IRE is being delivered in a range from maximally invasive (open surgical placement) to minimally invasive (percutaneous placement), with laparoscopic placement falling somewhere in between. It appears likely that both the open surgical and percutaneous placement techniques are of benefit. Open surgical placement has the best data to support its use thus far and also allows the surgeon to surgically stage the patient and consider proceeding to resection. Percutaneous placement appears to reduce morbidity and potentially hospital stay, although this point would need further clarification given the long average hospital admission of 14 d seen in the Månsson et al [43] paper. Reducing morbidity and hospital stay could be of great importance in maintaining quality of life when the disease is likely to remain unresectable and the goal is palliation. Further investigation into patient selection criteria will be essential in order to differentiate those patients best treated with open placement from those best treated with percutaneous placement. In their paper Narayanan et al [39] discussed this in brief, pointing out that certain patients, such as those with large varices, would likely not be best treated via the percutaneous approach.
Recent studies have demonstrated that stroma plays a larger than previously recognized role with regard to cancer characteristics, indicating this may be a critical area of future investigation [45-48]. Epithelial cancers such as pancreatic cancer are believed to be maximally affected by stromal cells [49]. Stromal activity limits intratumoral drug concentrations and may at least partially account for the relatively poor response to chemotherapy seen in pancreatic cancer [50,51]. Disruption of the stromal cells and the cancer cells may help improve outcomes, and to some extent explain the encouraging outcomes which have been seen in early IRE studies. This also raises the question as to whether or not IRE's potential to disrupt the stromal effect could produce better outcomes in patients presenting with limited metastatic disease as well. It also highlights the importance of investigating the possible synergistic effects IRE and chemotherapy could obtain. More data evaluating outcomes in patients with LAPC are also needed in the form of large case cohorts and, more importantly, in the form of randomized controlled trials comparing this technique to radiation and chemotherapy alone. During these investigations the delineation of patient selection will be paramount, as there is likely a group of patients that will derive a good survival benefit, while others will likely not benefit from this invasive procedure. The Martin et al [37] paper describing the use of IRE to obtain R0 resections is of marked interest; however, again, more data are needed in this newly introduced realm. In conclusion, IRE remains a new, exciting area of research in pancreatic cancer with multiple promising possible applications that will require investigation in the future.
Asymptotic-preserving Particle-In-Cell methods for the Vlasov-Maxwell system near quasi-neutrality
In this article, we design Asymptotic-Preserving Particle-In-Cell methods for the Vlasov-Maxwell system in the quasi-neutral limit, this limit being characterized by a Debye length negligible compared to the space scale of the problem. These methods are consistent discretizations of the Vlasov-Maxwell system which, in the quasi-neutral limit, remain stable and are consistent with a quasi-neutral model (in this quasi-neutral model, the electric field is computed by means of a generalized Ohm law). The derivation of Asymptotic-Preserving methods is not straightforward since the quasi-neutral model is a singular limit of the Vlasov-Maxwell model. The key step is a reformulation of the Vlasov-Maxwell system which unifies the two models in a single set of equations with a smooth transition from one to the other. As demonstrated in various and demanding numerical simulations, the Asymptotic-Preserving methods are able to treat efficiently both quasi-neutral plasmas and non-neutral plasmas, making them particularly well suited for complex problems involving dense plasmas with localized non-neutral regions.
Introduction
In a plasma, the Coulomb interaction between charged particles tends to restore charge neutrality, while the thermal motion tends to disturb it. These opposing phenomena introduce a typical length of the separation between the electron (n e) and ion (n i) densities, called the Debye length, and a typical oscillation period of the electrons, called the (electron) plasma period. These parameters depend essentially on the density and the thermal velocity of the particles. When the scales of interest are large compared to the Debye length, the charge separations may be neglected. In other words, the plasma may be assumed quasi-neutral. In that case, the Poisson equation is meaningless for the computation of the electric field. However, it is preferable to start from the standard Vlasov-Maxwell equations, which ensures condition P1, and to use the reformulated equations as a guideline to obtain schemes that also meet condition P3. This approach allows us to use discretizations with good and well-known properties. The reformulated equations show that the source term of the Maxwell-Ampère equation must be predicted using a discretization of the generalized Ohm law in which the electric field is made implicit. We consider two different discretizations of the generalized Ohm law: the first one, namely the AP-Moment discretization, is an Eulerian approximation; the other, the AP-Particle discretization, relies on a partial Lagrangian approximation (using an advance of the particles). Coupled with a suitable particle pusher, these discretizations prove to be stable in the quasi-neutral limit (condition P2). Particle-In-Cell (PIC) methods are well documented to lack consistency with the Gauss law, which can be the source of non-physical results in the numerical simulations [3,36,4]. There are several ways of enforcing the Gauss law in Particle-In-Cell methods (see [3] for a thorough review). In the present work, we focus on the elliptic correction of the electric field, which is a simple and robust procedure, and we adapt it to the Asymptotic-Preserving framework. The Asymptotic-Preserving methods are semi-implicit and share some similarities with the Direct Implicit [37,17,16,32] and Implicit Moment methods [44,10,53,45,46].
However, the aim of these methods and the way they are derived are quite different. These methods are designed to be free from the usual stability constraints on the explicit methods, in order to study large-scale phenomena, but without the aim to ensure the consistency with a well-identified asymptotic model. The organization of the paper is the following. In Section 2, we scale the Vlasov-Maxwell system, then identify the quasi-neutral model and finally reformulate the Vlasov-Maxwell in a set of equations for which the quasi-neutral model is a regular limit. These operations are performed first on the standard Vlasov-Maxwell system, then on the Vlasov-Maxwell system with elliptic correction. Section 3 is devoted to the derivation of the AP-Moment and AP-Particle schemes. Reference explicit schemes are also presented to illustrate the defects of the standard discretizations of the Vlasov-Maxwell equations. In Section 4.1, without the purpose to be exhaustive, this topic being the subject of an active research for many years, the AP schemes are compared with other semi-implicit or implicit PIC methods for the Vlasov-Maxwell equations. In Section 4.2, the AP schemes are compared with the AP schemes developed for the Vlasov-Poisson equations in [22]. Finally, in Section 5, the AP schemes are tested on various and demanding simulations: the classical Landau damping; the expansion of a plasma slab into vacuum; a one-dimensional model of POS; the propagation of a KMC wave in a two-dimensional model of POS. 2 The Vlasov-Maxwell system and its quasi-neutral limit 2 .1 The Vlasov-Maxwell system For simplicity, the ions are supposed to form a motionless and uniform background density, denoted by n i . The electron evolution is described using a distribution function f depending on the space variable x ∈ Ω x ⊂ R 3 , the microscopic velocity v ∈ Ω v ⊂ R 3 and the time t ∈ R + . The electron density n, the electrical charge and current densities, ρ and J, as well as the stress tensor S are defined from the distribution function by where m is the electron mass, E the electric field, and B the magnetic field. The electric and magnetic fields are created by the particles (self-consistent fields) and satisfy the following Maxwell equations: where c is the speed of light, µ 0 the vacuum permeability and ǫ 0 the vacuum permittivity. The above equations (1)-(5) form the so-called Vlasov-Maxwell system. Of course, this system must be supplemented with initial and boundary conditions specific to each problem. Several examples of problems with their initial and boundary conditions are presented in Section 5, devoted to numerical simulations. In the Vlasov-Maxwell system, the Maxwell-Gauss equation (4) (or Gauss law) and the Maxwell-Thomson equation (5) are actually consequences of the other three equations. The integration of the Vlasov equation (1) over Ω v gives the continuity equation which translates the conservation of the electron and ion densities. Combining this continuity equation with the divergence of the Maxwell-Ampère equation (2), we obtain Taking the divergence of the Maxwell-Faraday equation (3), we find The above two equations show that the Gauss law and Maxwell-Thomson equation hold true for t > 0 as soon as they hold true at t = 0. 
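As a small illustration of the macroscopic quantities defined above, the sketch below evaluates the density, the charge and current densities and the stress tensor from a finite sample of weighted particles, which is how a PIC code approximates the corresponding velocity integrals of f. It assumes standard kinetic conventions (charge q, mass m, statistical weights w); the function name, the choice of weights and the omission of the grid-assignment step are illustrative and not taken from the text.

```python
import numpy as np

def velocity_moments(v, w, q=-1.0, m=1.0):
    """Velocity moments of a particle-sampled distribution function.

    v : (N, 3) array of particle velocities, w : (N,) statistical weights.
    Returns the density n, the charge density rho, the current density J and
    the stress tensor S (volume normalisation is left to the caller)."""
    n = np.sum(w)                                    # n   = int f dv
    rho = q * n                                      # rho = q n
    J = q * np.sum(w[:, None] * v, axis=0)           # J   = q int v f dv
    S = m * np.einsum("p,pi,pj->ij", w, v, v)        # S   = m int v (x) v f dv
    return n, rho, J, S

# Toy usage: 10^5 equally weighted particles drawn from a unit Maxwellian.
rng = np.random.default_rng(0)
v = rng.normal(0.0, 1.0, size=(100_000, 3))
w = np.full(v.shape[0], 1.0 / v.shape[0])
n, rho, J, S = velocity_moments(v, w)
print(n, rho, J, np.diag(S))   # n ~ 1, J ~ 0, diag(S) ~ thermal pressure
```

In an actual PIC code these sums are accumulated cell by cell through the particle-to-grid assignment procedure rather than over the whole velocity sample at once.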
Scaling of the Vlasov-Maxwell system Consider a plasma characterized by the following dimensional parameters: x 0 for the space scale, t 0 for the time scale, n 0 (taken equal to n i ) for the density, v 0 (taken equal to x 0 /t 0 ) for the velocity, v th,0 for the electron thermal velocity and (E 0 , B 0 ) for the electromagnetic field. The Debye length λ D (the typical length of charge separation) and the (electron) plasma period τ p (the typical period of electron oscillations) are defined by: As explained in the introduction, a plasma is considered to be quasi-neutral when the Debye length is very small compared to the space scale at which the plasma is observed. Thus, the parameter quantifies how close to quasi-neutrality the plasma is. To investigate the quasi-neutral limit (λ → 0), we scale the Vlasov-Maxwell system so that it depends only on the parameter λ. Using the dimensionless variables the Vlasov-Maxwell system can be rewritten as (dropping the stars for the sake of readability): where The parameter M is the Mach number, η the ratio of the electric energy to the drift energy, α the ratio of the typical velocity to the speed of light and β the induction electric field to the typical electric field. The scaling relations defining the quasi-neutral regime are very similar to the most common assumptions of MHD models: λ ≪ 1, α ≪ 1, M = 1, η = 1, and β = 1. A vanishing dimensionless Debye length λ → 0 provides the quasi-neutrality assumption thanks to the Gauss law. The choice M = η = 1 means that the drift energy, the thermal energy and the electric energy remain the same order of magnitude. Furthermore, this choice implies τ p /t 0 = λ, which means that the parameter λ controls the smallness of the plasma period with respect to the time scale of the problem. Therefore, this quasi-neutral regime is a low frequency asymptotic, which explains the stability issues on the time step encountered by standard explicit discretizations. The identity β = 1 is related to the so-called "frozen-field" assumption, translating the property that, in a dense plasma, the magnetic field is convected with the plasma flow. The quasi-neutral regime is also inconsistent with the propagation of electromagnetic waves at the speed of light, hence α ≪ 1. This defines another small scale, which is identified to λ in the sequel, so that the re-scaled system becomes: The quasi-neutral model The scaling of the Vlasov-Maxwell system allows us to identify (formally) the quasi-neutral limit of the Vlasov-Maxwell system. Setting λ = 0 in (14)- (18), we obtain The singular nature of the quasi-neutral limit appears clearly: both the Maxwell-Ampère equation (20) and the Gauss law (22) degenerate. The electric field E can no longer be computed explicitly with the degenerate Maxwell-Ampère equation (20). Taking the time derivative of (20) together with the curl of (21), we obtain the identity which provides a means of computing the solenoidal part of the electric field. But the Gauss law, which is degenerate, does not allow us to compute the irrotational part of the electric field. While the evolution of the electric field is governed by the displacement current in the Maxwell regime (i.e. when the sources do not dominate), it is given by the particles current in the quasi neutral limit. In this latter regime, an auxiliary equation is required to derive an Ohm law and explicit the dependence of the right hand side of (24) with respect to the electric field. 
It is thus necessary to use moments of the Vlasov equation (as in [26,18,23,5,22,30]). Multiplying the Vlasov equation by v, then integrating over Ω v , we find the following relation on the current density, which can be viewed as a generalized Ohm law, Using this relation together with (32), we obtain an equation that determines entirely E: Finally, provided that ∇ × B = J at the initial time, the quasi-neutral model can be (formally) rewritten as Reformulation of the Vlasov-Maxwell system The purpose of this section is to manufacture a set of equations, equivalent to the original Vlasov-Maxwell system (14)- (18), for which the quasi-neutral limit is regular. In other words, the quasi-neutral model (27)-(31) must be recovered when λ is set to zero in this new set of equations. To obtain such equations, we reproduce on (14)-(18) the operations performed in the previous section on (19)- (23). The time derivative of (15) together with the curl of (16) yield Then, combining the above relation with the generalized Ohm law (25), we find Finally, the reformulated Vlasov-Maxwell system is This system is equivalent to the Vlasov-Maxwell system (14)- (18) provided that the Maxwell-Ampère equation (15) is satisfied at the initial time. Enforcement of the Gauss law In standard Particle-In-Cell methods, the source terms of the Maxwell equations, namely the discrete charge and current densities, are computed using a particle-to-grid assignment. The grid quantities do not satisfy the discrete equivalent of the continuity equation and thus the Gauss law is not enforced, which can be the source of non-physical results in the numerical simulations, as pointed out in [6,3]. Different procedures are successfully used to correct this deficiency; we refer to [3] for a thorough review. Two main approaches can be identified. The first one consists in computing a correction of the electric field, this correction being the solution of an elliptic equation (or, in some variants, a parabolic or hyperbolic equation). The second approach modifies the particle-to-grid assignement so that the charge and current densities satisfy the discrete continuity equation. It is interesting to focus on the first approach, and more specifically on the elliptic correction, because the Gauss law, used to compute the correction, degenerates in the quasi-neutral limit. Therefore an adaptation of the elliptic corection needs to be provided. A rigorous way to add the elliptic correction is to consider a generalized formulation of the Maxwell equations in which the Gauss law is explicitly enforced by means of a correction field (a Lagrange multiplier) [4]. The generalized formulation that we use here, also called Boris correction [8], is slightly different, though equivalent, to the one proposed in [4]. This formulation involves an electric fieldẼ which does not satisfy the Gauss law, a corrected electric field E which does satisfy the Gauss law and a correction field p: The boundary conditions prescribed on p are chosen to be compatible with those on E. The generalized formulation is well-posed even if the continuity equation is not satisfied at each time t and is equivalent to the standard Maxwell system as soon as the continuity equation is satisfied at each time t. Combining (42) and (44), we deduce that p is the solution of the elliptic equation When the continuity equation holds true at each time t, the generalized equations (39)- (44) are equivalent to the standard equations (14)- (18). 
Indeed, in this case, the electric field Ẽ satisfies the Gauss law λ^2 ∇ · Ẽ = 1 − n. Therefore, p vanishes and E = Ẽ. At the time-discretized level, the generalized formulation provides us with an obvious way to compute an electric field satisfying the Gauss law. First, an electric field is computed using (40)-(41); then, this electric field is corrected using (44) and (45). If we add the elliptic correction to the quasi-neutral model, the degeneracy of the Gauss law prevents us from computing p directly. To obtain an equation on p, we again need to use the moments of the Vlasov equation. However, the continuity equation is assumed to be not exactly satisfied by the moments of the distribution function. Following the spirit of the Boris correction, this inconsistency is corrected in the equation by means of an electrostatic deviation of the electric field, giving rise to a modified continuity equation. Introducing these definitions into the time derivative of the Gauss law yields an equation which is finally equivalent, assuming that the Gauss law is satisfied at the initial time, to equation (48).
Remark 2.2. The source term of this equation is the difference between the time evolution of the density produced by the Vlasov equation and that of the electric field divergence predicted from the moments. This point will be clarified with the time semi-discretization introduced in the sequel (see Section 3.3.1). The equation (48) provides a means of preserving the consistency with the Gauss law whatever the value of λ and is therefore AP in the quasi-neutral limit. The quasi-neutral model with correction and, similarly, the reformulated Vlasov-Maxwell system with correction can thus be restated with these additional correction terms.
Asymptotic-Preserving schemes
Now that the Vlasov-Maxwell system has been scaled and the quasi-neutral model has been identified, we can state rigorously the properties that an Asymptotic-Preserving discretization must satisfy.
P2. The stability conditions on the time step and the mesh size do not depend on λ.
In this section, we derive two Asymptotic-Preserving schemes, called the AP-Moment scheme and the AP-Particle scheme. They use a Yee finite-difference approximation for the fields [50] (with a regular rectilinear grid) and standard assignment-interpolation procedures for the particles, such as the nearest grid point or cloud-in-cell procedures [6,33]. Other choices could have been made: finite volumes or finite elements instead of finite differences, for instance. The schemes are presented in a three-dimensional spatial setting and, for simplicity, the domain is assumed to be a rectangular parallelepiped with periodic boundary conditions. It is straightforward to derive one-dimensional and two-dimensional versions of the schemes. We also present two reference explicit schemes. These schemes illustrate the defects of the standard discretizations of the Vlasov-Maxwell equations and will be compared to the Asymptotic-Preserving schemes in the numerical tests. Finally, we derive the versions of the AP schemes used in the electrostatic regime.
3.1 Definitions and notation
Discrete fields and discrete vector calculus operators
We consider different kinds of discrete fields on the grid (which is rectilinear and regular): primal and dual scalar fields, edge scalar fields, primal and dual vector fields, and primal symmetric second-order tensor fields.
• The values of a primal scalar field are located at the vertices of the cells, while the values of a dual field are located at the centers of the cells.
The values of an edge scalar field are located at the center of the edges. • The components of a primal vector field are located at the center of the edges: the x-, y-, and zcomponents are located at the edges oriented in the x-, y-, and z-direction, respectively. The components of a dual vector field are located at the center of the faces: the x-, y-, and z-components are located at the faces normal to the x-, y-, and z-direction, respectively. • The diagonal components of a primal symmetric second-order tensor field are located at the vertices of the grid. The xy-, xz-and yz-components are located at the center of the faces normal to the x-direction, y-direction, and z-direction, respectively. Discrete differential operators can be defined on the discrete fields defined above by using central finite differences (and assuming periodic boundary conditions). • A discrete curl operator ∇ h × is defined for the primal and dual vector fields. When applied to a primal vector field (resp. dual vector field), the discrete curl operator yields a dual vector field (resp. primal vector field). Furthermore, if F h is a primal vector field and G h a dual vector field, then • A discrete divergence operator ∇ h · is defined for the primal and dual vector fields. When applied to a primal vector field (resp. a dual vector field), the discrete divergence operator yields a primal scalar (resp. a dual scalar field). If F h is a primal or a dual field, then • A discrete gradient operator ∇ h is defined for the primal and dual scalar fields. When applied to a primal scalar field (resp. a dual scalar field), the discrete gradient operator yields a primal vector (resp. a dual vector field). • A discrete divergence operator ∇ h · is defined for the primal tensor field. It yields a primal vector field. • A discrete cross product × h between a primal vector field and a dual vector field is defined. It yields a primal vector field. Such a discrete operator is built using local averages. Notation • The grid spacings in the x-, y-, and z-direction are denoted by ∆x, ∆y, and ∆z, respectively. Let • The time interval is discretized with a uniform time step ∆t. Let t γ = γ∆t, for any γ ∈ R + . • The discrete electric and magnetic fields at time t γ are denoted by E γ h and B γ h . The discrete electric field is a primal vector field while the discrete magnetic field is dual vector field. The discrete correction field, denoted by p h , is a primal scalar field. • Let N be the number of particles. The vectors containing the position and the velocity of the particles at time t γ are denoted by X γ N and V γ N , respectively. The position and velocity vectors of the jth particle at time t γ are denoted by X γ N,j and V γ N,j , respectively. • The value of a field F h interpolated at the position X N,j is denoted by F h (X N,j ). • The discrete electron density accumulated from the particles at position X N as a primal scalar field (resp. an edge scalar field) is denoted by n h (X N ) (resp.n h (X N )). The discrete current accumulated from the particles of position X N and velocity V N as a primal vector field is denoted by The discrete second-order moment accumulated from the particles of position X N and velocity V N as a primal tensor is denoted by S h (X N , V N ). 
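To make the staggered discrete operators of this section more tangible, the following sketch builds forward-difference curl and divergence operators on a periodic grid and checks that the discrete divergence of a discrete curl vanishes to machine precision, one of the structural properties such Yee-type staggerings are designed to preserve. The index-placement conventions and helper names are assumptions made for the illustration; the authoritative operator definitions are the ones stated above.

```python
import numpy as np

def dplus(A, axis, h):
    """Forward difference with periodic wrap-around: (A[i+1] - A[i]) / h."""
    return (np.roll(A, -1, axis=axis) - A) / h

def curl_h(F, hx, hy, hz):
    """Discrete curl of an edge-centred (primal) vector field F = (Fx, Fy, Fz);
    the result lives on the faces (a dual vector field), Yee-style."""
    Fx, Fy, Fz = F
    Cx = dplus(Fz, 1, hy) - dplus(Fy, 2, hz)
    Cy = dplus(Fx, 2, hz) - dplus(Fz, 0, hx)
    Cz = dplus(Fy, 0, hx) - dplus(Fx, 1, hy)
    return Cx, Cy, Cz

def div_h(F, hx, hy, hz):
    """Discrete divergence of a face-centred (dual) vector field; the result
    is a cell-centred (dual) scalar field."""
    Fx, Fy, Fz = F
    return dplus(Fx, 0, hx) + dplus(Fy, 1, hy) + dplus(Fz, 2, hz)

# Sanity check: div_h(curl_h(F)) vanishes identically on the staggered grid.
rng = np.random.default_rng(1)
shape, h = (16, 16, 16), 0.1
F = tuple(rng.standard_normal(shape) for _ in range(3))
print(np.abs(div_h(curl_h(F, h, h, h), h, h, h)).max())   # ~ 1e-15
```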
Reference explicit schemes
The first reference scheme combines a Boris scheme for advancing the particles and a leap-frog discretization for the Maxwell equations [6, Chapter 15]. The above equations are solved in the order (66), (64), (63), (65), (68), (67), so that the scheme is fully explicit. This scheme is subject to a number of stability conditions that are given below (between brackets, the conditions are expressed with dimensional parameters and without the scaling assumptions). The time step must resolve the plasma period (72). The grid spacing must resolve the Debye length (73), where ζ is a parameter depending on the assignment-interpolation procedure; otherwise, aliasing will heat up the plasma. Furthermore, the time step and the grid spacing must satisfy a Courant condition (74) involving the velocity of the electrons, and another one (75) involving the speed of light. The first three constraints are due to the explicit discretization of the particle motion, while the last constraint is due to the explicit discretization of the Maxwell equations. Note that the constant in the right-hand side of the Courant condition (75) is specific to the Yee finite-difference discretization [50]. Because of the stability conditions (72) and (73), the above scheme is unstable in the quasi-neutral limit. Therefore, it does not satisfy condition P2, and hence is not Asymptotic-Preserving. In the second reference scheme, the leap-frog discretization of the Maxwell equations (65)-(66) is replaced by an implicit θ-scheme (with 1/2 ≤ θ ≤ 1), but the sources remain explicit. The θ-scheme is unconditionally stable for 1/2 ≤ θ ≤ 1. Therefore, this second scheme is not subject to the stability condition (75). However, it is still subject to the conditions (72) and (73), and hence is not Asymptotic-Preserving. The computation of Ẽ m+1 h and B m+1 h requires the solution of a linear system, which makes this scheme slightly more costly than the first one. Implicit schemes other than the θ-scheme could have been used; see for instance [2,9,11]. The properties of the θ-scheme (stability, energy conservation, dispersion) are recalled in Appendix A.
General framework
To derive AP schemes, it is preferable to start from the standard Vlasov-Maxwell equations (39)-(44) rather than from the reformulated equations (55)-(60). This allows us to use discretizations with good and well-known properties. The reformulation operations described in Sections 2.4 and 2.5 are however used as a guideline to obtain schemes consistent with the quasi-neutral model. In this section, we derive the common general structure of the two AP schemes, called the AP-Moment and AP-Particle schemes. Their specificities are addressed in the next two sections.
1. First, the Maxwell equations are discretized using a θ-scheme (with 1/2 ≤ θ ≤ 1). An implicit discretization of the Maxwell equations is needed to avoid a Courant condition similar to (75). The current J m+1 h used as a source term in the discrete Maxwell-Ampère equation (78) is defined by an approximation of the generalized Ohm law (25). It is crucial to make the electric field implicit in this approximation to ensure consistency with the quasi-neutral model. We consider two types of approximation. The first one, called AP-Moment, is based on an Eulerian integration. The second one, called AP-Particle, relies on a partial Lagrangian approximation (using an advance of the particles).
In both approximations, the current J m+1 h can be written in a form where A m h is a linear operator defined for any fields E h and B h. In practice, the electric field Ẽ m+1 h and the magnetic field B m+1 h are computed by solving this linear system. It is invertible and well-conditioned even when λ ≪ ∆t and λ ≪ h. Indeed, using the identity (61), we obtain the corresponding inequality. When λ ≪ ∆t and λ ≪ h, the density n h (X m N) is expected to be close to 1 (the charge separation is negligible at scales far larger than λ). Therefore, the quantity (λ^2/∆t^2) + min n h (X m N) is close to 1.
2. In a second step, the electric field is corrected. Equations (42) and (44) are discretized accordingly. The charge density n m+1 h used as a source term in (83) is built using an approximation of the moment equations (46) and (47). In these equations, n m+1 h is a primal scalar field and J m+1 h is a primal vector field. It is essential to make the electric field implicit in this approximation to guarantee consistency with the quasi-neutral model. From (83)-(86), we deduce that the correction p h is the solution of an elliptic equation, (87). Just as the linear system (81), the linear system (87) is invertible and well-conditioned even when λ ≪ ∆t and λ ≪ h. Using the divergence of (78), it can be rewritten in the simpler form (88). Note that the right-hand side of (88) evaluates the inconsistency of the Gauss law at time t m, which amounts to the difference between the density accumulated from the particles and the divergence of the electric field implicitly predicted at the previous time step. In practice, the correction field p h is computed with (88), then the electric field is corrected with (84).
3. Finally, the particles are advanced with a Boris-like scheme. With this particle pusher, the scheme is subject to the Courant condition (74). However, it is not subject to conditions (72) and (73) and thus satisfies Property P2. Moreover, it presents favorable conservation properties. The choice of the particle pusher is discussed in more detail in Appendix B.
AP-Moment scheme
The AP-Moment scheme relies on an Eulerian approximation of the generalized Ohm law (25), in which the electric field at the advanced time level is kept implicit. The resulting scheme is consistent with the reformulated equations, as the following argument shows.
Proof. For simplicity, we fix θ = 1. First, the system (81)-(82) is combined with the assumption that the Ampère law is satisfied at the previous time step; the resulting approximation defines a consistent discretization of the reformulated Ampère law (34). Next, assuming that the Gauss law as well as the continuity equation are satisfied at the previous time step, the equations (83)-(86) and (95) provide a time discretization of the reformulated Gauss law (58), provided that the corrections at time levels m and m − 1 vanish.
AP-Particle scheme
In the AP-Particle scheme, the contributions of J × B and ∇ · S in the generalized Ohm law (25) are approximated with a particle advance.
Proof. When the number of particles is sufficiently large, the corresponding approximation is valid. This proves that the AP-Particle scheme shares the same consistency properties as the AP-Moment scheme.
Fully implicit, Direct Implicit and Implicit Moment methods
Some level of implicitness is required in the discretization of the Vlasov-Maxwell equations to obtain AP schemes. It is needed to ensure the consistency with the quasi-neutral model and the stability in the quasi-neutral limit. Other implicit kinetic methods have been developed since the 1980s.
Designed for the simulation of large-scale phenomena, they are derived to relax the main stability conditions that explicit methods must satisfy. We refer to [39] for a recent review. In this section, we discuss the AP character of the three main classes of implicit methods (the fully implicit methods, the Implicit Moment methods and the Direct Implicit methods) and compare them with the AP-Moment and AP-Particle schemes. Theoretically, a fully implicit discretization of the Vlasov-Maxwell system is stable for any discretization parameters. The resulting problem is a huge system of coupled nonlinear equations (the particle equations and the field equations). Impressive realizations have been achieved in the past few years with the use of Jacobian-Free-Newton-Krylov solvers, preconditioning, optimized and often massively parallel implementations [13,14,15,42,51]. Nevertheless, fully implicit discretizations are still too costly for multi-dimensional simulations. In fully implicit discretizations, the electric field at the advanced time level E n+1 h occurs in the discretization of the terms ∂E/∂t and ∇ × E in the Maxwell equations and is also involved in the definition of the source terms, via the particle equations. Therefore the consistency with the quasi-neutral model is recovered when the coupled system is solved. The Implicit Moment methods [44,10,53,45,52,46,40,43] are semi-implicit methods. They decouple the field equations from the particle equations by using macroscopic evolution equations on ρ and J to predict the sources of the field equations at the advanced time level ρ n+1 h and J n+1 h . This approach reduces dramatically the computational cost compared to fully implicit methods, while retaining favorable stability properties. The use of moment equations to predict the sources at the advanced time level is a common point with the AP-Moment and AP-Particle schemes. In particular, the discretization of these moment equations is very similar in the Implicit Moment methods and the AP-Moment scheme. The Direct Implicit methods [17,37,16,32] are also semi-implicit. They enjoy the same accuracy and stability properties as the Implicit Moment methods. They rely on a linearization of the fully implicit discretization, which decouple the field equations from the particle equations. The sources of the field equations are linearized around explicitly extrapolated positions of the particles. The source prediction in the Direct Implicit methods can actually be interpreted as an approximation of the moment equations, comparable to the one used in the AP-Particle scheme. The extrapolation step plays essentially the same role as the first particle advance in the AP-Particle scheme (though being more intricate in the Direct Implicit approach). From this brief review, we can infer that the fully implicit, Implicit Moment and Direct Implicit methods are generally AP in the quasi-neutral limit. The fully implicit methods are far more costly than the AP-Moment and AP-Particle schemes. The Implicit Moment and Direct Implicit methods share some similarities with the AP-Moment and AP-Particle schemes in their formulation. However, their motivation and the methodology used to derive them differ significantly. The aim of the AP-Moment and AP-Particle schemes is to be consistent with a clearly defined quasi-neutral model, not to relax stability conditions. 
The elimination of the stability conditions related to the plasma period and the speed of light is a consequence of scaling assumptions made in Section 2.2 for the definition of the quasi-neutral limit (the scaled plasma period and the ratio of the typical velocity to the speed of light vanish in the quasi-neutral limit). The derivation of the AP-Moment and AP-Particle schemes relies on a reformulation of the Vlasov-Maxwell system which unifies the Vlasov-Maxwell model and the quasi-neutral model in a single set of equations. This reformulation highlights the terms that need to be built or implicited in order to ensure consistency with the quasi-neutral model. This methodology allows us to limit the computational cost of the schemes to what is necessary in view of the AP property. Furthermore, this methodology could be applied to quasi-neutral models with a more reduced complexity, leading to more efficient numerical methods for some kinds of problems. AP-Moment and AP-Particle schemes in the electrostatic regime The electrostatic regime is characterized by a vanishing magnetic field. In the dimensionless system (9)- (13), this amounts to the asymptotic β → 0. In this regime, the electric field is irrotational, since the Maxwell-Faraday equation (11) simplifies into ∇ × E = 0, and it is assumed to derive from a scalar potential (there exists a scalar field φ such that E = −∇φ). As a consequence, the Gauss law is sufficient to determine completely the electric field and the Vlasov-Maxwell system reduces to the Vlasov-Poisson system: Let us examine the AP-Moment and AP-Particle schemes in the electrostatic regime. Setting the magnetic field to zero, they read with, for the AP-Moment scheme, and, for the AP-Particle scheme, In a one-dimensional spatial setting, the AP-Moment and AP-Particle schemes can be simplified further. Indeed, for any primal vector field F h , there is a primal scalar field φ h such that This equation determines entirely the potential φ m+1 h and thus the electric field E m+1 h . Therefore, the Maxwell-Ampère equation can be disregarded and the schemes reduce to the equations (106), (107) and (110). We remark that the AP-Moment scheme is, in this case, equivalent to the PIC-AP2 scheme introduced in [22]. Numerical simulations In order to assess their efficiency and investigate their properties, the AP schemes are tested on various problems and compared with reference explicit schemes (those described in Section 3.2). The first two problems, namely the classical Landau damping and the expansion of a plasma slab into vacuum, are onedimensional in space and velocity and purely electrostatic. The two other problems, which describe a POS (Plasma Opening Switch), involve magnetized plasmas. The first one is a very simplified model of POS (onedimensional in space and two-dimensional in velocity); the second one, more realistic, is two-dimensional in space and velocity and shows the propagation of a KMC wave. The framework of the above problems is not exactly the general framework described in Sections 2 and 3 (the problems are one-dimensional or two-dimensional, the magnetic field is sometimes disregarded, the ions are not always motionless). The numerical schemes are implemented accordingly. These developments being straightforward, we do not detail them. However, for sake of clarity, we state the version of the Vlasov-Maxwell system used for each problem. 
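Before turning to the test cases, the sketch below gathers the elementary building blocks of a one-dimensional, periodic, electrostatic PIC cycle in the scaled units used here (λ = 1): cloud-in-cell deposition, an FFT-based solve of the Gauss law −φ'' = 1 − n_e with a fixed neutralizing ion background, field gathering, and an explicit leap-frog push of the electrons. This is a reference-style explicit cycle, not the AP-Moment or AP-Particle update, in which the electric field would be made implicit through the discrete equation (110); the function names and the normalization convention (particle weights summing to the domain length, so that the mean electron density is one) are assumptions made for the illustration.

```python
import numpy as np

def cic_deposit(x, w, ng, L):
    """Cloud-in-cell assignment of particles at positions x (weights w) onto a
    periodic grid of ng cells with spacing dx = L / ng; returns a density."""
    dx = L / ng
    xi = x / dx
    i0 = np.floor(xi).astype(int) % ng
    frac = xi - np.floor(xi)
    n = np.zeros(ng)
    np.add.at(n, i0, w * (1.0 - frac))
    np.add.at(n, (i0 + 1) % ng, w * frac)
    return n / dx

def poisson_periodic(rhs, L):
    """Solve -phi'' = rhs with periodic boundary conditions via FFT
    (the zero mode is dropped, which fixes the additive gauge of phi)."""
    ng = rhs.size
    k = 2.0 * np.pi * np.fft.fftfreq(ng, d=L / ng)
    rhs_hat = np.fft.fft(rhs)
    phi_hat = np.zeros_like(rhs_hat)
    phi_hat[1:] = rhs_hat[1:] / k[1:] ** 2
    return np.real(np.fft.ifft(phi_hat))

def cic_gather(field, x, ng, L):
    """Interpolate a grid field back to the particle positions (CIC)."""
    dx = L / ng
    xi = x / dx
    i0 = np.floor(xi).astype(int) % ng
    frac = xi - np.floor(xi)
    return field[i0] * (1.0 - frac) + field[(i0 + 1) % ng] * frac

def electric_field(phi, L):
    """E = -dphi/dx with centred second-order differences (periodic)."""
    dx = L / phi.size
    return -(np.roll(phi, -1) - np.roll(phi, 1)) / (2.0 * dx)

def explicit_step(x, v, w, ng, L, dt):
    """One explicit leap-frog step of the electron-only electrostatic cycle;
    weights are assumed to sum to L so that the mean electron density is 1."""
    n_e = cic_deposit(x, w, ng, L)
    phi = poisson_periodic(1.0 - n_e, L)         # Gauss law with lambda = 1, ions = 1
    E = electric_field(phi, L)
    v = v - dt * cic_gather(E, x, ng, L)         # electron charge -1, mass 1 (scaled)
    x = (x + dt * v) % L
    return x, v
```

Replacing the explicit field computation in this cycle by the implicit, Ohm-law-based prediction is precisely what distinguishes the AP schemes from this reference version.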
All the simulations are performed with the cloud-in-cell assignment-interpolation method and the parameter θ is always taken equal to 1 in the θ-schemes.

Landau damping

When a plasma in a spatially homogeneous equilibrium state is slightly perturbed, it returns exponentially fast, with oscillations, toward its initial equilibrium state. This is the so-called Landau damping. This problem allows us to test the ability of the AP schemes to accurately reproduce phenomena occurring at the Debye length and plasma period scales. The non-neutral model used for this problem is a one-species, one-dimensional, electrostatic model (the magnetic field is disregarded, the ions are motionless and form a uniform background). The equations are scaled as in Section 2.2. The space domain is [0, 4π] and the scaled Debye length λ is taken equal to 1. The initial electron density follows a Maxwellian distribution with a small spatial perturbation of amplitude α = 5·10^−2. Periodic and homogeneous Dirichlet boundary conditions are prescribed for the particles and the electric field, respectively. An analytical approximation of the electric field can be computed by applying a Laplace-Fourier transform to the linearized system; see [20] for the detailed calculation. Keeping only the dominating mode, the others being quickly damped, the approximation (114) is obtained. Numerical simulations are performed with discretization parameters smaller than the Debye length and the plasma period. The numerical results, together with the analytical approximations given by the formula (114), are represented in Figure 1. We observe that the AP schemes reproduce quite accurately the oscillation period of the Landau damping, but they are more dissipative than the reference explicit scheme (the first one described in Section 3.2). Note that the discrepancy between the numerical results and the analytical approximation at the first oscillation is due to the Laplace-Fourier modes neglected in the analytical approximation. The conservation properties of the AP schemes for the plasma oscillations could probably be improved with a more careful design of the time discretization, with centered or high-order approximations instead of backward approximations.

Plasma expansion

The second problem is the expansion of a plasma slab [28,22,41]. The plasma is initially confined in a small area at the center of the domain and surrounded by vacuum. A non-neutral sheath forms at the plasma-vacuum transition and the large electric field created in this sheath accelerates the ions, leading to the plasma expansion. While the quasi-neutral model is able to account for the ion motion and the plasma expansion in the plasma bulk, it is inaccurate in the sheath. Therefore, this problem allows us to verify, on the one hand, the consistency of the AP schemes with the quasi-neutral model and their stability in the quasi-neutral regime and, on the other hand, the consistency with the non-neutral plasma description, with the transition from one regime to the other. It is also an excellent test for the energy conservation properties of the schemes since it relies on a kinetic energy transfer from the electrons to the ions via the electric field created in the sheath. The non-neutral model used for this problem is a two-species, one-dimensional, electrostatic model (the magnetic field is disregarded, the ions are not motionless).
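Returning briefly to the Landau damping test above, its perturbed initial condition can be sampled for a particle method along the following lines. This is a sketch under our own assumptions: a single-mode perturbation of wavenumber k0 = 2π/L and a simple rejection step, neither of which is specified in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_landau_initial_condition(n_particles, L=4 * np.pi, alpha=5e-2, v_th=1.0):
    """Sample positions from 1 + alpha*cos(k0*x) (k0 = 2*pi/L, assumed) by rejection,
    and velocities from a Maxwellian of thermal speed v_th."""
    k0 = 2.0 * np.pi / L
    x = np.empty(n_particles)
    filled = 0
    while filled < n_particles:
        cand = rng.uniform(0.0, L, n_particles - filled)
        # accept with probability (1 + alpha*cos(k0*x)) / (1 + alpha)
        keep = rng.uniform(0.0, 1.0 + alpha, cand.size) < 1.0 + alpha * np.cos(k0 * cand)
        accepted = cand[keep]
        x[filled:filled + accepted.size] = accepted
        filled += accepted.size
    v = rng.normal(0.0, v_th, n_particles)
    return x, v

x, v = sample_landau_initial_condition(100_000)
```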
This test case is stated and implemented in dimensional variables; the indices i and e denote the quantities related to the ions and the electrons, respectively. The space domain is [−L, L] with L = 1 m. The problem being symmetric, the simulations are actually performed only on the half domain [0, L]. The ion mass is such that m_i = 1836 m_e. The initial ion density is equal to n_i0 for x ∈ [0, 0.02] and is zero in the rest of the domain (different values of n_i0 are used in the simulations). The initial electron density n_e0 satisfies the Maxwell-Boltzmann relation n_e0 = n_i0 exp(φ_0), where φ_0 is the electrostatic potential, solution of −Δφ_0 = e(n_i0 − n_e0)/ε_0 (this nonlinear problem is solved numerically). The initial electron and ion velocities follow Maxwellian distributions with zero mean velocities and respective temperatures T_e and T_i. The electron temperature T_e is chosen so that the initial electron thermal velocity is equal to v_th,e = 1 m·s^−1 and the ion temperature is such that T_i = 10^−3 T_e. To represent the symmetry at the left end of the half domain, a specular reflection condition is prescribed for the distribution functions (i.e. exiting particles are reinjected with reversed velocities) and a homogeneous Dirichlet condition is enforced on the electric field. At the right end, an absorbing condition is enforced on the distribution functions (i.e. exiting particles are not reinjected) and a homogeneous Neumann condition is enforced on the electric field. In addition to the time and space scales related to the electron motion, this problem involves time scales related to the ion motion, namely the ion plasma period τ_pi = (m_i ε_0/(e² n_i0))^{1/2} and the ion acoustic wave speed c_s = (k_B T_e/m_i)^{1/2}. The speed of the plasma expansion is theoretically of the same order as c_s [28]. A first series of simulations is performed with a low-density plasma (the initial density n_i0 is adjusted to obtain λ_D = 10^−3 m and τ_p = 10^−3 s). The mesh size and the time step are smaller than the Debye length and the plasma period. The numerical results are represented in Figure 2 and, to facilitate the comparisons, they are expressed with the same normalized quantities as in [28,22]. The AP schemes reproduce correctly the physics of the plasma expansion: the plasma reaches the position x_T ≈ 136 λ_D at the end of the simulation, there is a large electric field at the plasma edge, and the (thermal) electron kinetic energy is converted into (drift) ion kinetic energy during the simulation. The total energy of the system is conserved, which demonstrates the good conservation properties of the schemes. Furthermore, the results of the AP schemes are almost identical to the results of the reference explicit scheme (the first one described in Section 3.2) and are in agreement with the results of [28,22]. A second series of simulations is performed with a high-density plasma (the initial density n_i0 is adjusted to get λ_D = 10^−4 m and τ_p = 10^−4 s). The reference explicit scheme is still used with discretization parameters resolving the Debye length and the plasma period, whereas the AP schemes are now used with discretization parameters significantly larger than the Debye length and the plasma period (the number of particles is also reduced). The Maxwell-Boltzmann relation used in the previous simulations to compute the initial electron density turns out to be inaccurate with a coarse mesh.
It is thus better to make the initial electron density simply equal to the ion density. The numerical results are represented in Figure 3. As in the previous simulations, the AP schemes reproduce correctly the physics of the plasma expansion and the energy conservation is satisfactory. The results provided by the AP schemes are even quite close to the results obtained with the reference explicit scheme, despite the large difference in computational cost. Note that, in Figure 3(a), the oscillations in the reference scheme results are not physical and are due to the insufficient number of particles. The outputs of the AP-Moment scheme and the AP-Particle scheme are similar, but the discretization used for the AP-Moment scheme is finer. This shows that the AP-Particle scheme is less diffusive than the AP-Moment scheme and provides a better tracking of the plasma-vacuum interface. Finally, these simulations demonstrate the ability of the AP schemes to simulate quasi-neutral problems with moderate computational costs. However, it is not possible to use too coarse discretizations, owing to the deterioration of the energy conservation. This is a well-known drawback of the semi-implicit methods [25], due to fast particles crossing more than one cell in a time step. This point should be investigated in subsequent work.

A one-dimensional model of POS

A Plasma Opening Switch (POS) is a device used to deliver a large current with a rapid increase of its impedance [24]. It consists of a coaxial cylindrical transmission line filled with a high-density plasma and connected to an input power generator [55]. In a first phase, the conduction phase, the plasma short-circuits the two electrodes of the transmission line and prevents the power from being delivered to the load. Then, the interaction of the electromagnetic wave with the plasma leads to the formation of a vacuum gap and finally to the total opening of the plasma, making possible the transmission of the current to the load. During the opening of the plasma, charge separation phenomena are essential and non-neutral sheaths appear at the plasma edge [27,47]. A very simplified model of POS, one-dimensional in space and two-dimensional in velocity, is considered in this section. This model has been introduced in [23]. Let us denote by (x, y, z) the three-dimensional Cartesian coordinates. The particles move only along the x-direction but have a velocity both in the x-direction and the y-direction, so that the electromagnetic field generated by the particles has only components E_x, E_y and B_z. Therefore, the two-species Vlasov-Maxwell model reduces to a set of equations in these variables, in which the incident-field parameters are A_inc = 1.8·10^8 V·m^−1 and t_inc = 10^−8 s. Transparent boundary conditions are prescribed at each end of the domain to avoid wave reflections. Four series of simulations are performed (their main characteristics are collected in Table 1). In the Low-a simulations, the AP schemes and the reference explicit scheme (the second one described in Section 3.2) are used with discretization parameters that resolve the Debye length and the plasma period. The numerical results, represented in Figure 4, are comparable for the three schemes. In particular, the level of numerical noise is equivalent. We observe that the electrons and the ions are accelerated by the incident wave (see Figures 4(c) and 4(d)).
The electrons being less massive than the ions, they are accelerated more strongly and are expelled from the plasma edge, which breaks the quasi-neutrality in this zone (see Figures 4(a) and 4(b)). This charge separation creates a large electric field E_x (see Figure 4(e)). As for the magnetic field, it is significantly transmitted through the plasma (see Figure 4(f)). The Low-b simulations deal with a higher initial plasma density. The reference explicit scheme still uses discretization parameters that resolve the Debye length and the plasma period, while the AP schemes use a mesh size 20 times larger than the Debye length and a reduced number of particles. Despite the huge difference in computational cost between the simulations, the numerical results are indistinguishable.

Table 1: One-dimensional model of POS. Plasma parameters and discretization parameters used in the different simulations. The initial density n_0 is in m^−3, the initial Debye length λ_D in m and the plasma period τ_p in s. The value N_p is the total number of particles.

For the high-density cases (High-a and High-b), the computational cost of the reference explicit scheme is prohibitive, so that no simulation is carried out with this scheme. The AP schemes are used with very coarse discretizations. For the most demanding case, the mesh size is 10^4 times larger than the Debye length, the time step is 100 times larger than the plasma period and the number of numerical particles is only 10^5. The numerical results are shown in Figure 6. In contrast to the low-density simulations, the incident magnetic field is almost entirely reflected at the plasma edge (due to the very large electron current). The level of numerical noise in the outputs remains moderate, even in the High-b simulation. These simulations demonstrate the ability of the AP schemes to handle high-density plasmas and vacuum-plasma transitions with very coarse discretizations.

Propagation of a KMC wave in a POS

We now consider a two-dimensional model of POS where the plasma density is not uniform in the direction transverse to the electromagnetic wave propagation. In this configuration a magnetic shock wave, the so-called KMC wave, propagates into the plasma [49]. The existence of KMC waves can be derived from the quasi-neutral equations (see Appendix C). Consequently, their simulation offers a means to verify the consistency with the quasi-neutral limit in an electromagnetic context. The model is a one-species, two-dimensional model. The ions are motionless but their density is not uniform. Let us denote by (x, y, z) the three-dimensional Cartesian coordinates. The particle motion is restricted to the (x, y)-plane and thus the electromagnetic field generated by the particles has only components E_x, E_y, and B_z; the Vlasov-Maxwell system simplifies accordingly.

Table 2: Propagation of a KMC wave in a POS. Plasma parameters and discretization parameters used in the different simulations. The initial densities n_min and n_max are in m^−3, the electron temperature T_e in eV, the initial Debye length λ_D in m, and the theoretical KMC wave speed V_KMC in m·s^−1. The value N_p is the number of particles.

Config | n_min  | n_max  | T_e     | λ_D        | V_KMC   | Grid      | Δx       | Δy       | N_p
(a)    | 10^19  | 10^20  | 6·10^4  | 1.69·10^−6 | 25·10^6 | 100 × 100 | 2·10^−3  | 3·10^−4  | 10^5
(b)    | 10^19  | 10^21  | 6·10^2  | 5.35·10^−8 | 25·10^6 | 400 × 100 | 5·10^−4  | 3·10^−4  | 4·10^6

A transverse electromagnetic wave is sent from the left end of the domain.
Transparent boundary conditions are prescribed at each end of the domain to avoid wave reflections. The initial plasma density is such that ∂_y(1/n_0)(x, y) = 10^2 (n_max − n_min)/(n_max n_min) for 0.01 ≤ y ≤ 0.02. Therefore, according to the theory presented in Appendix C, the incident wave should propagate into the plasma as a KMC wave (between the lines y = 0.01 and y = 0.02). Moreover, the expected speed of this KMC wave is the theoretical value V_KMC reported in Table 2. Two simulations are carried out on this problem, both with the AP-Moment scheme (note that the Gauss elliptic correction is not implemented in these simulations). The plasma and discretization parameters are specified in Table 2. The evolution of the magnetic field for the simulation (a), reported in Figure 7, shows a rapid magnetization of the plasma in the region where the density gradient is located. The electrons emitted at the cathode produce a current which prevents a uniform penetration of the magnetic field into the plasma. This current is gradually deflected with the propagation of the magnetic field, as depicted in Figures 7(c) and 7(d). We evaluate the speed of the KMC waves in the simulations by marking the position of a certain magnetic level set at different times (see Figure 8). The observed values are in agreement with the theoretical values: in the simulation (a), it is 76% of the theoretical value, while in the simulation (b), it reaches 82% of the theoretical value. These good numerical results are obtained with discretizations that do not resolve the Debye length. In particular, for the discretization (a), the mesh size is 10^4 times larger than the Debye length.

Conclusion

We have derived two Asymptotic-Preserving Particle-In-Cell methods for the Vlasov-Maxwell system in the quasi-neutral limit. The scaling assumptions made for the definition of the quasi-neutral limit, similar to those used to derive the MHD models, yield a kinetic quasi-neutral model where the electric field is computed by means of a generalized Ohm law. The Asymptotic-Preserving methods are consistent with either the quasi-neutral model or the Vlasov-Maxwell model according to how the discretization parameters resolve the plasma parameters, which makes them able to simulate complex plasma problems with a reasonable computational cost. No rigorous numerical analysis is provided in this article and this should be the subject of future work. However, the numerous numerical investigations demonstrate conclusively the efficiency of the methods to account for phenomena evolving at the plasma period and Debye length scales, as well as quasi-neutral phenomena usually well described by the MHD theory. In particular, they are able to cope with vacuum-dense plasma interfaces and the formation of non-neutral sheaths. Although other semi-implicit Particle-In-Cell methods (the Direct Implicit and Implicit Moment methods) present the same kind of Asymptotic-Preserving properties, the methodology developed in this article is new and provides a rigorous framework for the quasi-neutral limit problem. Furthermore, it opens the way for addressing more singular asymptotics and deriving more efficient numerical methods for some kinds of problems.

A Yee finite differences and θ-scheme for the Maxwell equations

We consider the discretization of the homogeneous d-dimensional Maxwell equations (d = 1, 2 or 3) with Yee finite differences for the space approximation and a θ-scheme for the time integration.
Using the discrete operators introduced in Section 3.1, the discrete equations follow. The scheme is second-order accurate in space. It is first-order accurate in time for θ ≠ 1/2 and second-order accurate for θ = 1/2. An energy balance holds for the discrete energy E_h^m = (ε_0/2)‖E_h^m‖² + (1/(2μ_0))‖B_h^m‖². Therefore, the θ-scheme is unconditionally stable for θ ∈ [1/2, 1]. It is dissipative for θ ∈ (1/2, 1] (the larger θ, the more dissipative it is) and energy-conserving for θ = 1/2. Unlike the leap-frog scheme, the θ-scheme is dispersive even in a one-dimensional setting and with a Courant number equal to 1.

B Comparison of some particle pushers for the AP schemes

We examine the properties of some particle pushers within the AP schemes. The particle pushers we consider are variants of the Boris scheme, parametrized by (a, b, c) ∈ {0, 1}^3 (each flag indicating whether the position, the electric field and the velocity, respectively, are treated implicitly). Numerical simulations show that the electric field and the velocity must be made implicit (b = 1, c = 1) to overcome the stability condition (72). They also show that, whatever the choice for (a, b, c), the scheme remains subject to the Courant condition (74). An explicit discretization of the position (a = 0) is preferable to an implicit discretization (a = 1), since it yields a scheme that is easier to solve and more accurate. In particular, in the case of a constant electric field and a zero magnetic field, the choice a = 0 yields a symplectic scheme, unlike the choice a = 1. Symplectic time-integration schemes ensure excellent conservation properties and an accurate behavior in long-time simulations [29].

C KMC waves

The existence of KMC waves can be derived from the quasi-neutral equations (with motionless ions) presented in Section 2.3. Neglecting the inertia term ∂_t J and the pressure term ∇ · S in the generalized Ohm law (25) and rewriting it with dimensional variables, we obtain a simplified Ohm relation. In the two-dimensional setting (116)-(117), if the density does not vary along the x-axis but does vary along the y-axis, this relation simplifies into a Burgers-like nonlinear hyperbolic equation for the magnetic field. This equation admits shock wave solutions propagating at a finite speed: they are the so-called KMC waves, named after Kingsep, Mokhov and Chukbar.
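For reference, the classical explicit Boris push, from which the pusher variants compared in B are derived, can be sketched as follows. This is the standard half-kick / rotation / half-kick form with an explicit position update (a = 0); it is a generic illustration, not the implicit (a, b, c) variants themselves.

```python
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    """One explicit Boris step: half electric kick, magnetic rotation, half kick,
    then position update. x, v, E, B are arrays of shape (N, 3)."""
    qmdt2 = q * dt / (2.0 * m)
    v_minus = v + qmdt2 * E                               # first half acceleration
    t = qmdt2 * B                                         # rotation vector
    s = 2.0 * t / (1.0 + np.sum(t * t, axis=1, keepdims=True))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)               # rotation around B
    v_new = v_plus + qmdt2 * E                            # second half acceleration
    x_new = x + dt * v_new                                # explicit position update (a = 0)
    return x_new, v_new
```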
2015-09-11T15:09:17.000Z
2015-09-11T00:00:00.000
{ "year": 2015, "sha1": "541f5a7224ac6cd3586e419f8b4087d35f65aeff", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "36a21c69f86cd09138ed23a45b8302e732516817", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
265638064
pes2o/s2orc
v3-fos-license
ILLOCUTIONARY ACT ON DONALD TRUMP'S SPEECH "CORONAVIRUS TASK FORCE BRIEFING" IN 2020

This research aimed at identifying the types of illocutionary act and the functions of the illocutionary acts in President Donald Trump's speech about the Coronavirus Task Force Briefing in 2020. This research was designed using a qualitative research method. The data were analyzed using the theory of Miles et al. (2014:31-33). The results of this research showed that there were five types of illocutionary act, namely: verdictives, exercitives, commissives, behabitives, and expositives. Those illocutionary acts were found to function to evaluate, suggest, advise, promise, appreciate, and explain. Based on the results, it can be concluded that expositives are the dominant type of illocutionary act used in Donald Trump's speech. It is suggested that illocutionary acts should be studied and understood more deeply.

A. Introduction

Pragmatics is the study of how language is used in communication. Therefore, in pragmatics, we study the speaker's meaning, that is, how meaning is communicated based on its context. Pragmatics is concerned with the study of meanings as communicated by a speaker. Sapir (1921:3) explained that language is a purely human and non-instinctive method of communicating ideas, emotions, and desires by means of a system of voluntarily produced symbols. People also use language to express their feelings, ideas, and thoughts by conveying them to others and to deliver information in English. In this study, the researcher collected the data from documentation, namely from Donald Trump's speech. In collecting the data, the researcher did the following steps: 1. Downloaded the video of Donald Trump's speech (https://youtu.be/5ZQZW7INTT8). 2. Printed out the transcription of Donald Trump's speech. 3. Read and comprehended the speech which would be analyzed. 4. Put a sign on or underlined each sentence that includes the kinds of illocutionary act. 5. Arranged and made a list of the data which had been classified as illocutionary acts. The researcher analyzed the data by using the theory of Miles et al. According to Miles et al. (2014:31-33), there are three activities in analyzing qualitative data: data condensation, data display, and drawing and verifying conclusions. Data condensation is part of analysis. Data condensation refers to the process of selecting the data that appear in the full corpus (body) of written-up field notes, interview transcripts, documents and other empirical material. By condensing, we make the data stronger. In this stage, the researcher selected the data needed, which contain the illocutionary acts in Donald Trump's speech "Corona Virus Task Force Briefing" in 2020. Then, the researcher classified the data into each category. The purpose of this activity is to make it easier for the researcher to classify the data. After the researcher got the data needed, the researcher displayed the data in a table form to make it easier to draw conclusions. Drawing and verifying conclusions: the third stream of analysis activity was conclusion drawing and verification.

Research Finding

Briefly, the results of the illocutionary acts found in Donald Trump's speech are shown in Table 1 below. f. Explaining. Explaining is an expression used to explain the details about something to someone to make them understand better. It describes verbally a situation, facts and data according to the applicable laws at the time.
Discussion

From the data sample above, it means that the speaker provided some goodness about their business for a lot of people. The speaker tries to make sure that a lot of people trust their business. The third type is commissives. Commissives are words, phrases or sentences that give a promise or take responsibility for something, as Austin in Alston similarly defines them.

D. Conclusion and Suggestion

Pragmatics studies meaning as communicated by a speaker (or writer) and interpreted by a listener (or reader). It means that it has more to do with the analysis of what people mean by their utterances than with what the words or phrases in those utterances might mean by themselves. Language is a structured system of communication used by humans, based on speech and gesture (spoken language), sign, or often writing. As human beings, people always need to relate to other human beings. It takes significant elements because they can provide what they want to say and they can show how it is expressed. Language plays a necessary part in the lives of all of us and is our most characteristic human possession. Generally, a display is an organized, compressed assembly of information that allows conclusion drawing and action. Data display helps us to understand what is happening and to do something, either analyze further or take action based on that understanding. Designing displays (deciding on the rows and columns of a matrix for qualitative data and deciding which data, in which form, should be entered in the cells) is an analytic activity. At this level, the researcher gives descriptions or describes the result of analyzing the data. From the start of data collection, the qualitative analyst interprets what things mean by noting patterns, explanations, causal flows, and propositions. Conclusions are verified as the analyst proceeds. Figure 1. Components of data analysis: interactive model. Based on the theory of Austin in Alston (1998:85), there are five (5) types of illocutionary acts: verdictives, exercitives, commissives, behabitives, and expositives. The researcher has analyzed Donald Trump's speech and discovered five types of illocutionary acts as the theory stated. There were 39 utterances in that speech analyzed by the researcher. The types of illocutionary acts that were discovered were then classified in accordance with the types. The researcher also discovered the realization of the use of the illocutionary acts in that speech, which were then classified into their illocutionary types. As the result of this research, the types of illocutionary act as Austin in Alston stated were found, together with their realizations. Every utterance was analyzed and classified as the theory stated. From the data analysis of Donald Trump's speech, the researcher found that there are some types of illocutionary act which are used in the speech. Furthermore, this research only focused on the theory of Austin in Alston (1998:85). a.
Verdictives. Verdictives are utterances that focus on the speaker's views about the appearance of the object directly and in reality. These are examples of verdictives in Donald Trump's utterances. Data sample 1: "I think it's going to be a very acceptable package. It's a very big package and a very acceptable package. It'll be good for our country, good for the airlines, good for a lot of people." This utterance is included as a verdictive because it provides an evaluation about the airlines. Data sample 2: "We built a great, great energy business in the United States, so we have tens a thousands of jobs." This utterance is included as a verdictive because the speaker is talking about the quality of the energy business in the United States. Data sample 3: "That's a tremendous statement and we continue to pray for him and his fast recovery." This utterance is included as a verdictive because it provides an evaluation about the intensive care of Boris Johnson. b. Exercitives. Exercitives are imperative sentences, advising, and giving advice to others. These are examples of exercitives in Donald Trump's utterances. Data sample 1: "I think it's going to be a very acceptable package. It's a very big package and a very acceptable package. It'll be good for our country, good for the airlines, good for a We don't want that to happen." This utterance is included as an exercitive because the speaker advises people about the oil production. Data sample 3: "That's a tremendous statement and we continue to pray for him and his fast recovery" ... "'ll be probably putting out a proposal and giving them some of the details, some of the very powerful details over the weekend. It's moving along quickly." This utterance is included as a commissive because it provides a promise that would be done in a short time. Data sample 2: "And we are going to be in a position to do a lot to help them so that they keep their employees and they save their businesses, and that'll be taking place I think you can say over the weekend." "And hopefully we're going to be opening up. We can call it opening very, very, very, very soon I hope." This utterance is included as a commissive because the speaker promises to do something in a short time. "The oil industry does better than it's doing right now." "And we want to thank all of the heroes on the front lines as they fight to save American lives." This utterance is included as a behabitive because it provides an appreciation of the heroes on the front lines as they fight to save American lives. "As the New York metropolitan area continues its battle against the outbreak, the full power of the federal government is there to support them. As you know the Javits Center has now been fully converted into a 3,000 bed hospital, one of the largest anywhere in the country, and by the incredible professionals, I have to say the Corps of Engineers, what they can do is just incredible" ... "us that the United States is blessed with the most advanced healthcare and the most skilled healthcare workers anywhere in the planet." This utterance is included as an expositive because it gives detailed information about the American medical system by comparing it with the United States. "We had the top doctors in the country, some international doctors, mental health, big factor, not only as the virus inflicted immense physical suffering on many people, but also mental and emotional suffering as well." This utterance is included as an expositive because it gives detailed information about how America is ready and takes the virus seriously. The detailed information about expositives related to the expression of illocutionary acts is available in
the appendix. Based on the data analysis, the five types of illocutionary act were found to have the following functions. a. Evaluating. Evaluating is giving an opinion to evaluate something, such as when someone is doing something that needs to be fixed. It critically examines a program, activity, policy, or the like. This involves gathering information about program activities and outcomes. Its purpose is to make judgements about a program, improve its effectiveness, and weigh decisions. b. Suggesting. Suggesting is giving a suggestion to someone to do something in the form of a suggestion, recommendation or solution to something, either in the form of a problem or a situation that requires opinions or input in doing something. c. Advising. Advising is asking someone to do or not to do something. Advising leads someone to good and right things that can make someone better than before. Advising consists of good teaching or lessons such as hints, warnings, and reprimands. d. Promising. Promising is giving a promise to do or give something to someone. Promising is also an ability to do or leave something in an effort to gain trust. Promises can be spoken or written as a contract between two parties. It can be the ability to comply with obligations, or not to carry out what is prescribed by an authorized superior. e. Appreciating. Appreciating is an expression to congratulate someone's achievement and support them. Appreciating can motivate others to do something as best they can, please others for what they have done, let us enjoy being able to support others, build trust in relationships with colleagues, strengthen relationships with others, show appreciation or respect, increase effectiveness and efficiency at work, make us focus on the important things and eliminate the insignificant ones, produce something innovative, serve as moral support, and so on.
Austin in Alston (1998:85) stated that commissives are typified by promising or giving an undertaking; they commit one to do a certain action, but also include declarations, intentions, and so on. The sentence "And we are going to be in a position to do a lot to help them so that they keep their employees and they save their business, and that taking place I think you can say over the weekend" in the sample that has been explained means that the speaker promises something good to a lot of people, to help them save their businesses so that a lot of people can keep their employees. The fourth type is behabitives. Behabitives are responses to someone that lead to forgiving, greeting, praising, cursing, thanking, and so on. Similarly, Austin in Alston (1998:85) stated that behabitives are the act of language in doing something concerning sympathy, attitude, forgiveness, or congratulations, which always arise in social communication. The sentence "The oil industry does better than it's doing right now" in the sample that has been explained means that the speaker provides the value of the oil industry that they built. This kind of sentence refers to appreciating the quality of the product. The last type is expositives. Expositives are explanatory sentences about something to someone, related to giving explanations and details. Similarly, Austin in Alston (1998:85) stated that expositives are used in acts of exposition involving the expounding of views, the conducting of arguments, and the clarifying of usages and references. The sentence "As the New York metropolitan area continues its battle against the outbreak, the full power of the federal government is there to support them. As you know the Javits Center has now been fully converted into a 3,000 bed hospital ... I have to say the Corps of Engineers, what they can do is just incredible" in the sample that has been explained means that the speaker explains the details of the government in New York. This sentence refers to expositives because it consists of an explanation about the government. Table 1. Types of Illocutionary Acts. Based on the table above, the researcher has found 5 (five) types of illocutionary act, namely verdictives, exercitives, commissives, behabitives, and expositives, based on the result of the research.
2023-12-05T17:04:24.124Z
2023-11-20T00:00:00.000
{ "year": 2023, "sha1": "12792918fa9bfdc73a4fd66ac009c955a77001fa", "oa_license": "CCBYSA", "oa_url": "https://jurnal.uniraya.ac.id/index.php/Relation/article/download/1207/970", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "0df0ef89243d1b5e9ca57a6eb9e09e60ca019ffb", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
247718253
pes2o/s2orc
v3-fos-license
A Novel Memetic Algorithm Based on Multiparent Evolution and Adaptive Local Search for Large-Scale Global Optimization

In many fields, including management, computer, and communication, Large-Scale Global Optimization (LSGO) plays a critical role. It has been applied to various applications and domains. At the same time, it is one of the most challenging optimization problems. This paper proposes a novel memetic algorithm (called MPCE & SSALS) based on multiparent evolution and adaptive local search to address LSGO problems. In MPCE & SSALS, a multiparent crossover operation is used for global exploration, while a step-size adaptive local search is utilized for local exploitation. A new offspring is generated by recombining four parents. In the early stage of the algorithm execution, global search and local search are performed alternately, and the population size gradually decreases to 1. In the later stage, only local searches are performed for the last individual. Experiments were conducted on the 15 benchmark functions of the CEC′2013 benchmark suite for LSGO. The results were compared with four state-of-the-art algorithms, demonstrating that the proposed MPCE & SSALS algorithm is more effective.

Introduction

Optimization problems widely exist in many fields such as engineering design, economic management, production scheduling [1][2][3][4], wireless communication, and computer science. Some of these problems have many decision variables, which creates Large-Scale Global Optimization (LSGO) problems. Without loss of generality, a large-scale global optimization problem can be formulated as the minimization of an objective function f(x) over a D-dimensional decision vector x, where D ≥ 1000 is the dimension size. Large-scale global optimization is one of the most challenging optimization problems, since the search space grows exponentially with increasing problem dimensionality. Such a massive increase in problem dimensions usually changes the search properties. Consequently, a small-scale unimodal function may change to a multimodal function when the number of dimensions increases [5]. Therefore, researchers have proposed many improved algorithms based on the existing classic algorithms. For example, in [6,7], the Particle Swarm Optimization (PSO) algorithm was improved using subswarms to maintain diversity. Also, in [8], an improved PSO algorithm containing two types of learning strategies was provided. Multiple Offspring Sampling (MOS) [9] is a hybrid algorithm that combines a Genetic Algorithm (GA) and two local searches. MLSHADE-SPA [5] is also a hybrid algorithm that is based on three Differential Evolution (DE) strategies and a modified Multiple Trajectory Search (MTS) [10] algorithm. IMLSHADE-SPA [11] is an improved MLSHADE-SPA with a novel local search method. SHADE-ILS [12] is an enhanced version of the SHADE algorithm; it combines two different local search methods and uses a restart mechanism. The algorithms in [13][14][15][16] are modified algorithms based on Cooperative Coevolution (CC) and Differential Evolution. CBCC-RDG3 [17] is also a modified version of the CC algorithm that modifies the recursive differential grouping method to reduce the overlapping problems. TPHA [18] and DECC-RAG1.1 [19] are two-phase hybrid algorithms that use the CC framework. To encourage research on LSGO, the IEEE Congress on Evolutionary Computation (IEEE CEC) organizes LSGO algorithm competitions yearly or biennially. Since 2013, the competition has been performed on the CEC′2013 LSGO benchmark suite [20]. MOS was the winner in the years 2013-2018.
Moreover, MOS and 11 other excellent algorithms that did not join the competitions were compared in [21]. Again, the MOS algorithm outperformed all the other algorithms. However, in the 2018 competition, SHADE-ILS and MLSHADE-SPA were announced to perform better than MOS. Although CC-RDG3 [17] was announced as the winner of the 2019 competition, it was not up to the level of the previous winner, SHADE-ILS. It seems that LSGO is still quite a hard nut to crack [21]. The algorithms for LSGO problems can be roughly classified into three categories: standard evolutionary algorithms, CC-based evolutionary algorithms, and memetic algorithms [22]. The memetic algorithm (MA) [23] is a combination of global search and local search. Due to the exploration ability of global search and the exploitation ability of local search, MAs perform well on LSGO problems. As mentioned above, the CEC competition award algorithms (e.g., SHADE-ILS and MLSHADE-SPA) and the improved algorithm IMLSHADE-SPA based on MLSHADE-SPA are all MAs. In MLSHADE-SPA, IMLSHADE-SPA, and SHADE-ILS, global search and local search have the same status, and the two search methods generate the same number of candidate solutions. However, since the dimension exceeds 1000 for LSGO problems, it is necessary to discuss the case where the numbers of local searches and global searches are different. Furthermore, at each iteration of these algorithms, the local search is only used to improve the current best solution, and other members cannot be improved, which may miss potentially excellent individuals. Besides, the three algorithms all use Differential Evolution and adopt a variety of improvement strategies, which makes the algorithms more complicated. In this case, it is necessary to examine new algorithms and new ideas. This paper proposes a novel memetic algorithm (called MPCE & SSALS) based on multiparent evolution and local search. A multiparent crossover operator is used for global exploration. Furthermore, a step-size adaptive local search algorithm, which is improved from the MTS algorithm, is proposed for local exploitation. The proposed algorithm is inspired by the Simplified Group Search Optimizer (SGSO) [24] in how it generates parent vectors and adopts a population size reduction strategy. There are three main differences between the proposed algorithm and the above memetic algorithms: (1) The proposed algorithm performs many more local searches than global searches. (2) The proposed algorithm performs the local search for every individual. (3) The proposed algorithm uses a new and simpler global search method. In the following sections, the details of the MPCE & SSALS algorithm will be explained. The algorithm is also compared with four state-of-the-art algorithms, namely SHADE-ILS, MLSHADE-SPA, CBCC-RDG3, and IMLSHADE-SPA. The main contributions and novelty of this paper can be summarized as follows: (i) proposing a novel memetic algorithm for the LSGO problem; (ii) using multiparent crossover and SGSO to solve the LSGO problem; (iii) proposing an improved local search algorithm that can be an effective option for LSGO; (iv) demonstrating that a local search-dominated hybrid algorithm can effectively solve LSGO problems. The rest of this paper is organized as follows. In Section 2, the work most related to the proposed approach is discussed. Section 3 presents the details of the proposed algorithm.
Section 4 explains the numerical experiments of MPCE & SSALS that are carried out using the CEC′2013 benchmark suite, and the performance of MPCE & SSALS is compared with four algorithms. Finally, conclusions are drawn, and further research is discussed in Section 5.

Related Works

This section is devoted to presenting the related work needed for understanding the MPCE & SSALS algorithm. Memetic algorithms, multiparent crossover, the simplified group search optimizer, and MTS are described. Memetic Algorithms. Moscato [23] first proposed the concept of the memetic algorithm in 1989. The memetic algorithm is a combination of population-based global search and individual-based heuristic local search. It suggests an algorithm framework. In this framework, different search strategies are used to construct different memetic algorithms. For example, Genetic Algorithms, Differential Evolution, Particle Swarm Optimization, and many others can be used as the global search strategy. Hill Climbing, Simulated Annealing, Tabu Search, and others can be used as the local search strategy. Memetic algorithms have embraced many forms, employing a wide variety of combinations of population-based heuristics and individual improvement heuristics [25], such as [26][27][28][29][30][31][32][33]. Some of these algorithms, based on GA and Tabu Search, were studied in [26,27]. The memetic model of PSO and local search was introduced in [28][29][30]. A Memetic Artificial Bee Colony Algorithm is also reported in [31]. The combination of a backbone-based crossover operator and a multineighborhood simulated annealing procedure was discussed in [32]. In [33], adaptive memetic computing with a GA, DE, and Estimation of Distribution Algorithm synergy was elaborated. It can automatically activate one of the three algorithms to generate offspring. SHADE with an Iterative Local Search (SHADE-ILS) is a hybrid algorithm that combines a modern DE algorithm, Success-History-based Adaptive DE (SHADE [40]), with two local search methods. In each iteration, SHADE is applied to evolve the population of candidate solutions. One of the two local search methods is chosen to improve the current best solution found by SHADE. The selection of the local search method depends on the improvement obtained by each of them in the previous phase. A restart mechanism has been incorporated into the algorithm to explore new search space regions when the search stagnates. MLSHADE-SPA is a memetic framework that includes three DE algorithms for global exploration and a modified version of MTS (MMTS) for local exploitation. The three DE algorithms are success history-based differential evolution with linear population size reduction and semiparameter adaptation (LSHADE-SPA), enhanced adaptive differential evolution (EADE) [41], and differential evolution with novel mutation and adaptive crossover strategies (ANDE) [42]. The framework also uses the divide-and-conquer method, which randomly divides the dimensions into groups and solves each group separately. An improved MLSHADE-SPA (IMLSHADE-SPA) framework was proposed in [11], which replaced the local search method (MMTS) with a new local search method and achieved higher performance. Multiple Offspring Sampling (MOS) [43] is a framework used to combine different metaheuristic algorithms. The participation ratio for each algorithm is adjusted dynamically according to a given strategy. Due to the different algorithms and strategies to be selected, different MOS versions were proposed in [9,44,45].
Our paper focuses on the MOS of [9], the winner of the CEC′2013 competition. In MOS [9], three algorithms are combined: a GA, the Solis and Wets algorithm [46], and the MTS-LS1-Reduced algorithm. These algorithms are executed in sequence, one after the other. The number of candidate solutions to be generated by each algorithm is adjusted dynamically according to the average fitness increment of the newly created individuals [9]. Multiparent Crossover. Evolutionary algorithms (EAs) have been successfully applied to solve many optimization problems. EAs simulate the evolution process of nature. There are three basic operators in EAs: crossover (or recombination), mutation, and selection. The classic crossover operator recombines two parents and generates new offspring. The recombination mechanism determines what parts of each parent are inherited by the child and how this is done. Various crossover operators have been proposed for different problems, fitting one of the multiple representations of a chromosome [47]. These crossover operators can be divided into two categories: exchange-based or calculation-based. The first type of operator is generally proposed for binary coding, but it is also suitable for real coding. Examples of such crossover operators are One-point Crossover, Two-point Crossover, Uniform Crossover, and so on. For instance, Uniform Crossover randomly determines whether the child's ith gene is selected from father 1 or father 2. With these crossover mechanisms, each gene in the offspring is copied from one of the parents. The new offspring's chromosome characteristics are directly inherited from their parents without any changes. The second category of crossover operators is generally used for real coding, such as Average Crossover, Parent Centric Crossover, Heuristic Crossover, Simulated Binary Crossover, and so on. In these operators, the value of each offspring gene is calculated numerically from the parents' genes. For example, Average Crossover generates the ith gene of the child by averaging the alleles of both parents. The first category is more in line with the original concept of gene recombination. In some algorithms, such as the DE algorithm, the second category of crossover operator is used as a mutation rather than a crossover operator. It can produce new genes that are different from their parents. This paper tends to define the second type of operators as a hybrid of crossover and mutation operations. Multiparent crossover extends the two-parent crossover operators to recombine more than two parents for generating new offspring. Many multiparent crossovers have been successfully applied to solve various optimization problems and found to be better than traditional crossovers, such as scanning crossover and diagonal crossover [48,49], multiparent simplex crossover [50], multiparent sequential constructive crossover [47], and a novel multiparent order crossover [51]. Simplified Group Search Optimizer. The Group Search Optimizer (GSO) is a swarm intelligence algorithm with superior performance for multimodal problems [52]. GSO is inspired by animal searching behaviors and group living theory [52]. It includes three types of members: producer, scrounger, and ranger. During each iteration, the individual with the best fitness value in the group, as the producer, will stop and scan the environment to find resources. The scroungers take a random walk towards the producer to join the resources. A small number of rangers make a random move to avoid entrapment in local minima.
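As a concrete illustration of the two crossover categories described above (a sketch of our own, not code from any of the cited papers), uniform crossover copies each gene of the child from one of the two parents, whereas average crossover computes it from both:

```python
import numpy as np

rng = np.random.default_rng(1)

def uniform_crossover(p1, p2):
    """Exchange-based: each gene of the child is copied from parent 1 or parent 2."""
    mask = rng.random(p1.shape) < 0.5
    return np.where(mask, p1, p2)

def average_crossover(p1, p2):
    """Calculation-based: each gene of the child is the mean of the parents' genes."""
    return 0.5 * (p1 + p2)

p1, p2 = rng.random(5), rng.random(5)
child_u = uniform_crossover(p1, p2)
child_a = average_crossover(p1, p2)
```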
The Simplified Group Search Optimizer (SGSO) [24] is an improved GSO version. It is more efficient and simpler than the original version. It also shows excellent search performance for large-scale optimization problems. In SGSO, the producer abandons environmental scanning. The scrounger adopts an improved join strategy, which moves towards the best member and other excellent members. The rangers use a simple search method and the percentage of rangers decreases. The SGSO is described as follows: (1) In a D-dimensional search space, the ith member at the kth iteration has a current position x_{i,k}. (2) Group members are sorted by fitness value in ascending order. The best member x_{best,k}, as the producer, does not move in this iteration. (3) Randomly select 87% of the group members, except the producer, to perform scrounging. The scroungers move to a new position according to the scrounging rule (2), where r_1 and r_2 are uniform random D-dimensional vectors in the range (0, 1) and x_{m-best,k} is a member randomly chosen from the top 4 in the group (except x_{best,k}). (4) The remaining members are rangers, who take a random step in which r_3 is a standard normal distribution D-dimensional vector, step is a constant representing the basic step size, and f is a D-dimensional Boolean random vector indicating which dimensions will change. The probability of change is set to 1.2/D as given in [24]; in the definition of f, j ∈ {1,2, . . ., D}, rand(1) is a function that produces a uniform random number in the range (0, 1), and j_rand is a randomly chosen index in {1,2, . . ., D}, which ensures that at least one component of f is set to 1. Multiple Trajectory Search. Multiple trajectory search (MTS) was presented for the large-scale global optimization problem in [10]. It provides three local search methods, of which MTS-LS1 is the first and most important one. MTS-LS1 searches from the first to the last dimension successively. The search range (SR) value is subtracted from each dimension to see whether the objective function value is improved or not. If it is improved, MTS-LS1 proceeds to search the next dimension. If it is not improved, the solution is restored, and 0.5 * SR is added to this dimension to see, again, whether its value is improved or not. If it is not improved, the solution is restored. Afterward, MTS-LS1 continues to search the next dimension. SR is initialized to 0.5 * (Upper_Bound − Lower_Bound). If no dimension is improved, SR is cut in half. When SR reaches 1E − 15, its value is reset to 0.4 * (Upper_Bound − Lower_Bound). MTS-LS1 and its improved versions are used in many algorithms [5,9,12], including the algorithm proposed in this paper.

Proposed Algorithm

In this section, the Multiparent Crossover Evolution and the Step-Size Adaptive Local Search algorithms are described. Besides, the proposed hybrid algorithm that combines both of them is introduced. Multiparent Crossover Evolution (MPCE). In MPCE, the population is composed of D-dimensional vectors. The number of vectors is called the population size, denoted as NP. The initial population is generated with uniformly distributed random numbers. Each member of the population can produce the next generation through mutation and a multiparent crossover operation. The ith member of the Gth generation is denoted as x_{i,G}. The main characteristics of MPCE are as follows: (1) The mutation formula is modified from (2) of the SGSO algorithm.
The mutant vector is generated according to a rule in which x_{best,G} is the best vector in the Gth generation, p-best is the index of a vector randomly chosen from the ranked top 10% of vectors in the Gth generation (except x_{best,G}), and r_1 and r_2 are uniform random vectors in the range (0, 1). (2) MPCE uses a four-parent crossover operation to produce the next generation. The four parents are x_i, v_i and two excellent individuals randomly selected from the population. The crossover operation is computed gene by gene: for j ∈ {1,2, . . ., D}, a(i) and b(i) ∈ {1,2, . . ., NP} are the indices of vectors randomly chosen from the ranked top 50% of vectors in the Gth generation, CP1, CP2, CP3 ∈ (0,1) are the crossover probability constants of v_i, x_a(i), x_b(i), respectively, and r_rand(ji) is a uniform random number in the range (0, 1). The parameters CP1, CP2, and CP3 are determined through experiments and set to 0.3, 0.29, and 0.29, respectively. (3) The population size decreases during the optimization process. As the iterations go on, the vectors in the population tend to be gradually assimilated, so a larger NP is less helpful for improving the search performance. Many algorithms apply linear population size reduction strategies, such as the LSHADE algorithm [53]. Besides, for the MPCE & SSALS algorithm, reducing the population size is conducive to a deeper local search. In the beginning, the global search and the local search are performed alternately. Every several iterations, NP is reduced by 1 and the worst individual in the population is dismissed. When NP is reduced to 4, the MPCE global search ends and only the local search is then executed to improve the current best solution. 3.2. Step-Size Adaptive Local Search (SSALS). The basic idea of SSALS derives from MTS-LS1, the first local search strategy in the Multiple Trajectory Search (MTS) algorithm. These algorithms are designed for single individuals and can also be used for multiple individuals when combined with other algorithms. Each dimension of the SSALS algorithm has its own basic step size, stored in the vector s. In each iteration, SSALS randomly selects one or more dimensions, multiplies the step size of each selected dimension by a random number, and adds the product to that dimension. If the new solution is better than the original one, the step sizes of the selected dimensions are multiplied by 2. Otherwise, the solution is restored, and each step size is multiplied by −0.5. The step size is initialized using 0.5 * (Upper_Bound − Lower_Bound). The variable minbs represents the minimum step size, an adaptive value that is recalculated in each iteration. If a step size's absolute value reaches minbs, it is restored to the initial value. In the case of multiple individuals, the key steps of the SSALS algorithm are described as follows. (1) Choose the dimensions to be searched according to (7), where f_ji ∈ {0,1} indicates whether the jth dimension of the ith vector is to be changed, rand(1) is a function that produces a uniform random number in the range (0, 1), iteration is the number of iterations, and j_rand(i) ∈ {1,2, . . ., D} is a randomly chosen index to ensure that x_i has at least one dimension participating in the search. According to (7), the number of dimensions to be searched rapidly decreases in the iterative process and finally stays at 1.5 per vector on average. This value gives the algorithm a bit of global search ability in the early stage of the optimization process.
(2) Generate the new solution using the step formula, where s_{i,G} is a vector representing the basic step sizes of the ith individual in the Gth generation. (3) Calculate the variable minbs. SSALS defines a D×5 matrix H, which is used to store each dimension's last five effective step sizes. An effective step size is recorded for each accepted move; if more than one vector is improved in an iteration, and the same dimension of some of these vectors is changed, the average effective step size of this dimension is saved into H. The formula for calculating minbs is minbs_G = min(0.1, min(mean(H, 2))). (4) Update the basic step size accordingly. The initialization of the algorithm listing reads: init_bs ← 0.5 * (Upper_Bound − Lower_Bound); s ← init_bs; initialize population X (x1, x2, . . ., xNP) randomly; calculate the fitness f_value ← cost_function(X); sort individuals in X based on their fitness; H ← ones(D, 5).

Experimentation

A set of 15 benchmark functions proposed in the CEC 2013 special session on large-scale global optimization was used to study the MPCE & SSALS performance. These functions are divided into four categories according to the degree of separability: f1-f3 are fully separable functions, f4-f11 are partially separable functions, f12-f14 are overlapping functions, and f15 is classified as a fully nonseparable function. A detailed description of each of these benchmark functions is given in [20]. MPCE & SSALS was run 25 times for each benchmark function. All tests were completed using MATLAB R2019a. The dimension D of all functions is 1000, except that f13 and f14 have dimension 905. The stopping criterion was a fixed number of fitness evaluations (FEs). Max_NFE was set to 3.0E + 6, and the program terminates when Max_NFE is reached. The initial value of NP was set to 100, CP1 = 0.3, CP2 = 0.29, CP3 = 0.29, I_NPD = 100, and I_GS = 40. The statistical results, including the best, the worst, the median, the mean, and the standard deviation computed over 25 runs, are shown in Table 1. Influence of the Different Components. In this section, experiments were conducted to observe the influence of the different components. For each test, Table 2 lists the average results of 25 independent runs. The Wilcoxon signed-rank test with a significance level of 5% was used for statistical analysis. The symbols ">", "<", and "=" mean "significantly better," "significantly worse," and "no significant difference," respectively. The last row of Table 2 shows the counts of wins/ties/losses (w/t/l) in the pairwise comparison. To observe the individual effect of both Multiparent Crossover Evolution and Step-Size Adaptive Local Search, experiments were executed on the two algorithms separately. As shown in Table 2, the optimization performance of SSALS is significantly better than that of MPCE on most of the functions, indicating that local searches contribute more to the hybrid algorithm. The influence of the number of parents was also studied. In the proposed algorithm, CP2 and CP3 are the crossover probability constants of parent 2 and parent 3, respectively. CP2 = 0 indicates that parent 2 does not participate in the evolutionary operation, and likewise for CP3. In one test, CP2 was set to 0, indicating that a three-parent crossover operation was used. In another test, CP2 and CP3 were both set to 0, which means that a two-parent crossover operation was applied. The other settings are the same as in Table 1. According to the results, increasing the number of parents affects f5, f6, f9, and f10, but makes no significant difference on the other functions.
In general, increasing the number of parents is beneficial and does no harm. In addition, to verify the improvement achieved by SSALS, the same experiments were performed under identical conditions with MTS-LS1 replaced by SSALS. In this comparison SSALS is significantly better than MTS-LS1 on all 15 functions, demonstrating that the optimization performance of SSALS is significantly higher than that of MTS-LS1.

Parameter Analysis. The major parameters of MPCE & SSALS are I_NPD and I_GS. Every I_NPD iterations, the population size NP is reduced by 1. When NP is reduced to 4, the population-based search ends, and only the individual-based search is performed to improve the best solution. Therefore, a smaller I_NPD means fewer population searches and more single-individual searches. I_GS indicates the number of iterations between two global searches, so a smaller I_GS means more global searches and fewer local searches. Different I_NPD and I_GS values are studied in this section. MPCE & SSALS was run 25 times for each combination, and the Wilcoxon signed-rank test was used for statistical analysis. To find appropriate I_NPD and I_GS values, three CEC'2013 benchmark functions, f3, f7, and f15, are studied in tests with I_NPD varying from 50 to 500 and I_GS varying from 20 to 200, respectively. The other parameters use the same settings as in Table 1. Algorithm performance obtained with different I_NPD and I_GS values on f3, f7, and f15 is shown in Figures 1(a), 1(b), and 1(c), and Figures 2(a), 2(b), and 2(c), respectively. The horizontal axis represents the respective parameter settings while the vertical axis shows the obtained logarithm of the mean FEs. The outcomes are summarised in Figures 1 and 2 and in Table 3, where results shown in bold indicate the finally selected parameters. With I_GS fixed at 40, I_NPD = 100 significantly outperforms I_NPD = 50 in 5 functions and is outperformed by I_NPD = 50 in 1 function, while the other 9 functions show no significant difference. I_NPD = 100 significantly outperforms I_NPD = 200 in 7 functions and is outperformed by I_NPD = 200 in 2 functions, which indicates that the best overall optimization performance is obtained with I_NPD = 100. With I_NPD fixed at 100, I_GS = 40 significantly outperforms I_GS = 20 and I_GS = 80 in 2 and 3 functions, respectively. There is no significant difference in the other functions, which means that the best optimization performance is obtained with I_GS = 40. As shown in Table 3, MPCE-SSALS with I_NPD = 100 and I_GS = 40 significantly outperforms the other parameter settings. It is also observed that the best parameter values may differ across test functions; for example, the best parameters for f3 and f6 are I_NPD = 200 and I_GS = 40. This suggests that setting the parameter values to 100 and 40 is a good general choice, but that for a specific problem better parameter values can be determined by experiment. In addition, when I_NPD = 100 and I_GS = 40, the FEs spent on global search and local search are 12524 and 2987476, respectively, which confirms that the proposed algorithm is mainly based on local search.

To compare the algorithms' performance on the CEC'2013 function suite, the average ranking of each algorithm was calculated. For a fair comparison, the other four algorithms' experimental data and supplementary material are taken directly from their original papers. The Wilcoxon signed-rank test (significance level = 0.05) is utilized for pairwise comparison of these five algorithms.
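The pairwise statistical comparison described above can be reproduced with a standard Wilcoxon signed-rank test. The sketch below is only an illustration of the procedure: it assumes two arrays of 25 final fitness values per benchmark function (the variable names and the synthetic numbers are not taken from the paper), and treats lower values as better, as for minimisation benchmarks:

```python
from scipy.stats import wilcoxon
import numpy as np

def compare(res_a, res_b, alpha=0.05):
    """Classify algorithm A vs B on one function from 25 paired runs.

    Returns '>' (A significantly better, i.e. lower error), '<', or '='.
    """
    stat, p = wilcoxon(res_a, res_b)
    if p >= alpha:
        return "="
    return ">" if np.mean(res_a) < np.mean(res_b) else "<"

# Example with synthetic numbers: tally win/tie/loss over 15 benchmark functions.
rng = np.random.default_rng(0)
results_a = [rng.lognormal(0.0, 0.3, 25) for _ in range(15)]
results_b = [rng.lognormal(0.2, 0.3, 25) for _ in range(15)]
marks = [compare(a, b) for a, b in zip(results_a, results_b)]
w, t, l = marks.count(">"), marks.count("="), marks.count("<")
print(f"w/t/l = {w}/{t}/{l}")
```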
The place of each algorithm on each function is determined according to the Wilcoxon rank tests. The comparison results and rankings are listed in Table 4, with the best result for each benchmark function distinguished by bold font. As shown in Table 4, MPCE & SSALS achieves the best overall performance among these algorithms; SHADE-ILS and IMLSHADE-SPA rank second and third, respectively. As shown in the convergence figure, the results can be summarised as follows: the convergence rate of MPCE & SSALS is, in general, faster than that of MLSHADE-SPA and IMLSHADE-SPA, and similar to that of CBCC-RDG3 and SHADE-ILS. However, MPCE & SSALS is the simplest of these algorithms.

Results Discussion. The good results of MPCE & SSALS mainly derive from the following factors: (1) The multiparent strategy used in this paper enables each offspring to inherit genes from multiple excellent individuals. It not only increases offspring diversity, but also moves the algorithm more quickly towards better solutions. (2) SSALS effectively improves MTS-LS1, which enhances the local search performance significantly. Each dimension has its own basic step size, which can be adjusted to accommodate the different effect of each dimension on the function. In addition, the minimum step size affects the search accuracy. If a large number of high-precision searches (with very small step sizes) are carried out in the early stage of the algorithm, a large amount of computation is wasted and the search easily falls into local minima. Gradually tightening the search accuracy according to the current search results avoids excessive searching in the early stage; likewise, in the late stage of the algorithm, the most promising positions can be searched with high precision. (3) More local searches are performed: the proposed algorithm performs far more local searches than global searches, which enhances its exploitation capability in the search space. (4) A population decrease strategy is used in MPCE & SSALS. At the beginning of the algorithm, a large population size is conducive to exploration. As the optimization proceeds, individual differences shrink and the advantages of a large population are reduced, so gradually decreasing the population size helps to enhance exploitation. (5) The memetic algorithm framework is used to combine the multiparent strategy, SGSO and SSALS so that they work together. The memetic framework balances the exploration ability of the global search and the exploitation ability of the local search, and has therefore been widely used in LSGO problems; the SGSO algorithm also performs well on LSGO problems.

Conclusions. In this paper, a memetic algorithm, MPCE & SSALS, based on multiparent crossover evolution and step-size adaptive local search, is proposed for the LSGO problem. The MPCE strategy is used for global exploration, and the SSALS method is applied for local exploitation. In the early stage of the algorithm, the global search and the local search are performed in alternation, and the population size is gradually reduced to 1. In the later stage, only the local search is executed to improve the final solution. Local search is performed during the whole process, and the number of local search executions is far larger than that of global search. A set of 15 benchmark functions was used to evaluate the performance of the MPCE & SSALS algorithm.
According to the experimental data, the overall performance of the MPCE & SSALS algorithm is better than that of the other four state-of-the-art algorithms. The experimental results also indicate that the performance of SSALS is significantly higher than that of MTS-LS1, and that a local-search-dominated hybrid algorithm can effectively solve the LSGO problem. On the other hand, the experimental analysis reveals that the multiparent crossover strategy only improves the optimization results on certain test functions while having no discernible impact on others. Among the four parents in the crossover operation, three individuals are selected from the previous generation of the population, so the source of parents is relatively limited and the advantages of multiparenting are not fully exploited. In the future, new parent generation methods could be added, such as using PSO to generate one of the parents. This paper demonstrated that multiparent crossover evolution combined with local search is an effective algorithmic framework for addressing the LSGO problem. A possible extension of this work is to examine new parent generation techniques or local search strategies that further improve the algorithm's performance.

Data Availability. The source code and experimental data of MPCE & SSALS can be requested from yydzhwf@xnu.edu.cn.

Conflicts of Interest. The authors declare that there are no conflicts of interest regarding the publication of this paper.
On the amplification of magnetic fields in cosmic filaments and galaxy clusters

The amplification of primordial magnetic fields via a small-scale turbulent dynamo during structure formation might be able to explain the observed magnetic fields in galaxy clusters. The magnetisation of more tenuous large-scale structures such as cosmic filaments is more uncertain, as it is challenging for numerical simulations to achieve the required dynamical range. In this work, we present magneto-hydrodynamical cosmological simulations on large uniform grids to study the amplification of primordial seed fields in the intracluster medium (ICM) and in the warm-hot intergalactic medium (WHIM). In the ICM, we confirm that turbulence caused by structure formation can produce a significant dynamo amplification, even if the amplification is smaller than what is reported in other papers. In the WHIM inside filaments, we do not observe significant dynamo amplification, even though we achieve Reynolds numbers of R_e ∼ 200 − 300. The maximal amplification for large filaments is of the order of ∼ 100 for the magnetic energy, corresponding to a typical field of a few ∼ nG starting from a primordial weak field of 10^−10 G (comoving). In order to start a small-scale dynamo, we found that a minimum of ∼ 10^2 resolution elements across the virial radius of galaxy clusters was necessary. In filaments we could not find a minimum resolution to set off a dynamo. This stems from the inefficiency of supersonic motions in the WHIM in triggering solenoidal modes and small-scale twisting of magnetic field structures. Magnetic fields this small will make it hard to detect filaments in radio observations.

INTRODUCTION Cosmic magnetism is an astrophysical puzzle. While radio observations provide evidence for magnetic field strengths of up to a few ∼ µG in galaxy clusters and galaxies (e.g. Ferrari et al. 2008; Brüggen et al. 2011; Ryu et al. 2011, and references therein), the origin of such strong fields is unclear, given that the upper limits on the primordial magnetic field at the epoch of the Cosmic Microwave Background set B < 10^−10 G (e.g. Neronov & Vovk 2010). From a theoretical point of view, the first cosmic seed fields can be generated in the very early Universe during inflation and first-order phase transitions (however, the uncertainty on the efficiency of such mechanisms is large, B ∼ 10^−34 − 10^−10 G, e.g. Widrow et al. 2011). Additional processes such as the Biermann battery, or aperiodic turbulent fluctuations in the intergalactic plasma, might also provide seed fields in the range ∼ 10^−19 − 10^−16 G (Kulsrud et al. 1997; Schlickeiser 2012). Later on, structure formation can cause further amplification via a small-scale turbulent dynamo (e.g. Subramanian et al. 2006) in two main phases: first, via exponential growth of the magnetic field in the kinematic regime, and, second, via non-linear growth and stretching of the coherence scales until saturation with the turbulent forcing (Wang & Abel 2009; Beck et al. 2012; Schober et al. 2013; Pakmor et al. 2014). Galactic activity can yield localised additional seeding (e.g. Kronberg et al. 1999; Völk & Atoyan 2000), while further amplification in cluster outskirts might be produced via the magneto-thermal instability (Parrish et al. 2008) or instabilities driven by cosmic rays accelerated by shocks (Drury & Downes 2012; Brüggen 2013).
At higher redshifts (z ∼ 2), star formation should be able to induce a small-scale dynamo by injecting turbulence from supernova explosions (e.g. Beck et al. 1996, 2013), producing large Rotation Measures (e.g. Kronberg et al. 2008; Mao et al. 2012) and possibly explaining the tight correlation between far-infrared and radio continuum emission (Schleicher & Beck 2013). Cosmological simulations can reproduce the observed field strengths within galaxies and galaxy clusters starting from weak primordial fields (e.g. Dolag et al. 1999; Brüggen et al. 2005; Bonafede et al. 2011; Ruszkowski et al. 2011), yet similar field strengths can also be achieved with outflows from active galactic nuclei (e.g. Xu et al. 2009; Dubois & Teyssier 2008), galactic winds (e.g. Donnert et al. 2009) and star formation (e.g. Beck et al. 2013). In particular, Donnert et al. (2009) concluded that magnetized galactic outflows and their subsequent evolution within the ICM can in principle explain the observed magnetisation of galaxy clusters, while measuring cosmological magnetic fields in low-density environments can reveal the origin of cosmic magnetic fields. Very little is known about the evolution and present-day distribution of magnetic fields in the periphery of galaxy clusters and in the cosmic web, particularly in filaments, which contain ∼ 50 − 60 percent of the total mass in the Universe (e.g. Cautun et al. 2014). This circumstance makes the study of ultra-high-energy cosmic rays (UHECRs) very uncertain, since large-scale magnetic fields change the arrival direction of UHECRs (e.g. Ryu et al. 2010). This also adds uncertainties to the composition of UHECRs, as the presence of magnetic fields can significantly alter the spectrum and composition of UHECRs that reach Earth (e.g. Alves Batista et al. 2014). Numerical simulations are crucial for studying the non-linear processes that lead to the amplification of the seed magnetic fields during structure formation. In simulations, a large spatial resolution is needed to produce the degree of turbulence that leads to dynamo amplification (e.g. Federrath et al. 2011a; Turk et al. 2012; Latif et al. 2013). However, in filaments, neither Lagrangian schemes (such as smoothed particle hydrodynamics) nor mesh refinement schemes based on matter density achieve the necessary resolution. On the other hand, the use of fixed grids is computationally demanding due to the need to resolve the details of the internal structure of filaments. Whether the magnetic field in filaments approaches equipartition with the kinetic energy is unclear. The amplification is expected to depend on the numerical resolution, on the exact distribution of modes (compressive or solenoidal), as well as on the range of dynamical scales (Schekochihin et al. 2004; Ryu et al. 2008; Cho et al. 2009; Jones et al. 2011). Modelling of magnetic fields in filaments is relevant for the study of radio emission from the cosmic web, which surveys in the nearby (e.g. LOFAR) and more distant future (e.g. the SKA) might be able to detect for the first time, in the case of large enough magnetic fields (Brown 2011; Araya-Melo et al. 2012).

ENZO-MHD The simulations performed in this work have been produced with a customised version of the grid code ENZO (The Enzo Collaboration et al. 2013). ENZO is a highly parallel code for cosmological magneto-hydrodynamics (MHD), which uses a particle-mesh N-body method (PM) to follow the dynamics of the DM and a variety of shock-capturing Riemann solvers to evolve the gas component.
The MHD implementation of ENZO that we use has been developed by Wang & Abel (2009) and Wang et al. (2010). It is based on the Dedner formulation of the MHD equations (Dedner et al. 2002), which uses hyperbolic divergence cleaning to preserve the ∇ · B = 0 condition. The MHD solver adopted here uses a piecewise-linear reconstruction, where fluxes at cell interfaces are calculated using the Harten-Lax-van Leer (HLL) approximate Riemann solver (Harten 1983) and time integration is performed using a total variation diminishing (TVD) second-order Runge-Kutta (RK) scheme (Shu & Osher 1988). The resulting solver is expected to be slightly more diffusive than the piecewise-parabolic approach, but allows a more efficient treatment of the electromagnetic terms. Extensive tests have been conducted to compare the performance of different MHD solvers in astrophysical codes (including the implementation of the Dedner scheme employed here) in the case of decaying supersonic turbulence. Overall, the Dedner cleaning compared well with more complex MHD schemes, at the price of being more dissipative at very small spatial scales, due to the small-scale ∇ · B waves generated by this scheme (Kritsuk et al. 2011). For further tests on the validation of the code we refer the reader to Wang & Abel (2009). This MHD solver, as well as a version of the piecewise parabolic method (PPM) hydro solver, has been ported to NVIDIA's CUDA framework, allowing ENZO to take advantage of modern graphics hardware (Wang et al. 2010; The Enzo Collaboration et al. 2013). A key step in ENZO's implementation is flux correction, which is required when each level of resolution is allowed to take its own time step. Within the GPU version of the MHD solvers, the fluxes are calculated on the GPU and only the fluxes required for flux correction are transferred back to the CPU. This procedure reduces the overhead associated with the data transfer, which can be large in a heterogeneous architecture of this sort. Due to the explicit, directionally-split stencil pattern of both the PPM and Dedner MHD solvers, they are well suited for hardware acceleration. The porting onto GPUs replaced many shared temporary arrays of the CPU version with larger temporary arrays that are not shared among loop iterations, and exposed the massive parallelism in the algorithm using CUDA. For further details on the porting onto CUDA, we refer the reader to Wang et al. (2010) and The Enzo Collaboration et al. (2013). Most of our simulations were run on the Piz Daint system deployed by the ETHZ CSCS Swiss national supercomputing centre in Lugano, a Cray XC30 supercomputer comprising more than 5000 computing nodes, each equipped with an 8-core 64-bit Intel SandyBridge CPU (Intel Xeon E5-2670) and an NVIDIA Tesla K20X GPU. When running at fixed mesh resolution, the GPUs allow a gain of a factor of ∼ 4 in performance compared to the corresponding CPUs, reducing accordingly the necessary computing time and allowing the investigation of a larger parameter space with a given amount of computational resources. In the Appendix we present a number of tests performed using the CUDA implementation of ENZO's MHD solver, where we simulated the amplification of a weak uniform field in a cubic box with a steady driving of turbulence.

Setups We assume a WMAP 7-year cosmology with Ω0 = 1.0, ΩB = 0.0455, ΩDM = 0.2265, ΩΛ = 0.728, Hubble parameter h = 0.702, and a spectral index of ns = 0.961 for the primordial spectrum of initial matter fluctuations (Komatsu et al.
2011). The amplitude of the variance of the cosmic spectrum of density at the start of each run has been varied from run to run as explained in Sec. 3.1-3.3. The magnetic field in all runs has been initialised to the reference value of B0 = 10^−10 G (comoving), which we imposed as a background uniform field at the beginning of each run. A list of runs is given in Tab. 2.2:

grid | ∆x [kpc] | m_DM [M⊙]
128^3 | 110 | 3.6 · 10^7
256^3 | 55 | 4.5 · 10^6
512^3 | 27 | 5.6 · 10^5
640^3 | 22 | 2.9 · 10^5
1024^3 | 13 | 7.0 · 10^4

3 RESULTS Magnetic field amplification in the ICM In a first set of simulations, we measured the amplification of a cosmological weak magnetic field during the formation of a galaxy cluster, as a benchmark test for our following studies of amplification within filaments with ENZO-MHD. This magnetisation of the ICM during structure formation has already been studied with a variety of codes by many authors (e.g. Dolag et al. 1999; Brüggen et al. 2005; Dubois & Teyssier 2008; Xu et al. 2009; Collins et al. 2010; Bonafede et al. 2011), who demonstrated how the amplification of magnetic fields is a natural process within the large over-density of galaxy clusters (even if the amplification factors can change from simulation to simulation). Here we want to study the growth of magnetic fields as a function of spatial resolution. Hence, we want to limit as much as possible the uncertainties related to the use of adaptive mesh refinement (e.g. Xu et al. 2009). Therefore, we only used runs with uniform spatial resolution throughout the whole cluster evolution. To this end, we adopted an artificially large normalisation of the primordial matter power spectrum, σ8 = 5.0, in creating our initial conditions, in order to enable the formation of a single cluster of mass ∼ 10^14 M⊙ even within the rather small volume of (14 Mpc)^3. Of course, this unrealistically large value of σ8 (to be compared with the concordance value σ8 ≈ 0.8) will shed little light on the timing of the amplification, since a large value of σ8 causes the formation of clusters already at high redshifts. Using this idealised setup, we simulated the evolution of the ICM employing grids from 64^3 to 1024^3 cells/DM particles, corresponding to a comoving spatial resolution from 220 kpc to 13 kpc. A list of our cluster runs is given in Tab. 2.2. Figure 1 shows maps of temperature and magnetic fields for a slice through the centre of the cluster at z = 0, for all resolutions from 64^3 to 1024^3. While the temperature distribution of the cluster varies slightly across runs, the spatial distribution of the magnetic fields changes clearly with increasing resolution. Starting at a resolution of 27 kpc (512^3), the morphology of the magnetic field becomes increasingly more tangled on scales smaller than the cluster core radius, and clumps of gas with B ≳ 0.1 µG start to appear throughout the virial volume. At our best resolution, the maximum Reynolds number within the virial volume (Eq. 1, expressed in terms of the outer scale and the cell size) is set by Rv = 1.5 Mpc, the virial radius at z = 0 (e.g. Vazza et al. 2011a), and ∆x, our (comoving) spatial resolution (13 kpc in the most resolved run). According to simulations of forced turbulence in a box (Schekochihin et al. 2004; Cho et al. 2009; Jones et al. 2011), this is large enough to start a small-scale dynamo. This value likely represents an overestimate of the real Reynolds number in the flow, because the cluster's virial radius was smaller in the past, and because the driving of the turbulence by sub-clusters preferentially occurs on scales smaller than the current virial radius (Vazza et al.
2012), thereby limiting the outer scale of turbulence. At all resolutions, the radial profile of the magnetic field at z = 0 (Fig. 2) shows the build-up of the magnetic field in the centre. The growth of the field proceeds faster with increasing resolution in the innermost regions. Inside the virial volume, the average profile of the magnetic field does not vary much with resolution in the range between 27 kpc and 13 kpc, suggesting that we are not far from convergence. The maximum field we observe in the centre is ∼ 0.7 µG, corresponding to a maximum amplification factor of ∼ 5 · 10^7 for the magnetic energy and ∼ 7000 for the magnetic field. Beyond the virial radius the simulation does not seem to be fully converged. At distances of ∼ 1 Rv from the cluster centre, the average field varies from ∼ 0.02 − 0.04 µG at low resolution to ∼ 0.1 µG at high resolution. For a cluster of this mass and central temperature (∼ 3 · 10^7 K), the resulting plasma beta is of the order of β ∼ 100 in the innermost cluster regions (where β = n k_B T / P_B, n is the gas density and P_B is the magnetic pressure). This matches observations for real galaxy clusters (Murgia et al. 2004; Bonafede et al. 2010). Figure 3 shows the comoving kinetic energy per unit mass (top lines) and magnetic field spectra (lower lines) for all resolutions at z = 0. All spectra were computed in a (7 Mpc)^3 cubic box centred on the cluster, using an FFT algorithm and assuming periodic boundary conditions. In order to compare our spectra to standard "turbulence in a box" simulations (e.g. Haugen et al. 2003; Schekochihin et al. 2004; Cho et al. 2009; Kritsuk et al. 2011), we assumed ρ = 1 for the gas, which removes the effect of density fluctuations on the kinetic energy spectra. The specific kinetic energy spectra are very similar at all resolutions, with a power law slightly steeper than the Kolmogorov slope across more than two orders of magnitude in scale. This is in agreement with previous numerical results (e.g. Vazza et al. 2009, 2011a, 2012; Gaspari & Churazov 2013). The magnetic field spectra, however, show the clear build-up of the small-scale magnetic field as soon as the spatial resolution is sufficiently fine. From k ≳ 4 the magnetic spectra get shallower as resolution is increased, and in the range 10 ≲ k ≲ 100 a significant pile-up of magnetic energy occurs for resolutions better than 256^3 (i.e. ∆x < 55 kpc). The observed small-scale spectra are qualitatively similar to previous results by Xu et al. (2009), even if their seeding model for the magnetic field differs from the one we adopted. No developed power-law spectrum is observed for the magnetic field, but rather a peak that moves towards larger scales as resolution is increased, similar to Haugen et al. (2003) and Cho et al. (2009), and at odds with what is usually assumed in Faraday Rotation models (Murgia et al. 2004; Bonafede et al. 2010, 2013). The peak in the magnetic energy is located at k ∼ 100 (∼ 50 kpc) in our highest resolution run. The build-up over time of the small-scale magnetic field is shown in Figure 4 for our 1024^3 run. The dependence on resolution is stronger for the magnetic field than for the velocity field, and the highest resolution run shows a final magnetic field energy which is a factor ∼ 10^3 larger than that of the lowest resolution run.
The small change of the final magnetic energy going from 640^3 to 1024^3 (where the total magnetic energy is actually slightly lower, an effect we ascribe to tiny variations in the non-linear evolution of the MHD structure within the volume) suggests that no further increase in the spatial resolution can produce a significant increase in the magnetic field amplification. Figure 5 shows the evolution of ∫E_v(k) dk and ∫E_B(k) dk for all runs, where we integrated the spectra only from k ≥ k_cl ≈ 4 in order to focus on the kinetic/magnetic energy fluctuations contained within the cluster volume (1/k_cl ∝ Rv, where Rv ∼ 1.5 Mpc at z = 0). The total comoving kinetic energy per unit mass decreases by one order of magnitude going from z = 10 to z = 0. This is an effect of the thermal dissipation of infall motions via shock heating and turbulent dissipation, and the increase of the small-scale kinetic energy as a function of resolution is only modest, i.e. a factor ∼ 3 by z = 0. On the other hand, the small-scale magnetic energy is increased by a factor ∼ 10^6 by the end of the run. Even in this case, the amplified field is far from equipartition with the velocity field at all scales, even if the difference at the smallest scales is small (E_B/E_v ∼ 0.1 − 0.3 for k ∼ 100), and in the fully saturated stage the peak of the small-scale magnetic energy is expected to drift to even smaller spatial scales (Bhat & Subramanian 2013). In summary, our tests confirm the start of small-scale turbulent amplification of magnetic fields at high resolution. The typical magnetic field strength reaches a maximum of ∼ 0.7 µG in the cluster centre. Even if the exact level of the amplification might depend on numerical details and codes (e.g. Dolag et al. 1999; Brüggen et al. 2005; Xu et al. 2009; Bonafede et al. 2010; Collins et al. 2010), our results are in agreement with the basic scenario of turbulent amplification of primordial fields to explain the observed magnetisation of galaxy clusters.

Figure 3. Specific kinetic energy (top lines) and magnetic (lower lines) spectra for a volume of (7 Mpc)^3 centred on the cluster of Fig. 1, for all simulated resolutions. The spatial frequency k is in units of the box size and, for each run, goes from k = 1 (7 Mpc) to the Nyquist frequency of each spectrum (i.e. twice the grid resolution of each run).

Magnetic field amplification in filaments In a separate set of MHD runs, we investigated the amplification of the primordial magnetic field in a cosmic filament. Here we used the canonical value of σ8 = 0.8 and started from a larger cosmological volume, (75 Mpc)^3, in which we selected a massive, ∼ 15 Mpc long filament, which may be regarded as representative (as shown in the large-scale view of Fig. 6). As before, we initialised the primordial field at z = 30 as a uniform field with strength B0 = 10^−10 G (comoving). Figure 7 shows the final magnetic field in a central slice through all our filament runs at z = 0. The magnetic field is highest close to the major axis of the filament, and its maximum observed strength is only of a few ∼ nG. Fig. 8 shows slices of gas temperature, velocity and magnetic field through the centre of the filament along its length, taken at different epochs. The filament is already in place at z = 1 and connects two ∼ 10^14 M⊙ clusters (which are located outside of the adaptive mesh refinement, AMR, region).
Its peripheral regions feature strong (M ∼ 10 − 100) accretion shocks along its extension, where the accreted smooth gas (which mostly falls into it perpendicular to the accretion region) is shock-heated to a few ∼ 10^6 K. Downstream of accretion shocks inside the filament, most of the gas flow is supersonic, as the sound speed at T ∼ 10^6 K is cs ≈ 100 km/s, lower than the measured velocities (which are ∼ 100 − 300 km/s). The magnetic field increases from ∼ 3 · 10^−11 G to a few ∼ 10^−9 G downstream of the shocks. After this first boost, there is little further amplification within the filament, and even the most magnetised patches hardly reach ∼ 10^−8 G. We have highlighted some of these patches in Fig. 9, where we compared the velocity field and the magnetic field strength within a slice through the filament. Although there is no one-to-one correlation between velocity field and magnetic field, the observed trend suggests that further amplification within the filament occurs in the proximity of shocks or regions where gas flows collide. However, there is little evidence of eddies with strong curling motions. This is quite different from clusters at comparable resolution. If we rescale the number of cells by the width of the filament, our most resolved run here is comparable to our 1024^3 cluster run in terms of the maximum Reynolds number in the flow.

Figure 5. Evolution of the total kinetic energy per unit mass inside the cluster volume of Fig. 1 (top lines) and of the total magnetic energy for the same volume, as a function of resolution. The energies are given in [(cm/s)^2].

In order to test convergence, we re-simulated the same initial conditions with four different resolutions on a fixed grid (from 64^3 to 512^3 cells/DM particles). The filament we have chosen is roughly oriented along the z-axis of the grid (Fig. 6), which also enabled us to perform additional AMR runs by restricting the region of active refinement to a narrow rectangular selection within the root grid volume. We have thus re-simulated the region with up to two more levels of refinement (reaching a maximum resolution of 36 kpc). By the end of its evolution, the filament reaches a transverse size of up to ∼ 4 Mpc, corresponding to ∼ 200 cells in our AMR runs with three levels. In the AMR runs, we let ENZO refine the cell size by a factor of two wherever the local gas density exceeded the density at level l − 1 by a factor ∆ = ρ_l/ρ_{l−1}, where we have set ∆ = 3. As previously remarked, the use of AMR may not be optimal for the study of magnetic fields, since refining on matter over-density alone can artificially suppress turbulence in regions that are relevant for dynamo amplification (e.g. Xu et al. 2009). For this reason, in a control run with one AMR level, we also enabled AMR wherever the velocity jump along any of the coordinate axes, ∆v = |v_{j+1} − v_{j−1}|/|v_j|, exceeded a fixed threshold, as in Vazza et al. (2009). The results are very similar to those obtained adopting the density refinement criterion only, since the turbulent velocity field within the filament is mostly supersonic (Ryu et al. 2008), and the density variations within it are large enough for our conservative choice of ∆ to trigger refinement in most of its interior.
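As an illustration of the velocity-based refinement criterion just quoted, the sketch below flags cells where the one-dimensional velocity jump exceeds a chosen threshold. The threshold value, the array layout and the treatment of boundaries are assumptions made for the example; ENZO's internal refinement machinery differs in detail:

```python
import numpy as np

def flag_velocity_jumps(v, threshold=1.0, axis=0):
    """Flag cells where |v_{j+1} - v_{j-1}| / |v_j| > threshold along one axis.

    v : 3D array of one velocity component on the current grid level.
    Returns a boolean array of the same shape (the two edge layers are left unflagged).
    """
    vp = np.roll(v, -1, axis=axis)            # v_{j+1}
    vm = np.roll(v, +1, axis=axis)            # v_{j-1}
    denom = np.maximum(np.abs(v), 1e-30)      # avoid division by zero in (nearly) static cells
    flags = np.abs(vp - vm) / denom > threshold

    # exclude the two boundary layers along this axis, where np.roll wraps around
    sl = [slice(None)] * v.ndim
    for edge in (0, -1):
        sl[axis] = edge
        flags[tuple(sl)] = False
    return flags

# A cell would then be refined if it is flagged along any of the three axes, or by the
# gas over-density criterion with Delta = 3 described in the text above.
```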
It turns out that from redshift z = 1, roughly 20-25 percent of our AMR region is covered by cells at the highest resolution (≈ 6.55 · 10^6 cells), corresponding to ∼ 60 − 70 percent of the volume occupied by the filament within the AMR region itself. Incidentally, the same choice would not work for galaxy clusters, where the density varies more gently and the turbulence is subsonic (Xu et al. 2009; Vazza et al. 2009). The parameters for this set of simulations are summarised in Tab. 3.1. The velocity spectra of our run with the highest resolution display a similar evolution to that in the cluster (Fig. 10), and present a well-defined power law (compatible with ∝ k^−2) for nearly two decades in scale. The magnetic spectra again do not show a clear power-law behaviour, and show small-scale bumps which evolve with time. However, the build-up of the small-scale magnetic structure is much less significant than in the ICM. Moreover, the trend does not increase over time but reaches its maximum around z ∼ 1 (green lines), while the small-scale power is ∼ 1 − 2 orders of magnitude smaller at z = 0. Overall, the magnetic spectra seem to evolve much faster towards their maximum compared to the case of the ICM, but since z ∼ 1 they do not show significant evolution on most scales. The maximum in the magnetic field spectra on small scales (k ≳ 80, corresponding to ∼ 200 kpc) is matched by an excess of velocity power at the same scales. This time corresponds to the epoch in which the filamentary region that connects the two forming clusters assembles most of its mass, and when shock heating raises the WHIM's temperature to ≳ 10^6 K (Fig. 8). At this time, gas flows into the filament from opposite sides at large velocities and compresses the magnetic field. Still, the plasma beta in the filament is of the order of β ∼ 10^5 − 10^6. The dependence on resolution of the power spectra (Fig. 11) is similar to that of clusters (Fig. 3), and runs with higher resolution show the build-up of small-scale magnetic fields, even if less evident than in the ICM. For comparison, at k = 10 the magnetic power spectrum increases by more than a factor of ∼ 100 in the ICM run when the resolution is increased by a factor of 8, while the increase is less than a factor ∼ 10 in the filament. The integrated velocity and magnetic spectra as a function of time are given in Fig. 12. Again, we restricted the integration to scales below the mean diameter of the filament at z = 0 (≲ 4 Mpc), to focus only on velocity and magnetic field fluctuations that are roughly contained within the filament. The continuous accretion of matter onto the filament causes the growth of both quantities: over the whole evolution the specific kinetic energy has increased by ∼ 2 orders of magnitude, while the magnetic energy has increased by ∼ 3 orders of magnitude in our best resolved runs, and ∼ 2 orders of magnitude in our coarsest run. This is very different from our previous results for clusters (Fig. 5). While the increase of specific kinetic energy is similar, the increase of magnetic energy with resolution is much slower, indicating that convergence might be within reach. This suggests that, starting from a resolution of the order of 73 kpc (∼ 1/60 of the thickness of the filament) or better, the effects of compressive modes and shocks on the final magnetic field do not increase with resolution.
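Equation 1 is not reproduced in this excerpt; a common estimate for the effective Reynolds number of a grid simulation, which we assume here for illustration, scales the ratio of the outer scale L to the cell size ∆x with a Kolmogorov-type exponent, R_e ≈ (L/∆x)^{4/3}. Under this assumption one recovers the order of magnitude quoted for the best-resolved cluster run (R_e ∼ 1400 for L ≈ 2 Rv = 3 Mpc and ∆x = 13 kpc) and values of a few hundred for the filament runs:

```python
def effective_reynolds(outer_scale_kpc, dx_kpc, exponent=4.0 / 3.0):
    """Effective (numerical) Reynolds number, assuming R_e ~ (L / dx)^(4/3).

    The 4/3 exponent follows from Kolmogorov scaling if the dissipation scale is
    identified with the cell size; this is an assumed stand-in for Eq. 1.
    """
    return (outer_scale_kpc / dx_kpc) ** exponent

# Cluster run: outer scale ~ 2 R_v = 3 Mpc, best resolution 13 kpc  -> R_e of order 1.4e3
print(round(effective_reynolds(3000.0, 13.0)))
# Filament: outer scale ~ 4 Mpc (mean diameter), resolutions of 73 and 36 kpc
print(round(effective_reynolds(4000.0, 73.0)), round(effective_reynolds(4000.0, 36.0)))
```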
We conclude that, despite the large dynamical range of scales of our AMR runs (corresponding to a Reynolds number of ∼ 210 in our most resolved case, assuming that the outer scale is the average diameter of the filament, ∼ 4 Mpc), in our simulated filament we do not observe a significant small-scale dynamo. Moreover, the trend with resolution of the spectra and of the integrated quantities indicates that the lack of efficient amplification is robust against further increases in resolution, thereby limiting the maximum amplification factor to ∼ 100 for the magnetic energy in the WHIM for this object. In Sec. 4, we will discuss this further.

Larger cosmological runs Given the limited statistical significance of results obtained with single objects, we proceed to large-scale unigrid cosmological simulations comprising hundreds of clusters and filaments. As before, we initialised the magnetic field with a uniform B0 = 10^−10 G at z = 30 and employed the Dedner scheme of the MHD version of ENZO (Wang et al. 2010). First we present our largest run: a (50 Mpc)^3 volume simulated with 2400^3 cells and DM particles (resolution 20.8 kpc), which, as far as we know, is the largest MHD cosmological simulation to date. The simulation used ∼ 4.5 million core hours running on 512 nodes (2048 cores in total) on Piz Daint. The resolution was chosen such that a cell size of at least ∼ 20 kpc could be achieved, in order to obtain sufficient amplification in 10^14 M⊙ halos. The simulation box is large enough to contain massive galaxy clusters with the concordance value of σ8. Figure 13 shows the projected (mass-weighted) magnetic field strength at z = 0 across the whole volume. In regions of large over-density, the magnetic field is amplified beyond the effect of compression by twisting motions driven by accretion and mergers. Twisted magnetic field structures are found only close to the centre of halos or in the proximity of the main axis of filaments. The maximum field attained in filaments hardly reaches ∼ 10 nG, while in the most massive halos the maximum magnetic field is of the order of ∼ 0.05 − 0.1 µG at most. The average B(n) (Fig. 14) shows that, for the largest part, the magnetic field scales as B ∝ n^{2/3} with little scatter. At high densities, the large number of small halos dominates the average and causes a flattening of the relation because of the small number of massive hot clusters. For this reason we also plot (red lines) the average relation obtained only using cells with T ≳ 10^7 K, which highlights the ICM. There, the upper envelope of the average reaches ∼ 0.1 µG at densities typical of cluster centres, which is an effect of the small-scale dynamo (Sec. 3.1). However, for the concordance value of σ8 = 0.8 the high-mass clusters form late in time compared to the cluster previously simulated, and hence the amplification is less efficient by the end of the run (z = 0). The rather small final mass/size of the clusters formed in the (50 Mpc)^3 volume does not allow us to probe large Reynolds numbers for most of the simulated objects. Our high temperature threshold selects objects with virial masses above ∼ 5 · 10^13 − 10^14 M⊙, i.e. with virial radii around ∼ 1 Mpc. In this case the virial volume is sampled with at least 150^3 cells at z = 0, and the numerical Reynolds number of the flow is Re ∼ 500 (Eq. 1). This fulfils the criterion proposed by Federrath et al. (2011b) and Latif et al.
(2013), according to which a minimum of 128^3 cells per Jeans length is necessary to obtain dynamo effects in primordial halos. At the over-density typical of filaments, n/⟨n⟩ ∼ 1 − 10, the average magnetic field is ≲ 10 nG, as found in our previous filament runs.

Additional magnetic field seeding by galaxies Finally, we investigate the possible role of additional magnetic field seeding from galaxies crossing the filament. In a simulation box of (25 Mpc)^3 sampled by a 1200^3 mesh, we tested the effect of releasing additional magnetic fields as small magnetic loops injected at the estimated locations of forming galaxies. The location of each presumed galaxy was assigned based on a (comoving) gas over-density larger than 500 times the critical gas density, and at the centre of each over-dense region we injected a magnetic loop (3^2 cells across) with a total magnetic field strength corresponding to β = 100 at the location of each galaxy. For the sake of simplicity, we enabled the seeding from galaxies only once, at z = 2, and compared the results at z = 0 to the model with purely primordial seeding. This model can only test the efficiency of magnetisation of filaments and galaxy clusters through stripping and mixing of gas from magnetised halos in the course of their motions inside large-scale structures. Note that our seeding model does not include the additional effect of gas outflows driven by winds and AGN. The additional seeding magnetises the high-density ICM, leading to field strengths of up to ∼ 0.1 − 1 µG at the centre of the most massive halos at z = 0. The distribution function of magnetic energy and the average B(n) (Fig. 15) show that the galactic seeding has the greatest effect in halos (n/⟨n⟩ ≳ 100). There, the final magnetic field is ∼ 10 − 30 times larger, reaching ∼ 0.3 µG even in the low-mass halos formed in this smaller box. However, the effect outside of these halos is quite limited, since only the B ≳ 10 nG part of the distribution is significantly affected by the additional seeding from galaxies (bottom panel). In summary, while more complex time-dependent models of magnetic seeding from high-redshift galaxies are required, our results do not show significant large-scale magnetisation by the simple advection and stripping of magnetised galaxies. The inclusion of fast (or continuous) magnetised outflows driven by galactic activity might yield different results. SPH simulations by Donnert et al. (2009) have shown that the magnetisation of the cosmic web outside of halos in galactic seeding scenarios is very model-dependent.

DISCUSSION We have investigated the amplification of primordial magnetic fields as a function of spatial resolution. Our results can be summarised as follows: • Magnetic field amplification in the ICM: We have simulated the small-scale dynamo in a galaxy cluster with uniform grids of increasing resolution (from 220 to 13 kpc). At resolutions with cell sizes below ∼ 26 kpc we observe the emergence of small-scale power in the magnetic energy spectra. The amplification seems to have reached convergence at the maximum resolution of 13 kpc (i.e. ∼ 1/100 of the cluster virial radius at z = 0), at least inside the virial region. The magnetic fields reach ∼ 0.4 µG in the cluster core, corresponding to ∼ 1/100 of the thermal energy of the cluster within the same volume.
Although our setup is rather artificial (due to the use of an artificially large value of σ8 in order to enable the growth of a massive cluster inside a small cosmic volume), the results are in agreement with previous results (Dolag et al. 1999; Brüggen et al. 2005; Dubois & Teyssier 2008; Donnert et al. 2009; Collins et al. 2010). • Magnetic field amplification in cosmic filaments: In filaments, the maximum amplification factor for the magnetic energy is of the order of ∼ 100, and the maximum field strength, close to the axis of the filament, hardly reaches ∼ 0.01 µG. The corresponding magnetic energy is only ∼ 10^−5 of the gas kinetic energy, smaller than what is found in driven turbulence simulations (e.g. Federrath et al. 2011a). The physical reason for this is discussed in the next Section. These results seem to be independent of resolution and apply up to the largest Reynolds number we could probe here, Re ≈ 200. The independence of resolution stems from the fact that the ratio of kinetic energy in compressive and solenoidal modes within the filament does not change significantly with resolution, and compressive forcing leads only to inefficient magnetic field amplification. • Amplification as a function of environment: Inside halos where the virial volume is sampled with enough resolution elements (≳ 150^3 cells inside the virial volume) we find some dynamo amplification, as suggested by Federrath et al. (2011b) and Latif et al. (2013).

Figure 15. Same as Fig. 14, but for two re-simulations of a (25 Mpc)^3 volume at z = 0, with a cosmological weak magnetic field initialised at z = 30 (black) or with the additional release of magnetic loops from "galaxies" in the volume at z = 2 (red). The additional grey line shows the expected result for pure compression. Bottom panel: energy-weighted distribution of magnetic fields for the same runs.

The additional release of stronger magnetic fields from the high-density peaks of halos (here assumed to take place only once, at z = 2) does not affect the magnetic fields in filaments at z = 0. However, it does increase the magnetisation of the ICM at z = 0, due to stripping and further mixing of the additional magnetic field in the turbulent ICM.

What is the difference between the small-scale dynamo in clusters and filaments? Figure 16 summarises our results for the amplification of the magnetic field in the ICM and in the WHIM, as measured in our cluster and filament runs. The plots show the amplification of magnetic energy and of the mean magnetic field strength (averaged inside the cluster and filament volume) at z = 0, where we assigned a fiducial maximum Reynolds number to both systems from Eq. 1. The Reynolds numbers in the filament are smaller, but the observed dependence on resolution suggests that there would be no efficient dynamo even for fairly large numerical Reynolds numbers (∼ 200). The magnetic fields in the ICM can be understood from simulations (Schekochihin et al. 2004; Cho et al. 2009; Jones et al. 2011) with Pm = 1 (where Pm = ν/η is the magnetic Prandtl number, and ν, η are the physical viscosity and magnetic resistivity, respectively). They concluded that for a large enough Reynolds number an exponential growth of the field is observed, followed by a linear growth over timescales of several tens of dynamical times. During the exponential phase B(t) = B0 exp(Γt/τ), where B0 is the initial field strength, t is the time and τ is a characteristic time of the system (which can here be approximated as the sound crossing time). A fast dynamo occurs only when Γ ≫ 1.
In a Pm = 1 regime the relation between Γ and the Reynolds number is Γ ≈ R_e^{1/2}/X, where X is a numerical factor of order X ∼ 15 − 30, from which it follows that R_e ≳ 15^2 − 30^2 is required to enter the exponential phase. These results suggest that, even if the system is subject to continuous turbulent forcing at the largest scales, it takes several tens of crossing times for the system to reach a stationary magnetic energy of the order of ∼ 30 percent of the total kinetic energy. This is not far from what we observe, at least on the smallest spatial scales, in our simulated cluster at the highest resolution, owing to the fairly large Re ∼ 1400 there. This is not observed in the filament, even at our highest resolution, with no sign of dynamo action. Besides the smaller numerical Reynolds number in the filaments, there are additional reasons to believe that the amplification cannot be significantly larger than this, even in the case of a much larger Re. First, previous simulations by independent groups have shown that compressive forcing of turbulence is very inefficient in producing dynamo amplification, as most of the energy pumped into the system is quickly dissipated in shocks (Haugen & Brandenburg 2006; Federrath et al. 2011a; Jones et al. 2011) (see also our Appendix). In particular, Federrath et al. (2011a) have shown that the magnetic field dynamo driven by forced turbulence in a box exhibits a characteristic drop of the growth rate at the transition from subsonic to supersonic turbulent flow. Solenoidal turbulence drives more efficient dynamos, due to the higher level of vorticity generation and the stronger tangling of the magnetic field. Based on the different approach of solving the Kazantsev equation with the WKB (Wentzel, Kramers, and Brillouin) approximation, Schober et al. (2012) measured the growth rate of the magnetic field dynamo in different turbulence models. They showed that for highly compressible turbulence the critical Reynolds number for an efficient dynamo is larger than in the case of Kolmogorov turbulence (i.e. ∼ 2700 vs ∼ 100), and that the growth rate in the compressible case has a shallower dependence on the Reynolds number (i.e. Γ ∝ R_e^{1/3} for Burgers turbulence and ∝ R_e^{1/2} for Kolmogorov turbulence). Finally, using a Fokker-Planck approach to compute the growth of the magnetic field dynamo in the non-linear regime, it has recently been shown that the characteristic length scale of the magnetic field grows faster in Burgers than in Kolmogorov turbulence. This confirms that in the presence of compressive forcing, dynamo amplification is much less efficient than in the solenoidal forcing case. This is even more apparent in filaments, because strong advection motions along the spine of filaments continuously move turbulent eddies away from the region of colliding flows, suppressing the small-scale dynamo more severely than in the case of stationary forcing of solenoidal turbulence in a box (e.g. Federrath et al. 2011a). Here, we have analysed how the modes of the velocity field evolve with spatial resolution in both cases. To this end, we decomposed the velocity field using the Hodge-Helmholtz projection in Fourier space (e.g. Kritsuk et al. 2011), and computed the kinetic energy in the compressive and in the solenoidal modes. In both cases, we selected a region at z = 0 not affected by infall motions outside of accretion shocks.
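A minimal sketch of the Fourier-space Hodge-Helmholtz split used here is given below. It assumes a periodic, uniformly gridded velocity field; the selection of the analysis region, any windowing, and density weighting are omitted:

```python
import numpy as np

def compressive_energy_fraction(vx, vy, vz):
    """Split a periodic velocity field into compressive and solenoidal parts in Fourier
    space and return the fraction of kinetic energy in the compressive (curl-free) modes."""
    shape = vx.shape
    k = [np.fft.fftfreq(n) for n in shape]
    kx, ky, kz = np.meshgrid(k[0], k[1], k[2], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                          # avoid 0/0; the k = 0 mode carries no divergence

    vxk, vyk, vzk = (np.fft.fftn(v) for v in (vx, vy, vz))
    # Longitudinal (compressive) projection: v_comp(k) = k (k . v(k)) / k^2
    kdotv = kx * vxk + ky * vyk + kz * vzk
    cxk, cyk, czk = kx * kdotv / k2, ky * kdotv / k2, kz * kdotv / k2

    def energy(*fields):
        return sum(np.sum(np.abs(f) ** 2) for f in fields)

    return energy(cxk, cyk, czk) / energy(vxk, vyk, vzk)

# Example: an uncorrelated random field gives a fraction close to 1/3, to be compared with
# the ~0.6 measured in the WHIM and the ~0.3 measured in the ICM reported below.
rng = np.random.default_rng(1)
v = rng.standard_normal((3, 32, 32, 32))
print(compressive_energy_fraction(*v))
```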
The panels in Figure 17 show our results: while in the ICM the budget of kinetic energy in compressive modes decreases with resolution, in the WHIM it does not change. At the best resolution, the energy in compressive modes is only ∼ 30 percent in the ICM and up to ∼ 60 percent in the WHIM. The fact that the kinetic energy in solenoidal motions is higher in galaxy clusters and smaller in the WHIM has already been established by cosmological numerical simulations (Ryu et al. 2008; Iapichino et al. 2011; Zhu et al. 2013; Miniati 2014). However, we find that this persists at high resolution and that in filaments ∼ 2/3 of the kinetic energy is in the form of supersonic compressive modes. This explains the lack of amplification in filaments, despite the increase in the numerical Reynolds number. Indeed, the maximum amplification of magnetic energy in subsonic solenoidal turbulence (as in the ICM) is expected to be ∼ 1 − 2 orders of magnitude higher than the maximum amplification reached in supersonic compressive turbulence (as in the WHIM) (Federrath et al. 2011a). Moreover, the growth rate in the first case is ∼ 5 − 10 times faster.

Physical and numerical limitations of the MHD picture Our resolution tests went down to a resolution of ∼ 20 kpc, even though the smallest collisional scales in the WHIM should be ∼ 100 − 10^3 kpc based on pure Coulomb interactions. Below this scale a kinetic modelling could be more appropriate (Weinberg 2013). However, if efficient scattering occurs between particles and magnetic perturbations induced by small-scale plasma instabilities, then the mean free path of particles decreases in a self-regulating process: if turbulence is stronger at the scale of injection, the mean free path of plasma particles is reduced and the range of scales over which the fluid behaves as collisional is increased (Schekochihin et al. 2005; Kunz et al. 2011; Brunetti & Lazarian 2011). Whether or not the same picture applies to the even more tenuous and weakly magnetised WHIM in filaments is presently uncertain. Regarding the MHD scheme, the Dedner hyperbolic cleaning scheme (Dedner et al. 2002) is a robust and widely used method in the literature, but it is prone to small-scale artefacts and artificial dissipation due to the ∇ · B cleaning waves necessary to limit the presence of magnetic monopoles. In the literature, this method has been compared to others, both for grid and SPH simulations (Dedner et al. 2002; Wang & Abel 2009; Mignone et al. 2010; Kritsuk et al. 2011; Stasyszyn et al. 2013; Pakmor et al. 2014), reporting good consistency. In particular, Kritsuk et al. (2011) have investigated in detail the performance of several MHD methods in the case of decaying supersonic turbulence in an isothermal box, including the Dedner scheme implemented in ENZO. They concluded that all codes agreed well on the kinetic and magnetic energy decay rates, but varied in the amplitude of the peak magnetic energy, as this was significantly dependent on the numerical dissipation of each method (which in turn determines the effective magnetic Reynolds number). They found that the use of explicit divergence cleaning reduces the magnetic spectral bandwidth relative to codes that preserve the condition on the magnetic field exactly, such as constrained transport (CT) methods. They concluded that codes that fall short in some of the investigated diagnostics (i.e.
dissipation of small-scale modes in the Dedner cleaning scheme) can still get to the correct physical answer, provided that they compensate the higher numerical dissipation with higher numerical resolution.

Comparison to previous work Our results for non-radiative runs seem to be in agreement with those obtained by Brüggen et al. (2005), Dubois & Teyssier (2008) and Collins et al. (2010), who also reported evidence of growth of magnetic fields in excess of simple compression, even if with lower efficiencies. Runs with radiative cooling readily obtain magnetic fields of the order of ∼ µG in the ICM, mainly as a result of overcooling (Dubois & Teyssier 2008; Collins et al. 2010; Ruszkowski et al. 2011). Despite some similarity in the magnetic spectra, it is difficult to relate our results to the ENZO-MHD simulations by Xu et al. (2009, 2011), since their seeding is very different from ours. There is disagreement, though, with the results of cosmological SPH simulations (Dolag et al. 1999; Gazzola et al. 2007; Dolag et al. 2008; Donnert et al. 2009; Dolag & Stasyszyn 2009; Bonafede et al. 2011; Beck et al. 2012; Stasyszyn et al. 2013; Beck et al. 2013), which typically reach much larger amplification factors for the magnetic energy already at high redshift (z ≳ 2). Understanding these differences is beyond the goal of this paper, and we can only speculate that the reason lies in the capability of SPH of refining the innermost regions of halos already at earlier times. However, the difficulty of correctly modelling small-scale velocity structures (and the connected magnetic-field amplification) in SPH might also be responsible for the difference (Bauer & Springel 2012; Price 2012), for which ad-hoc solutions are required (Dolag et al. 2005; Dolag & Stasyszyn 2009; Donnert et al. 2013; Stasyszyn et al. 2013). Few papers address the magnetic field amplification in filaments. Early MHD grid simulations predicted ∼ 10 − 100 nG fields in filaments. However, the total normalisation of the magnetic fields had to be scaled up in order to match the observations of the Coma cluster. Taking this into account and normalising by the assumed initial seed field, these simulations essentially showed only compressive amplification of magnetic fields in filaments, in line with what we also find at low resolution. Brüggen et al. (2005) instead applied AMR and a passive scheme in FLASH to monitor the amplification of magnetic fields also at the scale of filaments, and reported an average amplification factor of ∼ 10^3 − 10^4 for the magnetic energy of filaments, i.e. larger than what we found here.

Figure 17. Resolution-dependence of the ratio between compressive and total (compressive + solenoidal) kinetic energy in clusters and filaments.

This can be explained by the difference in the adopted MHD scheme, even if the spectra of magnetic fields did not show evidence for small-scale dynamo amplification and the topology of magnetic fields in filaments was found to be laminar. Smaller amplification factors for the magnetic energy, essentially in agreement with our results here, were found in the SPH simulations by Dolag et al. (2004) with a constrained realisation of the nearby (100 Mpc)^3 Universe. Finally, several of our results have already been explained by Ryu et al. (2008), who used a hybrid approach to rescale the magnetic field distribution obtained with a passive MHD solver coupled to a cosmological simulation.
In post-processing, they then estimated the saturated growth of magnetic fields based on the (unresolved) turbulent decay of vortical motions resolved in the simulation. The important difference in the modes of turbulent forcing in filaments and galaxy clusters, and its impact on the amplification of weak primordial fields, was already pointed out in their work, and our direct simulations with a larger spatial resolution confirmed their main results (Ryu et al. 2008). However, our simulations (see also our tests in the Appendix) have shown that the results of dynamo amplification in driven turbulence (especially in the isothermal case) cannot be trusted to exactly predict the maximum dynamo amplification in the WHIM. First, because of the major role played by shocks even in the filament interiors, which cannot be fully captured with isothermal computations, since this largely underestimates the role of the baroclinic generation of vorticity. And, second, because of the presence of strong longitudinal motions along the filament that prevent the continuous build-up of small-scale magnetic fields at any specific location within the filament, as instead observed at the centre of clusters. CONCLUSIONS We have studied the amplification of primordial magnetic fields via a small-scale turbulent dynamo using direct MHD numerical simulations with ENZO. In particular, we have investigated the amplification of magnetic fields in the ICM and in the WHIM of filaments. While in the ICM we confirm that turbulence from structure formation can produce significant dynamo amplification (even if the measured efficiency is smaller than what is reported in some papers), in filaments we do not observe significant dynamo amplification, even though we reached Reynolds numbers of Re ∼ 200. The maximum amplification for large filaments is of the order of ∼ 100 for the magnetic energy, mostly due to strong compression in supersonic flows, corresponding to a typical field of a few nG. This result is independent of resolution and follows from the inefficiency of supersonic motions in the WHIM in triggering solenoidal modes, while compressive modes are dominant in filaments at all investigated resolutions. Our results can serve as a guideline for the minimum resolution for the onset of small-scale dynamo in cosmological simulations. Our results for the ICM (Sec. 3.1) suggest that a dynamical range of at least L/∆x ∼ 210 (where L is the scale for the driving of turbulence and ∆x is the numerical resolution) is necessary to observe the build-up of small-scale magnetic fields in a dynamo process, as this would enable a flow with Re ≳ 500. Even if the bulk of turbulence injection in the ICM at late redshift happens through mergers and on scales of a fraction of the virial radius (Vazza et al. 2009, 2011a, 2012), the converging accretion flows within and the injection of vorticity at accretion shocks (Ryu et al. 2008; Miniati 2014) are likely to build up magnetic fields in the ICM on scales up to the order of the virial radius. Assuming L ≈ 2Rv, the above criterion suggests that Rv/∆x ≳ 100 is needed to have an efficient dynamo, i.e. in order to achieve a large enough Reynolds number for a small-scale dynamo, a cosmological simulation needs a spatial resolution of order ∼ 30 kpc for a 10^15 M⊙ halo (Rv ≈ 3 Mpc), of order ∼ 10 kpc for a 10^14 M⊙ halo, and of order ∼ 3 kpc for 10^13 M⊙.
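To make the last criterion easy to reuse, the sketch below turns the requirement Rv/∆x ≳ 100 into a rule of thumb for the coarsest admissible cell size as a function of halo mass. It assumes a simple virial-radius scaling Rv ∝ M^(1/3), normalised to Rv ≈ 3 Mpc at 10^15 M⊙; both the scaling and the normalisation are illustrative assumptions rather than values taken from the simulations, and the figures quoted above include additional rounding.

```python
# Illustrative only: coarsest cell size allowed by the Rv/dx >~ 100 criterion,
# assuming Rv ∝ M^(1/3) normalised to Rv ≈ 3 Mpc at M = 1e15 Msun.
def max_cell_size_kpc(mass_msun, rv_over_dx=100.0):
    rv_kpc = 3000.0 * (mass_msun / 1e15) ** (1.0 / 3.0)  # approximate virial radius
    return rv_kpc / rv_over_dx                           # coarsest admissible cell size

for mass in (1e15, 1e14, 1e13):
    print(f"M = {mass:.0e} Msun -> dx <~ {max_cell_size_kpc(mass):.0f} kpc")
# prints roughly 30, 14 and 6 kpc, i.e. the same order of magnitude as the
# ~30, ~10 and ~3 kpc figures quoted in the text
```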
The fact that clusters form late, combined with the fact that mergers typically inject energy at scales below Rv, likely makes the above estimate a lower limit on the required resolution, as shown in our larger cosmological run (Sec. 3.3). It is more difficult to draw firm conclusions in the case of filaments, as our runs do not clearly show a convergence on the dynamo process. Our results suggest that a dynamical range equal to or larger than L/∆x ∼ 60 (where L is the width of the filament) is necessary to approach convergence in the energy share of solenoidal and compressive motions (Fig. 17), which sets the nature of the turbulent forcing within the system. Despite the fact that the theoretical Reynolds number available to the flow is large, Re ≳ 10^2, in the presence of this dominant compressive forcing no clear evidence of a fast dynamo is detected, even in our highest resolution runs, where the total magnetic energy is only ∼ 10^−5 of the kinetic energy and carries memory of the initial magnetic field imposed at z = 30. The observational consequences of these results are important. First, the deflection of UHECRs by filaments in the cosmic web is expected to be fairly small, i.e. ≲ 1 degree, allowing the identification of extragalactic sources (Ryu et al. 2010). Secondly, the detection of synchrotron emission by electrons accelerated by shocks surrounding filaments will be very challenging, since diffusive shock acceleration requires a minimum magnetic field of ∼ 0.05 − 0.1 µG (Vazza et al., submitted). Within the present uncertainties about the magnetisation level of the WHIM, we suggest that any observation of large-scale fields in filaments in the radio band will contain valuable information about the strength of primordial magnetic fields (e.g. Neronov & Vovk 2010; Widrow et al. 2011). Since the growth of primordial magnetic fields in filaments should be dominated by simple compression and small-scale shocks, the dynamical memory of the system should persist over long cosmological times, and any observed magnetisation level should closely connect to the primordial magnetisation. This is different from galaxy clusters, where most of the magnetic energy is extracted from the kinetic energy budget, thereby quickly erasing previous dynamical information.
Figure 16. Amplification of magnetic energy as a function of numerical resolution (top) and Reynolds numbers (centre), for the clusters and filaments. Bottom panel: average magnetic field at z = 0, considering a uniform seed field of B = 10^−10 G. The Reynolds number of each run is computed as in Eq. 1, based on the typical size of the cluster (∼ 3 Mpc) and of the filament (∼ 4 Mpc).
Finally, we stress that our results by no means imply that the quest for higher resolution in the filamentary structures of the cosmic web is useless. Provided that MHD can still be applied there (Sec. 4), resolution can significantly impact the Faraday Rotation from the intergalactic medium (IGM), which the SKA might probe (e.g. Akahori et al. 2014). It also affects the synchrotron emission from the cosmic web (Brown 2011; Araya-Melo et al. 2012) because shock statistics change with resolution (Vazza et al. 2011b). The use of high resolution also allows detailed modelling of galaxy formation processes in filamentary environments, which is crucial to study the impact of magnetised outflows from galaxies (Xu et al. 2009; Donnert et al. 2009; Beck et al. 2013). As a further test of the MHD solver, we ran idealised "turbulence in a box" simulations in which turbulence is continuously driven at the largest available scale (which corresponds to the box size).
This is done with a specific module available in ENZO that generates random isotropic velocity fields with specified input spectra and absolute normalisation for the total velocity field (Wang et al. 2010). In our tests, we employed 512^3 boxes and drove M = 1.5 and M = 15 isotropic motions in a continuous way. Figure A1 shows the magnetic field strength at three different times for these two runs, at epochs ≈ 0.005 t_dyn, ≈ 1.5 t_dyn and ≈ 3 t_dyn, where the dynamical time is defined as t_dyn = L_box/V_drive (L_box is the box size and V_drive = M c_s is the rms velocity at the forcing scale). The evolution of the kinetic and magnetic spectra until 4 t_dyn for the two cases is given in Fig. A2, and highlights the significantly different evolution of the magnetic field structure in the two regimes. In the M = 15 case, after a very tiny fraction (10^−2) of the dynamical time we see the emergence of magnetic energy on very small scales, as an effect of shocks that are formed very early inside the box due to strong supersonic motions. The small-scale magnetic energy increases over time, without significantly changing the location of the peak of magnetic energy, and after ∼ 4 t_dyn we observe the hint of equipartition with kinetic energy on the smallest scales. This case is close to the case of the WHIM in cosmic filaments, due to the involved supersonic flow, even if the multiple collisions of oblique shocks are more efficient in driving solenoidal motions in the medium (mostly through baroclinic generation of vorticity and, at curved shocks, through Crocco's theorem, e.g. Jones et al. 2011), which reach roughly a ∼ 50 percent budget of the total kinetic energy at the end of the run, i.e. much more than in our simulated filament. Moreover, the forcing to which the magnetic eddies are subjected is constant in time, while in the case of filaments (Sec. 3.2) strong advection motions longitudinal to the major axis of the filament tend to continuously replace magnetic eddies at a given Eulerian location, thereby reducing their growth rate. Conversely, the M = 1.5 case is closer to the case of the simulated ICM, given the transonic forcing regime and the enhanced presence of solenoidal motions by the end of the run (∼ 60 percent of the total kinetic energy, i.e. similar to our high-resolution ICM runs). In this case we observe in the spectra a slower build-up of small-scale magnetic energy, and the progressive increase of the total velocity spectrum over time. In this transonic forcing regime the thermalisation of kinetic energy at shocks is obviously greatly reduced, and a more volume-filling and tangled velocity field can build up over time. Roughly after one dynamical time, we observe the formation of a well-defined peak in the magnetic spectrum, which progressively moves to larger spatial scales and becomes of the same order as the kinetic energy at the smallest scales, as predicted for an efficient small-scale dynamo (Schekochihin et al. 2004; Cho et al. 2009). Both simulations confirm the possibility of simulating small-scale dynamo amplification with the ENZO-MHD version we adopted to obtain our results in the main paper, and suggest that to get to more quantitative answers in the case of the ICM and of the WHIM one must resort to proper 3D cosmological simulations, in order to have the large-scale dynamics properly taken into account.
Figure A1.
Maps of magnetic field strength (arbitrary units) for a central slice in our driven turbulence tests with 512^3, for the M = 15 forcing (left) and the M = 1.5 forcing (right) case, at the epochs of ≈ 0.005 t_dyn, ≈ 1.5 t_dyn and ≈ 3 t_dyn, where t_dyn is the dynamical time. Figure A2. Velocity power spectra (solid lines) and magnetic power spectra (dot-dashed lines) for two 512^3 "turbulence in a box" runs, assuming a constant forcing of M = 15 (left) and of M = 1.5 (right). All spectra are computed within a 256^3 sub-volume contained in the two boxes. The time evolution samples ∼ 4 t_dyn with roughly constant time spacing.
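The solenoidal versus compressive energy budgets quoted throughout this section (and in Fig. 17) follow from a Helmholtz decomposition of the velocity field, and the spectra in Fig. A2 rely on the same Fourier machinery. The following is a minimal sketch of how the compressive fraction of the volume-weighted kinetic energy can be computed on a periodic uniform grid with NumPy; it illustrates the projection only, and is not the exact analysis pipeline used for the figures.

```python
import numpy as np

def compressive_fraction(vx, vy, vz):
    """Fraction of volume-weighted kinetic energy in compressive (curl-free) modes,
    from a Fourier-space Helmholtz decomposition on a periodic uniform grid."""
    # remove the mean (bulk) flow, which belongs to neither component
    vx, vy, vz = (v - v.mean() for v in (vx, vy, vz))
    kx, ky, kz = np.meshgrid(*[np.fft.fftfreq(n) for n in vx.shape], indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0  # avoid division by zero at k = 0
    Vx, Vy, Vz = np.fft.fftn(vx), np.fft.fftn(vy), np.fft.fftn(vz)
    kdotv = kx * Vx + ky * Vy + kz * Vz
    # compressive component: projection of v(k) onto the direction of k
    Cx, Cy, Cz = kdotv * kx / k2, kdotv * ky / k2, kdotv * kz / k2
    e_comp = np.sum(np.abs(Cx)**2 + np.abs(Cy)**2 + np.abs(Cz)**2)
    e_tot = np.sum(np.abs(Vx)**2 + np.abs(Vy)**2 + np.abs(Vz)**2)
    return float(e_comp / e_tot)   # the solenoidal fraction is one minus this value
```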
2014-10-23T14:14:06.000Z
2014-09-09T00:00:00.000
{ "year": 2014, "sha1": "6a5516991b38c8ac1db8fda59b5c6ef1083ac60b", "oa_license": null, "oa_url": "https://academic.oup.com/mnras/article-pdf/445/4/3706/6077504/stu1896.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "6a5516991b38c8ac1db8fda59b5c6ef1083ac60b", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
219535364
pes2o/s2orc
v3-fos-license
Asymmetric phospholipids impart novel biophysical properties to lipid bilayers allowing environmental adaptation Phospholipids are a diverse group of biomolecules consisting of a hydrophilic head group and two hydrophobic acyl tails. The nature of the head group and the length and saturation of the acyl tails are important for defining the biophysical properties of lipid bilayers. It has recently been shown that the membranes of certain yeast species contain high levels of unusual asymmetric phospholipids, consisting of one long and one medium chain acyl moiety – a configuration not common in mammalian cells or other well-studied model yeast species. This raises the possibility that structurally asymmetric phospholipids impart novel biophysical properties to the yeast membranes. Here, we use atomistic molecular dynamics (MD) simulations and environmentally-sensitive fluorescent membrane probes to characterize key biophysical parameters of membranes formed from asymmetric lipids for the first time. Interestingly, we show that saturated, but asymmetric, phospholipids maintain membrane lipid order across a wider range of temperatures and do not require acyl tail unsaturation or sterols to maintain their properties. This may allow cells to maintain membrane fluidity even in environments which lack the oxygen required for the synthesis of unsaturated lipids and sterols. Introduction Membranes are composed of a richly diverse population of lipids. The primary class is the glycerophospholipids, which are themselves varied in their head group and the length and saturation of their two acyl tails. While cells can and do incorporate fatty acids from their environment into membrane phospholipids, they also synthesize and incorporate de novo fatty acids, expending energy and resources in the process and implying that compositional diversity is functionally relevant (Harayama and Riezman, 2018). The lipid composition of a bilayer profoundly influences the biophysical characteristics of that membrane, and this is particularly true when considering the length and saturation of the lipid acyl tails. The bilayer biophysical properties in turn influence membrane function - the diffusion and interactions of membrane proteins for example, or the curvature and bending rigidity during membrane remodeling. One of the key biophysical properties influencing function is membrane lipid order, closely related to the phase behavior of the bilayer. In general, bilayers composed of glycerophospholipids can adopt one of two phases - a solid, or gel, phase at low temperatures and a liquid-disordered phase at high temperatures (Heberle and Feigenson, 2011). Such a phase transition would be detrimental for an organism living at variable temperatures and therefore cells incorporate sterols into the bilayer to modulate the phase behavior, allowing them to generate a new phase - the liquid-ordered phase - and maintain that phase over physiological temperature ranges. In general, the liquid-ordered phase is enriched in saturated phospholipids and sterols, whereas unsaturated phospholipids form lower-order domains. The maintenance of membrane biophysical properties by cells using a combination of saturated lipids, unsaturated lipids and sterols, however, comes at a cost - unsaturated lipids and sterols require oxygen for their synthesis (Kwast et al., 1998; Shanklin and Cahoon, 1998).
In general, due to the characteristics of the lipid metabolic pathways responsible for their synthesis, the two acyl tails of each lipid are often similar or identical in length (Jenni et al., 2007; Lynen et al., 1980; Maier, 2006). For example, two of the most common lipids in mammalian cell membranes are di-oleoyl-phosphatidylcholine (DOPC) and di-palmitoyl-phosphatidylcholine (DPPC). It has recently been discovered that the fission yeast species Schizosaccharomyces japonicus produces high quantities of structurally asymmetric phospholipids not common in mammals or even in closely related yeast species such as the well-known model organism Schizosaccharomyces pombe. Curiously, S. japonicus shows several interesting environmental adaptations such as the ability to survive and proliferate at temperatures above 40 °C and in hypoxic environments (Kaino et al., 2018). Moreover, nuclear envelope remodeling during mitosis diverged between the two species, with S. japonicus undergoing a semi-open mode and S. pombe executing closed mitosis (Makarova et al., 2016). This suggests that these membrane-associated processes might be related to their distinct membrane composition and the resulting bilayer properties. It is not currently known what biophysical properties membranes composed of asymmetric lipids will display, but such an understanding is crucial to the future elucidation of their physiological role. Therefore, we investigate the biophysical properties of artificial lipid bilayers composed of these asymmetric phospholipids for the first time, using a combination of advanced fluorescence microscopy and atomistic-scale molecular dynamics (MD) simulations. In particular, we form giant unilamellar vesicles from 1-stearoyl-2-decyl-sn-glycero-3-phosphatidylcholine (SDPC), which contains one saturated tail of 18 carbon length and one saturated tail of 10 carbon length. Vesicles are formed in the presence or absence of 30% ergosterol - the primary sterol found in fungi - and stained with the environmentally sensitive probe di-4-ANEPPDHQ which reports on membrane lipid order through changes in its fluorescence emission (Owen et al., 2012). Combined with insights from MD, our data shows that membranes formed from asymmetric lipids can maintain bilayer fluidity over a wide range of temperatures and, crucially, without the requirement for acyl tail double bonds or membrane sterols. We hypothesise that this might be a novel mechanism for the temperature and hypoxic tolerance of S. japonicus and we further propose that bilayers composed of asymmetric lipids might have important medical and industrial applications due to the novel biophysical properties the asymmetric lipids impart. GUV preparation GUV preparation was performed by electroformation as previously described (Morales-Penningston et al., 2010). Briefly, a lipid film was formed on ITO-coated glass slides from 1 mg/ml of either a single lipid solution in chloroform or a phospholipid mixture with 30% (mol/mol) ergosterol. Asymmetric 1-stearoyl-2-decyl-sn-glycero-3-phosphatidylcholine (SDPC) was obtained through customized synthesis by Avanti Polar Lipids. After the lipid films were dried, a 200 mM sucrose solution was used to form GUVs under the following conditions: 50°C, 11 Hz, 1 V alternating current. Microscopy Prepared GUVs were incubated with 5 mM di-4-ANEPPDHQ and transferred to a glass-bottomed microscope dish.
Imaging was performed in temperature-controlled conditions in the environmental chamber of a Zeiss LSM 780 inverted confocal microscope equipped with a 32-element GaAsP Quasar detector. The following parameters were used: a 488 nm laser was selected for fluorescence excitation of di-4-ANEPPDHQ and emission was detected for the ordered channel (500-601 nm) and the disordered channel (640-700 nm). Image analysis GUV images obtained at different temperatures were analyzed using ImageJ and a custom-written macro (Owen et al., 2012). Equatorial images of individual GUVs were selected for analysis, a generalized polarization (GP) value was computed for each pixel as GP = (I_ordered − I_disordered)/(I_ordered + I_disordered) using the two emission channels defined above, and a mean GP value was then calculated for each GUV. For each temperature point at least 20 individual GUVs of various radii (between 1 and 30 µm) were analyzed. Molecular Dynamics Simulations Six different lipid bilayers were simulated in order to investigate the effect of lipid molecules with asymmetric tails on the properties of the bilayers: pure DOPC, pure DSPC, pure SDPC, and mixed membranes containing each of these three lipids with 30% ergosterol. Each membrane was built with 200 total lipids (PC or PC & ergosterol) in each leaflet using the CHARMM-GUI membrane builder (Jo et al., 2008, 2009). Each system also included 30 water molecules per lipid molecule as well as 0.15 M NaCl. Each membrane was minimised and then equilibrated to a temperature of 303.15 K and a pressure of 1 bar following the simulation protocol prescribed by CHARMM-GUI (Lee et al., 2016). After equilibrating each system, a production simulation was carried out for 200 ns at a temperature of 303.15 K and a pressure of 1 bar. The temperature was controlled by a Nosé-Hoover thermostat and a Parrinello-Rahman barostat was used to control the pressure. All simulations were run using the GROMACS simulation package (van der Spoel and Hess, 2011) and the CHARMM36 forcefield was used to model the interactions of the lipid molecules and the ions (Klauda et al., 2010), while the water molecules were modelled using CHARMM TIP3P (Impey and Klein, 2016). The model for the SDPC lipid was generated by truncating the sn-2 tail of the lipid model used to represent DSPC after the 10th carbon. LINCS constraints were used on the hydrogen-containing bonds in order to allow us to use 2.0 fs timesteps within the production simulations. Unless specified otherwise, the last 150 ns of production simulation was used for analysis. The bilayer properties were characterised by the area per lipid (APL), lipid order parameter (SCD), bilayer thickness and the interdigitation of the lipids within the bilayers. Unless otherwise noted, analysis scripts were written in Python with the use of MDAnalysis (Gowers et al., 2016; Michaud-Agrawal et al., 2011). A 2D Voronoi tessellation of atomic positions in each leaflet was performed to determine the APL of each component of the bilayer, using the C21, C2 and C31 atoms as seeds for the PC lipids and the O3 atoms for the ergosterol (atoms are given by CHARMM atom names). The lipid order parameter is a measure of the conformational flexibility of acyl chains in a bilayer, and is given by SCD = <(3 cos^2 θ − 1)/2>, where θ is the angle between the bilayer normal and the carbon-hydrogen vector of a carbon atom in an acyl tail, and the average is taken over time and over all molecules of a given species within the membrane. The SCD was calculated for each lipid species as a function of carbon atom position along an acyl chain. Smaller values of SCD indicate a more disordered acyl chain.
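To make the definition above concrete, a minimal sketch of the per-carbon order-parameter calculation is shown below. It operates on C-H bond vectors that have already been extracted from the trajectory (for example with MDAnalysis atom selections, pooled over frames and lipids for one carbon position); the function name and array layout are illustrative assumptions rather than the exact analysis script used here, and profiles are usually reported as |SCD|.

```python
import numpy as np

def s_cd(c_pos, h_pos, normal=(0.0, 0.0, 1.0)):
    """Deuterium order parameter SCD = <(3 cos^2(theta) - 1)/2> for one carbon
    position, given (N, 3) arrays of carbon and bonded-hydrogen coordinates
    pooled over frames and lipids; theta is the angle between each C-H bond
    vector and the bilayer normal."""
    n = np.asarray(normal, dtype=float)
    n /= np.linalg.norm(n)
    ch = h_pos - c_pos
    cos_theta = (ch @ n) / np.linalg.norm(ch, axis=1)
    return float(np.mean(1.5 * cos_theta**2 - 0.5))
```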
The bilayer thickness is calculated as the distance between the mean heights in z, along the bilayer normal, of the phosphate phosphorus atoms in the two leaflets. In order to investigate whether the sn-1 hydrocarbon tails of the PC lipid molecules in the various bilayers `snorkel' towards the water/bilayer interface, we measure the difference in the z-coordinates of the middle of the bilayer and the terminal carbon in each hydrocarbon tail of the lipid molecules. If the terminal carbon penetrates the opposite leaflet by more than 5 Å then we count it as interdigitating with the other leaflet. If the terminal carbon is greater than 5 Å away from the middle of the bilayer and still within the same leaflet as the headgroup of the lipid molecule then we count it as `snorkeling'. Otherwise we count it as being in the middle of the bilayer. Lateral diffusion was calculated using the gmx msd module from the GROMACS package. Using the Einstein relation, the diffusion coefficient (D) was evaluated from the mean square displacement (MSD), with MSD(t) = <|r(t0 + t) − r(t0)|^2> and D = lim_{t→∞} MSD(t)/(2d t), where d is the system dimension, r is the coordinate of the atom selection at a given time t from a time origin t0, and the average is taken over atoms and time origins. Membranes formed from asymmetric lipids display intermediate membrane lipid order, consistent over physiological temperatures. We first generated GUVs composed of asymmetric phosphatidylcholine (SDPC) which were stained with 5 mM di-4-ANEPPDHQ, imaged by confocal microscopy and the per-pixel GP values calculated. For comparison, GUVs were also formed from pure symmetric saturated and unsaturated phospholipids. Figure 1A shows representative images of these GUVs imaged at physiological temperature (37°C) and pseudocoloured by GP value. Quantification of such GUVs per condition (Figure 1B) showed, as expected, that GUVs formed from symmetric unsaturated and saturated lipids show large negative and positive GP values, respectively. Interestingly, while SDPC is also a saturated lipid, it shows intermediate GP values, significantly different from both symmetric cases (p < 0.0005). This gives the first indication that asymmetric lipids may represent a mechanism to maintain some membrane fluidity in the absence of acyl chain double bonds. We next examined the temperature dependence of these GP values. Both asymmetric SDPC and symmetric DOPC show relatively consistent levels of membrane order over the temperature range between 21°C and 45°C, whereas the symmetric saturated lipid bilayers undergo a phase transition at around 42°C (Figure 1C). Interestingly, GP values of asymmetric bilayers are similar to those of symmetric saturated lipids for temperatures above 42°C (Figure 1C). To further detail the effect of asymmetry on membrane order, we performed atomistic MD simulations and calculated the deuterium NMR lipid order parameter. The order parameter for carbons on the sn-1 chain shows that the long tail of SDPC broadly behaved as a saturated lipid, without the central dip at the double bond position that DOPC displays (Figure 1D, upper graph). However, the simulations show that order is higher near the SDPC headgroup, in the region co-occupied by the short sn-2 chain, and drops off to low levels deep in the bilayer. Therefore, the intermediate character of asymmetric lipids is the result of an order gradient from high-order interfacial regions to a low-order bilayer core. A similar trend is observed for the acyl tail at the sn-2 position (Figure 1D, lower graph). The intermediate character of the SDPC lipid bilayer is visually represented in snapshots of these simulations (Figure 1E).
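The leaflet bookkeeping described in the Methods above (interdigitating versus snorkeling versus bilayer-centre terminal carbons) amounts to a simple threshold rule on the signed distance of each terminal carbon from the bilayer midplane. A minimal sketch of that rule is given below; the sign convention (headgroup leaflet at positive z relative to the midplane) and the function name are illustrative assumptions, not the exact analysis script.

```python
def classify_terminal_carbon(z_term, z_mid, headgroup_above=True, cutoff=5.0):
    """Classify a terminal acyl-chain carbon following the 5 Angstrom rule in the
    Methods: 'interdigitating' if it penetrates the opposite leaflet by > 5 A,
    'snorkeling' if it sits > 5 A from the midplane on its own (headgroup) side,
    and 'middle' otherwise. Coordinates are in Angstrom."""
    # signed distance from the midplane, positive towards the lipid's own leaflet
    dz = (z_term - z_mid) if headgroup_above else (z_mid - z_term)
    if dz < -cutoff:
        return "interdigitating"
    if dz > cutoff:
        return "snorkeling"
    return "middle"
```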
Figure 1: Lipid order of asymmetric phospholipid bilayers. A) Representative GP images of asymmetric and symmetric lipid GUVs stained with di-4-ANEPPDHQ. Color bar designates GP values (membrane lipid order). Scale bar, 5 µm. B) Quantification of membrane lipid order showing the intermediate character of SDPC bilayers at 37°C. C) GP values over the range of 21°C-45°C. Note that the asymmetric lipids maintain membrane order over a wide range of physiological temperatures. D) Deuterium NMR order parameter for the sn-1 (upper graph) and sn-2 (lower graph) chains. Note for the sn-1 chain the intermediate character of SDPC, with high order at the interfacial region but low order in the bilayer core. E) Snapshots of atomistic MD simulations of the three bilayers. Ergosterol has less effect on bilayers composed of asymmetric lipids and is not required to maintain bilayer fluidity. Sterols constitute approximately 30% of mammalian and yeast membranes, depending on the species, organelle distribution and growth conditions. Sterols modulate the biophysical properties of cell membranes, maintaining membrane fluidity at physiological temperatures. We repeated the characterization of membrane lipid order using imaging and MD simulations for bilayers composed of asymmetric phospholipids, this time in the presence of 30% ergosterol, the dominant sterol found in fungi. Representative GP images of these GUVs stained with di-4-ANEPPDHQ are shown in Figure 2A. The quantification (Figure 2B) shows that, as expected, ergosterol modulates membrane lipid order, generating relatively uniform GP values for all three bilayers. Interestingly, when compared to the bilayers lacking ergosterol, those formed from asymmetric lipids were not substantially affected by the presence of 30% ergosterol, in contrast to their symmetric counterparts (Figure 2C). Again, the effect of asymmetric phospholipids on temperature dependence is profound, in particular when considering how membrane order changes when the temperature is varied upwards from 21°C (Figure 2D). Membranes composed of either saturated or unsaturated symmetric lipids show a marked decrease in membrane order over the temperature range 21°C to 45°C, and in particular likely undergo a phase transition to the liquid-disordered phase at around 40°C. Bilayers composed of SDPC and ergosterol, however, are better able to maintain lipid order, eventually becoming the most ordered of the three conditions at 45°C. Quantification of the deuterium NMR lipid order parameter for each carbon atom of the sn-1 acyl tail (Figure 2E, upper graph) shows the different effect of ergosterol on asymmetric compared to symmetric lipids. In the bilayer core, ergosterol has a minimal effect on SDPC lipid order, confirming the minimal changes in GP observed from the microscopy data. Again, a similar trend is observed for the acyl tail at the sn-2 position (Figure 2E, lower graph). Representative atomistic MD simulations of these bilayers are shown in Figure 2F. Figure 2: B) Quantification of membrane lipid order in bilayers containing 30% ergosterol. C) Comparison of bilayers containing 30% ergosterol relative to pure bilayers, showing the minimal effect of ergosterol on SDPC bilayers. D) Variation of GP values with temperature over the range 21°C-45°C, showing how SDPC is best able to maintain membrane fluidity. E) Deuterium NMR order parameter for the sn-1 chain (upper graph) and sn-2 (lower graph) for each bilayer. F) Snapshots of atomistic MD simulations of the three bilayers.
Asymmetric lipids impart bilayers with novel biophysical properties, maintaining a high area per lipid and high lipid diffusion despite their lack of acyl chain double bonds. Imaging and simulation data have shown that membranes formed from asymmetric phospholipids have intermediate lipid order. Unlike membranes formed from symmetric lipids, they do not require sterol to achieve that membrane order and maintain those properties over a wide range of physiological temperatures. We therefore used atomistic MD simulations to test other biophysical membrane properties including lipid packing, membrane thickness and lipid diffusion. The data show that membranes formed from saturated asymmetric phospholipids have a lipid packing density similar to those formed from unsaturated, symmetric lipids such as DOPC (Figure 3A). Similarly, the data show that lipid diffusion in SDPC bilayers is significantly higher than that observed in bilayers formed from saturated symmetric lipids, both in the presence and absence of ergosterol. Diffusion is more akin to that observed for DOPC bilayers, despite the lack of tail unsaturation (Figure 3B). Finally, asymmetric SDPC bilayers, at only 3.5 nm (4.1 nm in the presence of ergosterol), are significantly thinner than bilayers composed of either saturated (4.9 nm and 5.2 nm) or unsaturated (3.9 nm and 4.0 nm) symmetric lipids, both in the absence and presence of ergosterol (Figure 3C). There are two reasons for this thinning: SDPC bilayers show high levels of snorkeling (3.3% and 1.9% of lipids without/with ergosterol) (Figure 3E), but more significantly, SDPC bilayers show very high levels of interdigitation between the two leaflets (9.7% and 17.75% of lipids). This compares to only 5.0% and 6.8% for saturated symmetric lipids and 3.5% and 9.4% for unsaturated symmetric lipids (Figure 3D). Figure 3: Biophysical properties of bilayers composed of asymmetric phospholipids, extracted from atomistic MD simulations. A) Area per lipid, B) Lipid diffusion coefficient, C) Bilayer thickness and D) Percentage of the terminal carbons in each of the sn acyl chains that interdigitate (z < −5 Å), snorkel (z > 5 Å) and are found in the bilayer center (−5 Å < z < 5 Å) for each of the simulated bilayers. Discussion The fission yeast species S. japonicus produces abundant amounts of asymmetric phospholipids with two saturated acyl chains that differ in length by 8 carbon atoms, in place of the more typical symmetric phospholipids present in mammalian cells and even in the closely related sister species and model organism S. pombe. It is possible that these lipids impart novel biophysical characteristics to bilayers, but these have not previously been studied. Here, we combined advanced fluorescence microscopy using environmentally sensitive membrane probes and atomistic MD simulations to investigate the novel biophysical properties of bilayers composed of these unusual lipids for the first time. Our data indicate that while membrane lipid order for asymmetric lipid bilayers is intermediate between that of bilayers composed of either unsaturated or saturated symmetric lipids, asymmetric bilayers are better able to maintain membrane order over a wider range of physiological temperatures, and crucially above 42°C. It was also notable that the presence of ergosterol - the primary sterol found in fungi - had a smaller effect on asymmetric than symmetric lipids and, importantly, that ergosterol was not required to maintain bilayer fluidity.
We therefore conclude that lipid tail asymmetry may be a novel mechanism for modulating membrane fluidity and membrane order in the absence of acyl tail unsaturation and absence of sterols. S. japonicus has previously been shown to survive and proliferate over a wider temperature range (up to 42°C), unlike its closely related sister species S. pombe (Klar, 2013). These data are now consistent with a model by which S. japonicus can maintain membrane lipid order above 42°C due to the presence of these asymmetric phospholipids. Interestingly, S. japonicus grows similarly under anaerobic or aerobic conditions, in contrast to S. pombe or other yeast species such as the budding yeast S. cerevisiae, which exhibit better growth under oxygen-supplemented conditions and grow poorly in its absence (Kaino et al., 2018). It is possible that S. japonicus evolved a mechanism to adapt to low oxygen consumption and respiration deficiency through a range of adaptive metabolic changes, including a shift of lipid synthesis toward a high abundance of asymmetric saturated phospholipids. Synthesis of both sterols and unsaturated fatty acids requires oxygen, and therefore organisms that can tolerate anaerobic conditions are auxotrophic for sterols and unsaturated fatty acids and many have evolved mechanisms of enhanced uptake of an exogenous supply in hypoxic conditions (Reiner et al., 2005). These molecules are key modulators of membrane fluidity and so their absence could partly explain the inability of S. pombe and S. cerevisiae to grow in these conditions. Interestingly, asymmetric lipids now emerge as a novel mechanism allowing S. japonicus to survive in such an environmental niche, conferring the ability to modulate and maintain membrane lipid order even in the absence of acyl tail unsaturation or membrane sterols. Consistent with this idea, S. japonicus membranes are lower in sterol content compared to S. pombe, perhaps indicating the necessity of a distinct mechanism to regulate membrane order under a variety of growth conditions and explaining the enhanced survival of S. japonicus in both high temperature and hypoxic environments. MD simulation data show that the membranes composed of asymmetric lipids are thinner compared to symmetric bilayers. This could also explain the observed significant enrichment of relatively short predicted single transmembrane helices in S. japonicus compared to S. pombe, which does not produce abundant asymmetric lipids. Matching of transmembrane domain structure with bilayer properties has recently been shown (Lorent et al., 2020) and may indicate that proteins with shorter transmembrane helices co-evolved with asymmetric lipids (Makarova and Owen, 2020). The novel biophysical properties imparted to bilayers by unusual asymmetric phospholipids might have a wide range of fascinating consequences and applications. For example, they may represent a more general adaptation to hypoxic environments - in microorganisms, but also in mammalian cells such as in solid cancer environments. Industrial use of liposomes (such as for drug delivery) can require specific membrane fluidity, which can be difficult to maintain due to oxidation of acyl tail double bonds. We hypothesize that the use of asymmetric lipids for these applications might prove beneficial. Finally, medium chain fatty acids (MCFAs), such as the 10 carbon sn-2 tail of SDPC, are an important industrial component of many products from food to cosmetics (Sarria et al., 2017), and are often acquired from environmentally unsustainable sources such as palm oil.
We propose S. japonicus as a potentially important novel industrial microbe which may prove to be a sustainable source of such molecules.
2020-06-04T09:08:06.659Z
2020-06-03T00:00:00.000
{ "year": 2020, "sha1": "a687e66a90b7c7207c0928035541daa702cdc52d", "oa_license": "CCBY", "oa_url": "https://www.biorxiv.org/content/biorxiv/early/2020/06/03/2020.06.03.130450.full.pdf", "oa_status": "GREEN", "pdf_src": "BioRxiv", "pdf_hash": "d705166230aad4584c1cadfe92af615e5fc2c189", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Biology", "Chemistry" ] }
255219070
pes2o/s2orc
v3-fos-license
Decoupling of Dual-Polarized Antenna Arrays Using Non-Resonant Metasurface A non-resonant metasurface (NRMS) concept is reported in this paper to improve the isolation of dual-polarized and wideband large-scale antenna arrays. By properly designing the NRMS, it can exhibit stable negative permeability and positive permittivity along the tangential direction of the NRMS within a wide band, which can be fully employed to suppress the mutual couplings of large-scale antenna arrays. At the same time, the proposed NRMS can also result in positive permittivity and permeability along the normal direction of the NRMS, which guarantees the free propagation of electromagnetic waves from antenna arrays along the normal direction. For demonstration, a 4×4 dual-polarized antenna array loaded with the proposed NRMS is designed to improve the isolations of the antenna array. The simulations demonstrate that the isolations among all ports are over 24 dB from 4.36 to 4.94 GHz, which is experimentally verified by the measured results. Moreover, the radiation patterns of the antenna elements are still maintained after loading the proposed NRMS. Due to the simple structure of the proposed NRMS, it is very promising for wide employment in massive MIMO antenna arrays. In the past few decades, researchers have made much effort to reduce the interferences between array elements, and different methods have been proposed [16][17][18][19][20][21]. The first method is directly suppressing the propagation of surface-wave coupling and space-wave coupling, using, for example, electromagnetic bandgap (EBG) structures [16], defected ground structures (DGS) [17], and resonators [18]. These methods can effectively reduce the coupling of the array based on the frequency responses of these decoupling structures, yet these decoupling structures usually work in a narrow band. These decoupling structures are also complicated and bulky in order to achieve the desired frequency responses for mutual coupling reduction, and they must be inserted between array elements, which requires relatively large space. On the other hand, because the frequency responses of the decoupling structures are polarization-dependent, they are not feasible for application in dual-polarized antenna arrays. As a result, the decoupling methods mentioned above are difficult to extend to massive MIMO arrays. The second solution is to introduce an extra coupling path to cancel the original coupling between elements. In Section 3, a 4×4 dual-polarized antenna array with NRMS is designed, and its corresponding performance, parametric study, and the comparison between our work and the techniques employed in the latest literature are presented as well. Section 4 provides the conclusions. Non-Resonant Metasurface for Decoupling The free-space wave coupling mainly causes the mutual coupling between massive antenna array elements with a half-wavelength distance. Therefore, the metasurface employed above the array is mainly used to reduce the free-space coupling path. This section will investigate the scheme of the proposed isolation improvement method in detail. The decoupling principle will be analyzed based on a massive MIMO antenna array sketch with NRMS. The NRMS unit cell is studied under TE and TM modes when the incidence waves propagate in various directions, to analyze the effects on the extracted permittivity and permeability and to establish the design procedure of the NRMS. Then, the decoupling principle is studied in detail with an example of a dual-element dual-polarized antenna array. Decoupling Scheme of the NRMS The sketch of the isolation enhancement principle with NRMS is shown in Figure 1a.
The space waves radiated from the element P3 can be broadly represented with a1 and a2, where a1 and a2 are responsible for the mutual coupling between adjacent and non-adjacent antenna elements, respectively. When the NRMS is placed above the array with a distance of h, the NRMS can be equivalent to a negative permeability and positive permittivity medium along the tangential (or x-axis) direction of the NRMS when the unit cell of the NRMS is properly designed, where the propagation constant is purely imaginary. As a result, the propagation of the a1 and a2 at the tangential direction of the array will be prohibited. Unlike the resonant-based MS that only works in a narrow band, this paper proposes a non-resonant and symmetric MS for wideband and dual-polarized large-scale antenna arrays. The geometry of the proposed NRMS unit cell is shown in Figure 1b, developed from the periodic cross-shaped ring, and an air cavity is engraved on the center of the NRMS. The metal strips with a 0.5 mm width are printed on the RO 4350B substrate with a permittivity of 3.66, a loss tangent of 0.002, and a thickness of 1.524 mm.
Figure 2a shows the simulation model of the unit cell of the NRMS when the incidence waves impinge on it normally, where the E-field of the incidence waves is in the tangential direction of the NRMS and the H-field of the incident waves is perpendicular to the propagation direction of the incident waves. Wave ports are embedded on the top and bottom surfaces of the unit cell without any air gaps. The simulated S-parameters under TE and TM modes at different incident angles θ off the normal direction are provided in Figure 2b. The S11 is less than −2.5 dB for TE mode, while it is lower than −6 dB for TM mode within 60° from 3 to 5 GHz. The extracted permittivity and permeability under TE and TM modes with different θ are given in Figure 2c,d, respectively. Both the extracted permittivity and permeability of the unit cell under TE and TM incidence waves at different angles of θ are positive, which means that the space waves along the normal direction of the NRMS can propagate through the unit cell freely. Figure 3a illustrates the simulation model of the unit cell when the incidence waves propagate along the tangential direction of the unit cell, where the H-field of the incidence waves is in the vertical direction of the unit cell and the E-field is perpendicular to the propagation direction of the incidence waves. The wave ports are also embedded on the left and right sides of the unit cell. Figure 3b shows the simulated S-parameters under TM mode incidence waves at different incident angles of φ, showing that the S21 is less than −9 dB. The extracted permittivity and permeability under TM incidence waves at different incidence angles can be found in Figure 3c,d. The NRMS unit cell exhibits a negative permeability from 3.0 to 5.1 GHz, while the extracted permittivity is positive within the same band. The S-parameters and the corresponding extracted equivalent parameters demonstrate that the space-wave coupling under TM incidence cannot propagate along the tangential direction of the unit cell. It is also found that the extracted permittivity and permeability of the NRMS under TE mode are all positive, indicating that the NRMS cannot suppress the mutual couplings generated by TE modes. This suggests a design idea: an anisotropic NRMS could be properly configured to offer negative permeability and positive permittivity for both TE and TM modes to further reduce the mutual couplings among antenna elements, which is our future work.
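For reference, the "extracted permittivity and permeability" discussed above are typically obtained from the unit-cell S-parameters with a standard homogenisation retrieval (a Nicolson-Ross-Weir/Smith-style inversion). The sketch below is a generic illustration of that retrieval, not the exact tool used in this work; the slab thickness d and free-space wavenumber k0 are inputs, and the branch ambiguity in the refractive index must be tracked across frequency.

```python
import cmath, math

def retrieve_eps_mu(s11, s21, k0, d, branch=0):
    """Effective index n, impedance z, permittivity and permeability of a
    homogeneous slab of thickness d from complex S-parameters at one frequency.
    Generic NRW/Smith-style retrieval (a sketch, not the paper's tool);
    'branch' resolves the 2*pi*m ambiguity in n for electrically thick slabs."""
    z = cmath.sqrt(((1 + s11) ** 2 - s21 ** 2) / ((1 - s11) ** 2 - s21 ** 2))
    if z.real < 0:                       # passive medium requires Re(z) >= 0
        z = -z
    x = (1 - s11 ** 2 + s21 ** 2) / (2 * s21)    # cos(n * k0 * d)
    n = (cmath.acos(x) + 2 * math.pi * branch) / (k0 * d)
    if n.imag < 0:                       # passive medium requires Im(n) >= 0
        n = -n
    return n / z, n * z                  # (eps_eff, mu_eff)
```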
Owing to its wideband features and its symmetric geometry along the orthogonal directions, the proposed NRMS unit cell is promising for reducing the mutual couplings of wideband and dual-polarized large-scale antenna arrays. Decoupling of A Two-Element Antenna Array with the NRMS In this section, an example of a dual-element antenna array with an NRMS is given to verify the decoupling performance of the proposed NRMS. The design procedure of the isolation improvement for a dual-element dual-polarized antenna array with NRMS and the configuration of the reference array in the design procedure are given in Figure 4a,b, respectively. This array implements stacked microstrip antenna elements to obtain a broad operating bandwidth. The square metal patch is printed on the top surface of the RO 4350B substrate with a permittivity of 3.66 and a loss tangent of 0.002. The center-to-center distance between the array elements is 16.5 mm (0.5λ at the center frequency). Four orthogonal slots are etched on the bottom square patch to reduce the cross-polarized mutual coupling between the two ports of the antenna array element itself. A PP (polypropylene) board with a permittivity of 2.2 is placed above the lower-layer substrate to support the upper-layer patch antenna. Two cavities are then engraved at the corresponding positions of the antenna array elements to provide space for the PCB solder. Port1 and Port2 work in x-polarization, while Port3 and Port4 work in y-polarization. Here, the PP board does not have any other impact on the antenna performance besides supporting the two-layer substrates. The reference antenna array shown in Figure 4a is labeled as Array 1. In the next step, Array 2 is shown, where the NRMS is employed above Array 1. Here, the period T and the size x1 of the metal ring are 8 mm and 2.75 mm, respectively. The thickness of the substrate is 1.524 mm. The specific geometry and dimensions of the reference array in the design procedure are depicted in Figure 4b, including the overall structure of the reference array from the front side view and the structure of each layer from the top side view.
The S-parameters of Array 1 and Array 2 are given in Figure 5. Port 1 and Port 2 work in y- and x-polarization, respectively. The S13 and S42 of the arrays with the proposed NRMS all show low mutual coupling between the ports with the same polarization. Furthermore, Array 2 can provide a wider decoupling bandwidth. S14 and S23 of the array with NRMS present the mutual coupling between the ports with the cross-polarization. The simulated S-parameters also verify the theoretical analysis for the decoupling with the NRMS mentioned in the previous section. The optimized dimensions of the NRMS and the height h are listed in Table 1.
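As a small illustration of how a decoupling bandwidth such as "isolations over 24 dB from 4.36 to 4.94 GHz" can be read off a simulated or measured sweep, the sketch below scans the coupling S-parameters over frequency and reports the widest contiguous band in which every listed coupling term stays below a threshold. The data layout and the -24 dB threshold are assumptions made for the example only.

```python
import numpy as np

def decoupling_band(freq_ghz, couplings_db, threshold_db=-24.0):
    """Return (f_low, f_high) of the widest contiguous band in which every
    coupling term (rows of couplings_db, in dB) stays below threshold_db.
    freq_ghz: 1-D frequency array; couplings_db: array of shape (n_terms, n_freq)."""
    ok = np.all(couplings_db < threshold_db, axis=0)
    best, start = (None, None), None
    for i, flag in enumerate(ok):
        if flag and start is None:
            start = i                              # open a compliant segment
        if (not flag or i == len(ok) - 1) and start is not None:
            end = i if flag else i - 1             # close the segment
            if best[0] is None or freq_ghz[end] - freq_ghz[start] > best[1] - best[0]:
                best = (freq_ghz[start], freq_ghz[end])
            start = None
    return best
```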
The simulated Sparameters also verify the theoretical analysis for the decoupling with the NRMS mentioned in the previous section. The optimized dimensions of the NRMS and the high h are listed in Table 1. The S parameters of Array 1 and Array 2 are given in Figure 5. Port 1 and Port 2 work in y-and x-polarization, respectively. All S13 and S42 of the arrays with the proposed NRMS show low mutual coupling between the ports with the same polarization. Furthermore, Array 2 can provide a wider decoupling bandwidth. S14 and S23 of the array with NRMS present the mutual coupling between the ports with the cross-polarization. The simulated S-parameters also verify the theoretical analysis for the decoupling with the NRMS mentioned in the previous section. The optimized dimensions of the NRMS and the high h are listed in Table 1. From the decoupling study of the two-element antenna array with the proposed NRMS, it can be concluded that the proposed NRMS can be equivalent to a negative permeability and positive medium along the tangential direction of the antenna arrays. Therefore, the NRMS will suppress the propagation of the free-space coupling component along the tangential direction. The best decoupling level can be achieved by carefully designing the sizes of the NRMS element and the height of the NRMS above the antenna array. Antenna Configuration Sometimes, the decoupling method that is effective to the two-element antenna array does not necessarily work for large-scale antenna arrays (e.g., 4 × 4 antenna array, even larger), where much more complicated coupling paths exist in large-scale antenna array. As a result, the proposed NRMS is also utilized to check its feasibility to improve the isolation of a wideband and large-scale antenna array. For brevity, a wideband and dualpolarized 4 × 4 antenna array is investigated here. The antenna element and element dimension used in the stacked micro-strip antenna array in Section 2 are also applied to the 4 × 4 arrays in this section. The proposed largescale antenna array that consists of 16 elements with an inter-element distance d of 33 mm covers the bandwidth from 4.29 to 5.13 GHz. The micro coaxial cables are adopted to excite the antenna elements of the antenna array. Different from the dual-element antenna array in Figure 4, the mutual couplings of the 4 × 4 phased arrays exist between the adjacent and non-adjacent elements in both co-polarization and cross-polarization. The mutual coupling between the array elements in the diagonal direction cannot be neglected either. The proposed NRMS, loaded above the antenna arrays with a height h of 15 mm, is expected to simultaneously reduce the mutual coupling of all the paths in a wideband. A design procedure of the NRMS is shown in Figure 6. Here, the original antenna array is labeled as Array A. The original array with the proposed NRMS is marked as Array B, where some non-metalized holes are drilled in both substrates, and a foam board is made with a thickness of 15 mm to support the NRMS. The simulated model of the 4 × 4 antenna array with NRMS is depicted in Figure 6a, where the NRMS is placed above the original antenna array with a distance. The detailed structure of the original antenna array and the NRMS is depicted as well. The prototype is depicted in Figure 6b, where all foam boards and substrates are compressed into one piece and fixed by screws and bolts passing through the non-metalized holes. 
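The suppression mechanism described above can be made concrete with a one-line dispersion argument: in a homogeneous medium the wavenumber is k = ω√(με), so if the effective permeability is negative while the permittivity stays positive, their product is negative, k becomes imaginary, and a tangential wave decays exponentially instead of propagating. The short sketch below illustrates this numerically; the effective material values are illustrative assumptions, not parameters extracted from the NRMS in this work.

```python
import numpy as np

# Evanescent-suppression argument: with negative effective permeability and
# positive permittivity, k = w*sqrt(mu*eps) is imaginary, so a tangential
# wave decays as exp(-|Im(k)|*d) rather than propagating. The material values
# below are placeholders, not values extracted from the NRMS in this paper.

c0 = 299_792_458.0                 # speed of light in vacuum (m/s)
f = 4.7e9                          # assumed mid-band frequency (Hz)
w = 2 * np.pi * f

eps_r = 1.8 + 0j                   # assumed positive effective permittivity
mu_r = -0.6 + 0j                   # assumed negative effective permeability

k = (w / c0) * np.sqrt(eps_r * mu_r)   # complex wavenumber in the medium
alpha = abs(k.imag)                    # attenuation constant (Np/m)

d = 16.5e-3                        # element spacing of the two-element array (m)
atten_db = 20 * np.log10(np.e) * alpha * d
print(f"|Im(k)| = {alpha:.1f} Np/m -> ~{atten_db:.1f} dB decay over one spacing")
```

Even a modest attenuation constant accumulated over one element spacing is enough to account for several decibels of additional isolation, which is consistent with the trends observed in Figure 5.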
Antenna Configuration

A decoupling method that works for a two-element antenna array does not necessarily work for large-scale antenna arrays (e.g., a 4 × 4 array or larger), where far more complicated coupling paths exist. The proposed NRMS is therefore also examined for its ability to improve the isolation of a wideband, large-scale antenna array; for brevity, a wideband, dual-polarized 4 × 4 antenna array is investigated here. The antenna element and its dimensions from the stacked microstrip antenna array of Section 2 are reused for the 4 × 4 array in this section. The proposed large-scale array consists of 16 elements with an inter-element distance d of 33 mm and covers the band from 4.29 to 5.13 GHz; micro coaxial cables excite the antenna elements. Unlike the dual-element array in Figure 4, the 4 × 4 phased array exhibits mutual coupling between adjacent and non-adjacent elements in both co-polarization and cross-polarization, and the coupling between elements along the diagonal cannot be neglected either. The proposed NRMS, loaded above the array at a height h of 15 mm, is expected to reduce the mutual coupling of all these paths simultaneously over a wide band.

The design procedure of the NRMS is shown in Figure 6. The original antenna array is labeled Array A, and the original array with the proposed NRMS is labeled Array B; non-metalized holes are drilled in both substrates, and a foam board 15 mm thick supports the NRMS. The simulated model of the 4 × 4 antenna array with the NRMS is depicted in Figure 6a, where the NRMS is placed at a distance above the original array, together with the detailed structures of the original array and the NRMS. The prototype is depicted in Figure 6b, where all foam boards and substrates are compressed into one piece and fixed by screws and bolts passing through the non-metalized holes.

Parametric Study

The height of the NRMS is an important parameter determining the decoupling level of the array, so a parametric study of the height h is essential to obtain the optimal decoupling level for the 4 × 4 large-scale antenna array. Figure 7 gives the S-parameters of the array as h varies from 9 to 21 mm in steps of 3 mm; the couplings between element A6 and its neighboring and non-neighboring elements are selected. The results show that the lowest inter-element couplings are obtained when h is 15 mm, with all mutual couplings of the array below −24 dB; for the other values of h, the mutual couplings rise above −24 dB.

The size x1 of the metasurface cell (see Figure 1b) also plays an essential role in determining the decoupling level, because the cross-shaped pattern of the NRMS unit cell fully determines its S-parameters and the corresponding extracted permittivity and permeability. Theoretically, x1 and x3 set the bandwidth over which negative permeability and positive permittivity can be extracted, so a parametric study of x1 and x3 on the decoupling level of the 4 × 4 large-scale antenna array is provided. Figure 8 gives the S-parameters as x1 varies from 1.25 to 2.75 mm in steps of 0.5 mm. The mutual coupling of the array decreases as x1 increases and reaches its lowest level, below −24 dB, when x1 is 2.75 mm. If x1 were increased further, more waves would be reflected from the NRMS, which might enhance the space-wave coupling of the array, and the impedance match between the NRMS and the antenna array would also deteriorate.

Figure 9 shows the S-parameters as x3 varies from 0.6 to 6.6 mm in steps of 2 mm. The mutual coupling of the array reduces slightly as x3 increases, and the lowest coupling level is achieved when the air-cavity size x3 of the NRMS unit cell is 6.6 mm. Enlarging x3 further would destroy the cross-shaped structure of the NRMS units, so the S-parameters of the array for x3 larger than 6.6 mm are not given. The optimized x1 and x3 are therefore 2.75 mm and 6.6 mm, respectively. Comparing the parametric studies of x1 and x3 on the decoupling level of the 4 × 4 large-scale antenna array, it can be concluded that the size of the cross-shaped structure plays the determining role, while the size of the air cavity serves a fine-tuning function.
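The parametric sweeps above amount to a simple optimization loop: simulate the array for each candidate value of h, x1, or x3, score each run by its worst-case inter-element coupling over the band, and keep the best. The sketch below illustrates that bookkeeping on hypothetical sweep data; the array shapes and values are synthetic assumptions, not outputs of the solver used in this work.

```python
import numpy as np

# Hypothetical sweep results: for each candidate height h, a matrix of
# coupling magnitudes |S_ij| in dB with shape (n_freq, n_coupling_paths),
# e.g. exported from a full-wave solver. Values here are synthetic.
rng = np.random.default_rng(0)
heights_mm = [9, 12, 15, 18, 21]
sweep = {h: -20 - rng.uniform(0, 10, size=(201, 6)) for h in heights_mm}

def worst_coupling_db(s_db: np.ndarray) -> float:
    """Worst (largest, i.e. least negative) coupling over band and paths."""
    return float(s_db.max())

# Pick the height whose worst-case coupling is lowest across the whole band.
best_h = min(heights_mm, key=lambda h: worst_coupling_db(sweep[h]))
print(f"best h = {best_h} mm, worst coupling = "
      f"{worst_coupling_db(sweep[best_h]):.1f} dB")
```

The same scoring function can be reused for the x1 and x3 sweeps, which is why a single worst-case figure (e.g., "below −24 dB") is a convenient summary of each parametric study.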
Antenna Array Performance

The 4 × 4 array in Figure 6 exhibits mutual coupling between adjacent and non-adjacent elements in the x, y, and diagonal directions, and the array is symmetric along all three. Owing to the high geometric symmetry of the NRMS and of the dual-polarized antenna array, the coupling between element A6 (see Figure 6a) and the other elements represents all of the coupling types, so the S-parameters associated with A6 are selected and shown here. S11,12 represents the coupling between port 11 and port 12 of element A6 itself; S2,12 refers to the coupling between neighboring elements in the diagonal direction; S4,12 and S10,12 represent the couplings between adjacent elements in the y- and x-directions, respectively; and S16,12 and S28,12 refer to the couplings between non-neighboring elements. The S-parameters of port 11, similar to those of port 12, can likewise represent the mutual coupling of the proposed 4 × 4 large-scale antenna array.

Figure 10 gives the simulated and measured S-parameters of the 4 × 4 antenna array with and without the proposed NRMS. In Figure 10a, the reference array covers 4.05 to 4.91 GHz with an isolation of 16.5 dB, whereas Figure 10b shows that the array with the proposed NRMS covers 4.36 to 4.94 GHz with an isolation of 24 dB. The measured S-parameters of the array with the NRMS in Figure 10c align well with the simulation in Figure 10b. All simulated and measured S-parameters therefore demonstrate that the isolation of the wideband, dual-polarized large-scale antenna array is improved effectively to over 24 dB within 4.36 to 4.94 GHz by employing the proposed NRMS.
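The worst-case isolation figures quoted above summarize many port pairs over the whole band. As a reading aid, the sketch below shows one way such a figure could be computed from a measured Touchstone export using the open-source scikit-rf package; the file name and the choice of reference port are illustrative assumptions, not artifacts of this work.

```python
import skrf as rf

# Load a measured 32-port Touchstone export of the 4 x 4 dual-polarized array
# (file name is hypothetical). Ports are 1-indexed in the text, 0-indexed here.
ntwk = rf.Network("array_with_nrms.s32p")

f_lo, f_hi = 4.36e9, 4.94e9                 # operating band (Hz)
band = (ntwk.f >= f_lo) & (ntwk.f <= f_hi)

ref = 11                                     # 0-based index of "port 12"
s_db = ntwk.s_db[band]                       # |S| in dB, shape (nf, 32, 32)

# Coupling into the reference port from every other port, worst case over band.
others = [p for p in range(ntwk.nports) if p != ref]
worst = max(s_db[:, ref, p].max() for p in others)
print(f"worst coupling into port {ref + 1}: {worst:.1f} dB "
      f"(isolation ~ {-worst:.1f} dB)")
```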
The simulated and measured radiation patterns of the array before and after loading the proposed NRMS at 4.5, 4.7, and 4.9 GHz are shown in Figure 11 to check the impact of the NRMS on the radiation performance of the antenna elements. The measurements were carried out in an anechoic chamber to avoid electromagnetic interference from the environment. As seen in Figure 11, the radiation patterns of the arrays with and without the NRMS are almost unchanged apart from some slight ripples. The cross-shaped metal rings convert part of the space-wave energy from one polarization into the orthogonal polarization, which degrades the cross-polarization level of the antenna elements. In addition, the radiation patterns are asymmetric because the space waves radiated by the active antenna are partially blocked by its adjacent antennas.

Figure 11. The radiation patterns of the array with and without the NRMS at different frequencies.

Finally, the realized gains and total efficiencies of the antenna array with and without the proposed NRMS are presented in Figure 12. The simulated results illustrate that the NRMS improves the realized gain and total efficiency of the array over a wide bandwidth, which is attributed to the suppressed propagation of space-wave coupling along the tangential direction. The measured realized gain and total efficiency of the array with the NRMS are lower than the simulated results because of measurement error; losses in the coaxial cables also contribute to the discrepancies between simulation and measurement. The measured realized gain and total efficiency exceed 6 dBi and 74%, respectively, from 4.36 to 4.94 GHz.

Antenna Performance Comparison

The performance of the proposed 4 × 4 large-scale antenna array with the NRMS is compared with recently reported designs in Table 2. The decoupling network proposed in [24] is effective, but it has the fatal defect of a narrow bandwidth caused by the resonant response of the network. It typically must be deployed on the back side of the antenna array, so the array with the decoupling network requires multiple substrates, making it bulky and complicated; moreover, the decoupling network usually lowers the gain and total efficiency of the array. The ADS in [26] is a novel decoupling concept, but the design process for the patterns of its primary and secondary reflectors is complicated. It also needs a relatively large space to accommodate reflectors that can reflect enough space waves to largely eliminate the unwanted coupling, which is not conducive to antenna miniaturization. Although the gain and total efficiency of the array in [28] are higher than in our work, it has a larger inter-element distance and a bulky structure. The metasurfaces proposed in [29,31] have relatively small inter-element distances, but they can only be used in single-polarized arrays because of their magnetic resonant response and their asymmetry along the cross direction; their bandwidth, gain, and total efficiency are also worse than in our work. The worst-case isolation of the array in [31] is higher than ours, but that design cannot be extended to massive MIMO applications owing to its structural asymmetry along the orthogonal directions. Compared with the structures in the current literature, and while guaranteeing a wider working bandwidth together with higher worst-case isolation, gain, and efficiency, our work offers a simple design and installation process for the decoupling scheme.

Conclusions

A novel NRMS decoupling concept for wideband, dual-polarized large-scale antenna arrays has been proposed, and its decoupling mechanism has been analyzed. To verify the feasibility of the proposed NRMS for decoupling large-scale antenna arrays, a 4 × 4 antenna array loaded with the NRMS has been simulated, fabricated, and measured. The simulated and measured results demonstrate that the isolation of the antenna array is enhanced from 16.5 dB to over 24 dB within the band from 4.36 to 4.94 GHz, with almost no adverse effect on the performance of the overall array. The comparison between the proposed NRMS and other techniques reported in the latest literature shows the great potential of the NRMS for massive MIMO antenna arrays with low mutual coupling in the sub-6 GHz band.
2022-12-29T16:01:19.210Z
2022-12-23T00:00:00.000
{ "year": 2022, "sha1": "6703c794e70e22fba5a4ca9aeeea091ea5f0fe24", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/1424-8220/23/1/152/pdf?version=1671798413", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "8f6415969f754c41c329c916bec1f95245229ebe", "s2fieldsofstudy": [ "Engineering", "Physics" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
198018572
pes2o/s2orc
v3-fos-license
Application of Cartoon Like Effects to Actual Images

Copyright © 2019 by author(s) and International Journal of Trend in Scientific Research and Development Journal. This is an Open Access article distributed under the terms of the Creative Commons Attribution License (CC BY 4.0) (http://creativecommons.org/licenses/by/4.0).

ABSTRACT This paper presents different techniques for converting an image to a cartoon. Using any one of the techniques described below, it is possible to convert all types of captured images to cartoons, such as images of people, mountains, trees, and flora and fauna. Several other techniques for image-to-cartoon conversion also exist, such as using Photoshop, Adobe Illustrator, Windows/Mac tools, Paint.net, and many more.

INTRODUCTION Social media is extensively used these days, and standing out in this online crowd has always been on every user's to-do list. Images, blog posts, artwork, tweets, memes, and opinions are all used to seek the attention of followers or friends, to create influence, or to connect with them on such platforms. We aim to provide one creative solution to these needs: applying cartoon-like effects to images. Users can later share these images on any social media platform or messenger, keep them for themselves, or share them with loved ones. Nowadays almost everyone is registered on social networks. We keep our online status updated every day, share photos and comments, and follow our friends' news. Having a nice profile is a matter of prestige. You can use your own photo as a profile image, create an amusing avatar, or turn your photo into a cartoon. With the pool of web applications available online, converting an image to a cartoon takes a few clicks.

Need of Project Creating a cartoon-like effect is time- and space-consuming, and existing solutions are complex. Some involve installing heavyweight photo-editing software such as Photoshop, while others require considerable effort from the user. Our research shows that a website carrying out the task of applying effects is more suitable, more space-efficient, and demands minimum user effort. For example, Toony Photos is an existing website for this task, but it is difficult to use because the user has to mark points and lines on the image to apply effects, which is not user-friendly, and the options are limited. Hence there is a dire need for a website that is user-friendly and applies effects to images well. The following is our brief survey of existing solutions:

A. Cartoon Effect The majority of photo-editing websites offer a so-called Cartoon Effect. The main advantages of online photo-to-cartoon apps are simplicity and speed. You upload a photo from your computer or from the web, find the Cartoon Effect in the tool set or choose between styles or variants of this effect (as in www.picturetopeople.org or Kuso Cartoon), and press the Apply (or Go) button. The image processing takes from several seconds up to 1-2 minutes. However, like all quick online solutions, these apps have drawbacks. Many online photo-editing tools are rather humdrum because they lack enhancement features; in these apps cartoonization is limited to a one-click operation. Besides, colors may sometimes become blurred, leading to an unsatisfactory result. Apps such as www.converttocartoon.com, Photo.to, AnyMaking and others belong to this group.
At the same time, there are online photo editors with more advanced tools offering a variety of adjustment options. For example, BeFunky helps you modify sketch brightness, contrast, smoothness and other details.

B. Pencil Sketch Another means of cartoonization is making a pencil sketch out of your digital photograph. Whereas the Cartoon Effect turns your images bright and cheerful, a pencil sketch suits you better if you want to render a solid atmosphere and achieve respectability in your online profile. The image-manipulation procedure is the same as described for the Cartoon Effect: upload a photo, select the desired effect, push the Apply button, and you are done; the application does its job instantly by itself. PhotoSketcher, Fotosketcher, Dumpr, the Tuxpi photo editor and many other applications give you the opportunity to convert your snaps into life-like pencil sketches. Besides, you can decorate your profile photo with a cute photo frame and even create a photo with your favorite cartoon character. Amaze your nearest and dearest, friends and coworkers with a cool profile photo, stand out from the crowd, and attract more followers and fans on social networks; the first impression is the strongest.

How to turn a photo into a cartoon online or on Windows/Mac Sharing a photo cartoon on social media can attract more attention when others just post standard photos. We show how to turn a photo into a cartoon on Windows, Mac, and online in this tutorial. With these photo editors and our guides, you can create a cartoon at any time, even if you have never learned anything about painting. The following are the steps to convert a photo to a cartoon on Windows/Mac:

Method 1: Convert Photo into Cartoon Online Many people prefer to use online photo editors. They are compatible with more platforms and allow you to edit photos anytime and anywhere. There are lots of online cartoon photo editors on the internet; you can choose one of them to turn your photo into a cartoon.

Convert photo to cartoon with Paint.net Paint.net is a free photo editor for Windows, based on the .NET Framework. As one of the best alternatives to Microsoft Paint, Paint.net offers more effects and features, including making cartoon photos from personal images. Step 1: Import the photo. Step 2: Add the effect. Step 3: Simulate the cartoon. Step 4: Fill the background. Then save the result to the local disk.

Convert photo to cartoon using Adobe Illustrator Step 1: Get Yourself a Picture and Upload It to Adobe Illustrator. Choose your favorite picture and upload it to Illustrator. Select a large enough picture so that you will be able to draw over it easily. This program is vector-based, which means you will be able to scale the final artwork without losing its form; it will always look great whether you make it larger or smaller. Step 3: Add Colors to Your Picture. Now comes the exciting part where you can add color to the picture. You want to use shades of colors close to the original, unless of course you are looking for a different effect. An easy way to do this is to outline each color segment separately on its own layer, which enables you to color each section and add your gradients. Step 4: Refine with Gradient Colors and Background. Now you want to be a little more detailed with the gradient tool to give your picture depth. You can play around with solid colors and gradients until it looks great. Adobe Illustrator offers many options to create different effects.

After detailed analysis of the above steps, we realize that these techniques are neither feasible nor easy to use for a user with the simple requirement of applying a cartoon effect to their images; all of the above methods are time- and space-consuming. Hence, we aim to build a website that provides an easy-to-use interface for applying cartoon effects.

Proposed System We propose to use neural style transfer, a machine-learning algorithm that involves two images: the input image from the user, and a style image whose style is applied to the input image. Examples of images generated using neural style transfer are shown in the accompanying figures. We propose to create a website with image-upload functionality: the uploaded image is processed by the server using the neural style transfer algorithm, and the resulting image is presented to the user on the website, from where the user can download and share it. Fast neural style transfer is used by apps such as https://deepart.io, Prisma, Artisto, etc. We chose this approach over traditional image filters (e.g., using median and bilateral filters to posterize an image) because fast neural style transfer is a new and challenging technique that uses machine learning and image processing to produce a variety of styled images from different input and style images. The algorithm can be implemented in Python/JavaScript/Lua; we will use Python to implement the backend, while the front end of the website will be in HTML, CSS and JS. Basically, in neural style transfer we have two images, style and content; we need to copy the style from the style image and apply it to the content image. By style we basically mean the patterns, the brushstrokes, etc. We will provide a set of style images that a user can use to apply different kinds of cartoon-like effects to their image.
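For contrast with the neural approach, the filter-based cartoonization mentioned above (median and bilateral filtering plus edge posterization) can be sketched in a few lines of OpenCV. This is an illustrative baseline only, not the system proposed in this paper; the file names are placeholders.

```python
import cv2

# Classic filter-based cartoonizer: smooth colors with a bilateral filter,
# then overlay bold edges found on a median-blurred grayscale copy.
img = cv2.imread("input.jpg")  # placeholder path

# 1) Flatten color regions while keeping edges sharp.
color = cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75)

# 2) Detect edges on a denoised grayscale version.
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 7)
edges = cv2.adaptiveThreshold(
    gray, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY,
    blockSize=9, C=2,
)

# 3) Keep color only where the edge mask is white.
cartoon = cv2.bitwise_and(color, color, mask=edges)
cv2.imwrite("cartoon.jpg", cartoon)
```

Such a filter pipeline produces one fixed look, whereas neural style transfer can imitate many artistic styles, which is the reason given above for preferring the learning-based approach.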
Scope The user will be provided with a set of pretrained style images to choose from. Based on the chosen style and the content image provided by the user, the program generates the resulting image with a cartoon-like effect. The implementation is based on a combination of Gatys' "A Neural Algorithm of Artistic Style", Johnson's "Perceptual Losses for Real-Time Style Transfer and Super-Resolution", and Ulyanov's instance normalization (the three papers mentioned above).

Block diagram, flow diagram, and process diagram: shown as figures in the original paper.

Algorithm Our implementation uses TensorFlow to train a fast style transfer network. We use roughly the same transformation network as described by Justin Johnson et al., except that batch normalization is replaced with Ulyanov's instance normalization. We use a loss function close to the one described by Gatys, using VGG19 instead of VGG16, and typically using "shallower" layers than in Johnson's implementation (e.g., we use relu1_1 rather than relu1_2); empirically, this results in larger-scale style features in the transformations.
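To make the loss described above concrete, here is a minimal sketch of the Gram-matrix style loss computed from VGG19 activations in TensorFlow. Note that Keras exposes VGG19 layers under names like 'block1_conv1' rather than 'relu1_1'; the layer choice and equal weighting here are illustrative assumptions, not the exact configuration used in this paper.

```python
import tensorflow as tf

# Pretrained VGG19 feature extractor; 'block1_conv1' is Keras's name for the
# first conv layer (standing in for the paper's "shallow" relu1_1 choice).
STYLE_LAYERS = ["block1_conv1", "block2_conv1", "block3_conv1"]

vgg = tf.keras.applications.VGG19(include_top=False, weights="imagenet")
vgg.trainable = False
extractor = tf.keras.Model(
    vgg.input, [vgg.get_layer(name).output for name in STYLE_LAYERS]
)

def gram_matrix(feats):
    # feats: (batch, H, W, C) -> (batch, C, C) channel-correlation matrix
    gram = tf.linalg.einsum("bhwc,bhwd->bcd", feats, feats)
    hw = tf.cast(tf.shape(feats)[1] * tf.shape(feats)[2], tf.float32)
    return gram / hw

def style_loss(generated, style_image):
    # Both inputs are float tensors in [0, 255], shape (batch, H, W, 3).
    gen = extractor(tf.keras.applications.vgg19.preprocess_input(generated))
    sty = extractor(tf.keras.applications.vgg19.preprocess_input(style_image))
    return tf.add_n([
        tf.reduce_mean(tf.square(gram_matrix(g) - gram_matrix(s)))
        for g, s in zip(gen, sty)
    ]) / len(STYLE_LAYERS)
```

In a full fast-style-transfer setup, this style term is combined with a content loss from deeper VGG layers and used to train the feed-forward transformation network, so that styling a new image afterwards takes a single forward pass.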
Challenges and Problems Training networks for different style images is time-consuming and requires substantial computing hardware (GPUs). Different content images may produce slightly different styled results, and the precision of the cartoon-like effect depends entirely on the type of content image provided.

CONCLUSION We have shown how an image can be converted to a cartoon and given examples of such conversions. The hardware and software requirements of image-to-cartoon conversion are also presented in this paper, and the systematic workflow, with the corresponding algorithm and formulae, is shown with diagrams. We have also stated the challenges and problems one can face while cartoonifying a captured image, and discussed the need for and scope of cartoonifying the content image.
2019-07-22T22:31:16.190Z
2019-04-30T00:00:00.000
{ "year": 2019, "sha1": "51ef97182c60357374216c34bd42ec39ec0025ac", "oa_license": "CCBY", "oa_url": "https://www.ijtsrd.com/papers/ijtsrd22928.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "97d5fe220fe22db0b619fcf4342032d0391efa27", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
238222220
pes2o/s2orc
v3-fos-license
Reducing Uncertainty in 21st Century Sea-Level Predictions and Beyond

Sea-level rise is one of the most critical issues the world faces under global warming. Around 680 million people (10% of the world's population) live in low-lying coastal regions that are susceptible to flooding through storm surges and from sea-water infiltration of fresh groundwater reserves, degradation of farmland and accelerated coastal erosion, among other impacts. Rising sea level will exacerbate these problems and lead to societal impacts ranging from crop and water-supply failures to breakdowns of city infrastructures. In time, it is likely such changes will necessitate the migration of people, with substantial economic cost and social upheaval. Here, we discuss the physical processes influencing 21st Century sea-level rise, the importance of not using 2100 alone as a benchmark, the changes that are already locked in, especially after 2100, and those that can be avoided. We also consider the need for both adaptation and mitigation measures and early warning systems in this challenging global problem. Finally, we discuss how the scientific prediction of sea-level rise can be improved through international coordination, cooperation and cost sharing.

INTRODUCTION

Sea level has risen by ∼20 cm over the last 150 years or so (Figure 1). The rate of change has been increasing through time, however, and in the early 21st Century it is ∼3.3 mm/yr and growing at a rate of ∼0.8 mm/yr per decade (Nerem et al., 2018). In the last 3 decades, sea level has risen by 10 cm, roughly equalling the amount over the preceding 120 years (WMO, 2021) (Figure 2). When compounded by storm surges, these changes have contributed to a number of coastal flooding incidents this century, both in major cities (e.g., Houston in 2018, New York in 2012, New Orleans in 2005) and across wide regions in developing countries (e.g., Bangladesh in 2004, 2005, 2015, and 2017). Sea levels will continue to rise in coming decades and millennia, and up to 5 m by 2150 cannot be ruled out (IPCC, 2021).
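As a rough sense of scale for these observed trends, one can extrapolate the satellite-era rate (∼3.3 mm/yr) and acceleration (∼0.8 mm/yr per decade, i.e., ∼0.08 mm/yr²) cited above. This is a back-of-envelope illustration only: it assumes the present acceleration simply persists, which, as the rest of this article explains, is far from guaranteed.

```python
# Back-of-envelope extrapolation of the satellite-era trend, assuming the
# present rate and acceleration (Nerem et al., 2018) simply persist.
rate0 = 3.3e-3      # sea-level rise rate, m/yr (early 21st century)
accel = 0.08e-3     # acceleration, m/yr^2 (~0.8 mm/yr per decade)

def rise_after(years: float) -> float:
    """Integrated rise (m) after `years` under constant acceleration."""
    return rate0 * years + 0.5 * accel * years**2

for horizon in (30, 50, 80):  # roughly to 2050, 2070, 2100
    print(f"+{horizon} yr: ~{rise_after(horizon):.2f} m")
```

Under these assumptions the cumulative rise approaches half a metre by around 2100, broadly in line with published trend extrapolations; ice-sheet dynamics, discussed below, are what could push the true figure well beyond such a simple trend.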
Global sea levels change on timescales of decades to millennia in three ways (Frederikse et al., 2020): 1) the net loss of mass from glaciers and ice sheets to the oceans; 2) the expansion of ocean water as it warms; and 3) changes in non-glacial water storage on land, including groundwater aquifers and water held behind dams and on rivers. The oceans hold around 97% of all water on Earth. Of the remaining 3%, around two thirds is held as ice within glaciers and ice sheets (the other third being in lakes, soils, rivers and the atmosphere); ice sheets and glaciers thus hold nearly 70% of the planet's freshwater (Siegert, 2006). By far the greatest ice volume is in Antarctica; if melted, sea level would rise by over 57 m. The next largest ice sheet is in Greenland, containing enough water to increase sea level by about 7 m. The remaining ice caps and glaciers in the world, if they too melted, would raise sea level by only ∼0.32 m (Farinotti et al., 2019). Hence, the greatest uncertainties in future sea-level rise relate to how the massive polar ice sheets will react to global warming, and at what point this response becomes impossible to halt, or irreversible, on human time scales.

Sea level integrates and aggregates a range of climate processes and, because of the long reaction times of ice and ocean processes, lags climate forcing. Consequently, during episodes of global warming, the sea-level rise experienced at a particular date is unlikely to represent the maximum expected from that warming. "Built in" sea-level rise demands we consider adaptation measures to protect our coastal communities, as well as ways to reduce the problem through mitigation. The amount that is built in, and the level to which we can mitigate further increases, are key issues for the 21st Century, including in the near term of the next two decades given persistently high emissions levels. This will determine whether and how we can inhabit today's coastal regions for the rest of this century and beyond (Siegert et al., 2020; Aschwanden et al., 2021).

Sea Level Change Present and Past

Valuable insights into future sea-level rise can be obtained by looking into records of past change over the last few decades, as well as much further back in time during periods of previous global warming. The fact that we know how much sea level has risen in the last 150 years is due to a combination of tide-gauge measurements from the 19th Century and highly precise satellite observations of ocean levels in the 21st Century.

FIGURE 1 | Global sea-level change since the 1850s. Although the early measurements were quite simple and lacked accuracy, they reveal an upwards trend in sea level (about 0.8 mm every year) that is greater than the margin of error. By the mid 1900s measurements became much more accurate (the blue line) and show the rate of sea-level rise to have increased to around 2 mm per year. In the last few decades satellite measurements (the black line) have provided highly accurate records of sea level and show the rate of sea-level rise to now be over 3 mm each year. Taken from Siegert (2017).

Satellite altimetry from the last 30 years shows that all parts of Greenland's ice sheet are now losing mass (Sasgen et al., 2020) (Figure 3). Furthermore, important regions of the Antarctic ice sheet, where the ice rests on a bed >1 km below sea level that deepens upstream, are thinning and losing mass rapidly (Figure 3). Satellite data reveal that the rate of West Antarctic mass loss has increased six-fold since the early 1990s (Shepherd et al., 2018), and it is the "marine" sections of the ice sheet, where the ice is in contact with relatively warm water, that are most vulnerable. In contrast, while the world's glaciers also contribute significantly to sea-level rise, their rate of acceleration is smaller than that of the ice sheets (Ciracì et al., 2020). The ice-sheet contribution to sea-level rise (mostly from Greenland) now exceeds that from the world's glaciers and, in combination, they now exceed sea-level rise from thermal expansion of the ocean. This trend is certain to continue in the coming decades and indicates that future sea-level rise is likely to be dominated by the response of the vast polar ice sheets to warming-related processes. This will occur especially at high levels of emissions and warming, as projections show that valley glacier and ice cap loss will peak at mid-century and subsequently decline as glacier ice disappears almost entirely around 2200 under such scenarios. Further back in time, extensive evidence shows that warming has repeatedly driven large, rapid sea-level rise from ice-sheet loss.
At the peak of the last Ice Age, around 20,000 years ago, ice sheets captured so much water from the oceans that global sea level was ∼130 m lower than now (Lambeck et al., 2014). Ice Ages are instigated by slight variations in Earth's orbit and attitude, which lead to changes in the amount of solar radiation cast onto the Earth, nudging the Earth system to draw atmospheric CO2 down into, and then release it back from, the deep ocean, so amplifying and globalising a major climate change response. Such changes, ice ages paced by Earth's orbit and driven by atmospheric CO2, have been recorded in the ice core record for the last 800,000 years (Figure 4; Lüthi et al., 2008), and in the sedimentary record for millions of years. The last deglaciation was driven by an atmospheric CO2 increase from about 180 to 280 parts per million (ppm), leading to a global-average temperature rise of ∼6°C to approximately pre-industrial temperatures, and causing sea level to increase, on average, by 1.3 m per century for about 10,000 years. While useful for a general understanding of how ice sheets respond to warming, this long-term average masks considerable variability at global and regional scales (Harrison et al., 2019). At the height of the last deglacial warming, during the Bølling-Allerød period around 14,000 years ago, sea levels rose about 18 m during 350 years, most likely driven by the collapse of the Laurentide ice sheet (Deschamps et al., 2012; Dutton et al., 2015). Although the vulnerable sectors of today's ice sheets are not so massive as the Laurentide, this relatively recent event in Earth's history indicates the potential for rapid ice loss, and resulting sea-level rise, over a period of only a few human generations.

Ice sheets, and their interactions with the ocean, were critical to rapid climate change in the last deglaciation and will likely be so this coming Century. They influenced climate by releasing large quantities of water, via direct melting or by iceberg calving into the oceans, so affecting ocean salinity-driven circulation (Broecker, 2010). Ocean circulation is, in turn, important to ice-sheet growth and loss and to climate change for three reasons. First, oceans are capable of transporting heat between latitudes and hemispheres. Second, ocean thermal conditions are one of several factors affecting the growth of sea ice, which is an important feedback on surface reflectivity and the amplification of warming. Third, ocean temperatures are a control on the rate of ice-sheet decay where the ice is in direct contact with the water, as in West Antarctica, and on the maintenance and stability of ice shelves, which serve to slow ice-sheet loss (Figure 3). Hence, through ice-ocean-atmosphere interactions, the gentle rise in global temperatures through the last deglaciation was punctuated regionally by episodes of rapid (on the order of decades or shorter) and extreme (in some areas >±5°C) regional temperature change.

In 2021, the average annual concentration of atmospheric CO2 is ∼415 ppm, a level comparable to a period around 5.3-2.6 million years ago, known as the Pliocene, when global temperatures were around 3°C warmer than today and sea level was at least 20 m higher at times. Whether the Pliocene represents a direct analogue for our future, or whether the high rate of change experienced over the last 150 years will push Earth toward a different state, is a serious issue in climate and Earth system science.

How Much Higher Could Sea Level Get by 2100?
Sea-level rise will continue in the 21st Century, and well beyond it (Siegert et al., 2020). Whether this rise will be contained to <1 m, or be much higher, will depend on whether 1) we can curtail greenhouse gas emissions to "net zero" by mid-Century, thus stalling the atmospheric CO2 concentration, and then bringing it down, so that global warming can be restricted to the 1.5°C target (relative to the pre-industrial level), and 2) the polar ice sheets react more rapidly than observed to date, in ways we know they can and have in the past (IPCC, 2019). At present, sea level is tracking along the most severe prediction associated with unabated emissions (e.g., the IPCC's representative carbon pathway RCP8.5) and consequent warming (Figure 5; Slater et al., 2020).

Some glaciologists use numerical ice-sheet models to understand how fast the polar ice sheets can release mass to the ocean under warming scenarios. While such experiments are useful in understanding processes that may be responsible for mass loss, and much progress has been made in ice-sheet modelling over the last few decades, a number of limitations still preclude accurate 21st Century predictions. Siegert et al. (2020) point to six issues that urgently need to be resolved, as doing so would help reduce uncertainties in predictions: 1) mapping of subglacial topography, as model outputs are only as good as the inputs, and the landscapes beneath the Greenland and Antarctic ice sheets are far less well resolved than the potential resolution of the models; 2) collecting more ocean data at the ice-sheet marine margins to better comprehend the supply of heat to the most vulnerable sections of the ice sheet; 3) acquiring geophysical information from the ice-bed interface, as the material properties of the bed dictate how rapidly the ice can flow to the ocean; 4) improving the coupling between ice-sheet, ocean and atmospheric models, to allow feedbacks and process interplays to be factored into predictions; 5) undertaking laboratory investigations of ice fracturing, as it can lead to sudden changes in ice-sheet conditions, such as through the disintegration of floating ice shelves; and 6) enhancing our knowledge of past changes in order to "train" models.

FIGURE 4 | CO2 levels over the past 800,000 years. Note the consistent pattern of glacial (ice age) CO2 concentrations around 180 ppm, and inter-glacial (warmer/pre-industrial) periods with around 280 ppm. The pre-industrial CO2 value was 277 ppm and today it is around 415 ppm. Adapted from Lüthi et al. (2008).

Depending on which model is chosen and which climate scenario plays out, one can arrive at predictions of both less than (Edwards et al., 2021) and more than (DeConto et al., 2021) 1 m of sea-level rise this Century from all sources. A reasonable characterisation of the problem might be to conclude that sea-level rise of around 1 m by the end of this Century is certainly possible, but higher outcomes cannot be ruled out given uncertainties in the models and the warming that will occur in coming decades. Resolving modelling issues would help the problem greatly and is possible, but would require a significant research effort and funding.
Given the benefit of reduced uncertainty in expected 21st Century sea-level rise to hundreds of millions of livelihoods, and trillions of dollars of capital locked into coastal towns and cities, it seems obvious that this should be a research imperative. Improving models and their inputs alone may not be enough to drive the necessary policies, however. In addition, an "early warning system" is needed to know whether the ice-sheet environment is on a path to a >1 m sea-level rise by 2100. Such a system, comprising satellites, airborne platforms, robotic devices, field investigators and expert knowledge, is already good but has major weaknesses in the ice-sheet regions that are most vulnerable, and so this too requires urgent action. The required technology to do this is largely available, but the scale of deployment is presently inadequate.

Sea-Level Rise Under Mid-Century Temperature Threshold Exceedance

Although studies using aggregated Earth system modelling, such as Edwards et al. (2021), and more dynamical observation-based models, such as DeConto et al. (2021), may appear to diverge in 21st Century sea-level estimates, much of these differences disappear over longer time frames. Unfortunately, however, too few studies look beyond 2100 (a date set by the IPCC over 30 years ago for its first assessment of climate change, AR1) despite the long-term response of ice sheets to warming. Much of the public examination of DeConto et al. (2021) failed to take up its finding of massive irreversible sea-level rise, especially under high emissions (the IPCC's 'RCP8.5'), where rates exceeded 5 cm per year by 2150, resulting in 10 m of sea-level rise from Antarctica alone by 2300. In their model, under current nationally-determined contribution (NDC) policies and measures (agreed in Paris in 2015 and updated by some since), which might lead to warming of 2°C around mid-century and 3°C by 2100, aggressive CO2 removal initiated after 2060 (returning concentrations to pre-industrial levels) could not halt continued ice loss. This is because of the ocean-ice sheet interaction noted above, where the ocean continues to hold heat even after the atmosphere begins to cool, preventing maintenance of the buttressing ice shelves that could restrict ice loss. DeConto et al. (2021) found threshold behaviour around 2°C of warming above pre-industrial levels, after which significant ice loss from Antarctica becomes irreversible. Under continued high emissions consistent with RCP8.5, as is currently occurring with annual rates of increase in atmospheric CO2 of between 2 and 3 ppm (Schwalm et al., 2020), this 2°C threshold might be passed in less than 20 years.

FIGURE 5 | Analysis of ice-sheet mass balance and IPCC sea-level projections. (A) Measured ice loss from Greenland and Antarctica plotted against IPCC 5th Assessment Report predictions. The "AR5 upper" range relates to the "business as usual" RCP8.5 scenario, whereas the "AR5 lower" range corresponds to the RCP2.6 scenario of strong action on carbon dioxide emissions. (B) Components of observed (IMBIE) and predicted (as in (A)) annual sea-level contributions from Greenland and Antarctica between 2007 and 2017, broken into components of ice dynamics and surface mass balance (SMB). Adapted with permission from Slater et al. (2020).
DeConto et al. (2021) also found a leap in rates of sea-level rise this century under RCP8.5 once 3°C of warming was exceeded, a finding not inconsistent with Edwards et al. (2021) when looking at 2100; the main acceleration did not occur until 2120, however. The latest Working Group I IPCC Assessment (AR6) of the physical science took up these potential outcomes, stating in the Summary for Policymakers that, with very high emissions, global mean sea level of up to 2 m by 2100 and 5 m by 2150 "cannot be ruled out due to deep uncertainty in ice sheet processes" (IPCC, 2021). Such long-term outcomes would commit hundreds of millions of people to managed retreat in some of the most populated urban areas of the world. While this may not necessarily occur during the 21st century, many children of today are likely to still be living when the consequences of decisions made by adults today become apparent.

The contrast between the results of DeConto et al. (2021) and Edwards et al. (2021) points to two urgent research needs. First, better understanding committed and irreversible sea-level rise requires that models look beyond 2100, in order to more fully capture the total ice-sheet response, which primarily arises after the arbitrary 2100 benchmark. Stopping at 2100 minimizes awareness of the impact of warming, as well as of future needs for adaptation and, indeed, of whether there are limits to what can be adapted to (Haasnoot et al., 2020). Second, rather than continuing to aggregate modelling studies that often do not differentiate between more and less robust models in terms of capturing ice-sheet behaviour, more dynamics-based studies are needed because, as Bassis et al. (2021) indicate, different assumptions about ice-sheet behaviour may change estimates and rates of sea-level rise to significant degrees. Focusing future research efforts on the development of more realistic, dynamical, observation-based models designed to reach beyond the 2100 benchmark will greatly improve projections of coastal sea-level rise. It would provide invaluable support to nations for planning purposes, as well as potentially stimulating climate ambition by making the consequences of delayed mitigation more accessible to decision makers, including in the finance and insurance sectors.

Internationally-Coordinated Research, with Funding Appropriate to the Risk

While the scientific challenge is urgent yet tractable, it requires two essential elements. The first is a substantial increase in funding to allow the required advances in modelling technology and measurements. The second is international agreement and collaboration, because this is an issue shared by many that requires only one answer. On funding, it is instructive to understand the present level at which field and computer-based research into sea-level rise is supported. Satellite data have proven essential to appreciating the increasing severity of the issue, and several satellites have been launched over the last few decades, on the order of £50-100m per satellite, with consequential funding of around £1-2m per year needed to process the data. While the former, as a research asset, can be supported by one-off investments, the latter, requiring recurrent spending, must come from the annual budget of a national research council. To place the problem into context, the annual budget of the United Kingdom's Natural Environment Research Council (NERC) is around £300m, and that of the British Antarctic Survey is around £50m.
These sums might seem like a lot, but they must support all areas of environmental science, maintain infrastructure and provide logistics. While government funds can be found to support large infrastructure needs, such as the United Kingdom's new £200m polar research vessel RRS Sir David Attenborough, the funding to perform science using the ship must come out of NERC's annual budget, potentially displacing other work if the costs are substantial. Hence, it is difficult to see how an annual investment of, say, £100m for 10 years (£1Bn) into sea-level change research would be possible from the United Kingdom alone, given the present funding arrangements.

While receiving less attention than polar bases and research vessels, the human and computing resources needed to produce updated models that encompass complex ice-sheet dynamics and ocean-ice-sheet-atmosphere interactions should not be underestimated. The use of less sophisticated models, and of those ending at 2100, is not merely an issue of habit and "ease of use" for researchers, but results from limitations on the available post-doctoral researchers, graduate students, computer scientists and mathematicians to develop these more complex models. Use of improved models, especially running multi-century calculations in order to more fully capture the totality of the ice-sheet and sea-level response, is constrained by the availability of the super-computers needed to run and fine-tune experiments, which often stretch into weeks or even months of computer time. Similar to polar research expeditions, a system of larger national and international efforts to produce models that can be used as prognostic tools is needed to replace today's more ad hoc system of grants to individual research teams competing for extremely limited funding.

This is not to say that expensive polar-based scientific projects have not been, and cannot be, supported. The IceCube neutrino array at South Pole cost around $280m in 2010, the bulk of which came from the US National Science Foundation (NSF). However, while we cannot discount the possibility of substantial increases in the budgets of research councils specifically for sea-level research, there may be an alternative approach that can be accommodated by more modest levels of national support: international coordination, collaboration and cost-sharing. One programme that could be used as a template for future collaborative efforts is the International Thwaites Glacier Consortium (ITGC), led by the NSF and NERC but also involving other nations, to better understand the processes driving mass loss in this vulnerable section of the West Antarctic Ice Sheet, the collapse of which may lead to unusually high rates of sea-level rise. There are multiple benefits of such an arrangement: 1) pooling talent; 2) deploying logistics; 3) mobilising facilities; and 4) sharing costs. The outcome is a programme that achieves more science than a national programme, and at a reduced cost per nation. Such a programme also makes good use of facilities and logistics, and forms long-term research relationships that may lead to future collaboration. There are other examples, such as the ANDRILL and Cape Roberts drilling programmes and the Integrated Ocean Drilling Programme (IODP), each having a similar collaborative element at its core. With cost sharing between 10 nations, £10m each per year for 10 years would deliver £1Bn, but this may still seem prohibitive from a research council perspective.
While the £1Bn over 10 years price tag is nominal (although probably in the right ball park), it should be noted that this was precisely the level of funding agreed in 2016 by the Oil and Gas Climate Initiative (OGCI), through which ten major oil and gas companies each provide £10m per year for 10 years. The OGCI was initially formed to support research and innovation on (predominantly) methane leaks and carbon capture and storage, so reducing emissions while reducing inefficiencies and potentially extending the member companies' existence into the zero-carbon transition. Surely we can provide a similar amount for coordinated sea level research, especially given the need for more realistic and responsive coastal planning that ultimately would reduce loss and damage? The answer to the sea-level funding problem is to realise that while research investment is needed, the major beneficiaries of the knowledge generated are likely to be non-scientific: our coastal communities, the governments (local and national) overseeing adaptation plans and the development of new city infrastructure, and those in finance and insurance responsible for the security of investments. Because of this, it is perhaps inappropriate to expect scientific research councils to fund such a programme from their existing resources. As an international problem of the most critical nature, it requires an international solution with a suitable allocation of central government support, such as has been offered to alleviate the global COVID-19 crisis. As international leaders convene in Glasgow in November 2021 to agree emissions reduction targets, they should also consider how international cooperation and support can lead to reduced sea-level rise uncertainty, and form a plan to achieve this within the coming decade. Political leaders and the scientific community would thereby provide a more secure future not only for the latter half of this century, but also for coming generations. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author.
Tobacco control policies in relation to child health and perinatal health outcomes The global epidemic of tobacco use continues to cause a considerable burden of premature death and disease.1 2 Worldwide, over 1 billion people are regular smokers, and the societal costs of smoking have been estimated at over £1 trillion/year.1 3 Tobacco is relevant to child health in various ways. Unborn children may be exposed to tobacco when their mothers smoke, are exposed to secondhand smoke (SHS) or use smokeless tobacco products. Antenatal tobacco smoke exposure can lead to birth defects, preterm birth, intrauterine growth restriction and stillbirth.4–6 After birth, exposure to SHS increases the risks of neonatal and infant death, otitis media with effusion, respiratory tract infections (RTIs), meningococcal disease and asthma attacks.5 6 Furthermore, early-life tobacco smoke exposure increases the likelihood that the child will become a smoker later in life. In this paper, we discuss how tobacco control measures may improve early life health outcomes and highlight key knowledge gaps. Based on international treaties, in particular, the Convention of the Rights of the Child, there is consensus that every child should have the right to grow up free from the adverse health effects of tobacco.7 Children, particularly when young, are entirely dependent on decisions made by adults in relation to tobacco and SHS exposure. Tobacco control policies can help guide these decisions, for example, by informing the public about the dangers of tobacco use and SHS exposure, prohibiting smoking in public places and in cars and reducing parental smoking through decreasing the attractiveness of tobacco products via price increases and marketing restrictions. To facilitate governments in applying what has been set out in the widely endorsed international Framework Convention for Tobacco Control (FCTC) treaty, the WHO has formulated six key (groups of) tobacco control policies that participating countries need to implement, represented by the MPOWER … Tobacco control policies in relation to child health and perinatal health outcomes Jasper V Been, 1,2,3 Aziz Sheikh 3,4 Burden of ToBacco-reLaTed harm The global epidemic of tobacco use continues to cause a considerable burden of premature death and disease. 1 2 Worldwide, over 1 billion people are regular smokers, and the societal costs of smoking have been estimated at over £1 trillion/ year. 1 3 Tobacco is relevant to child health in various ways. Unborn children may be exposed to tobacco when their mothers smoke, are exposed to secondhand smoke (SHS) or use smokeless tobacco products. Antenatal tobacco smoke exposure can lead to birth defects, preterm birth, intrauterine growth restriction and stillbirth. [4][5][6] After birth, exposure to SHS increases the risks of neonatal and infant death, otitis media with effusion, respiratory tract infections (RTIs), meningococcal disease and asthma attacks. 5 6 Furthermore, earlylife tobacco smoke exposure increases the likelihood that the child will become a smoker later in life. In this paper, we discuss how tobacco control measures may improve early life health outcomes and highlight key knowledge gaps. conTroLLing The ToBacco epidemic Based on international treaties, in particular, the Convention of the Rights of the Child, there is consensus that every child should have the right to grow up free from the adverse health effects of tobacco. 
7 Children, particularly when young, are entirely dependent on decisions made by adults in relation to tobacco and SHS exposure. Tobacco control policies can help guide these decisions, for example, by informing the public about the dangers of tobacco use and SHS exposure, prohibiting smoking in public places and in cars and reducing parental smoking through decreasing the attractiveness of tobacco products via price increases and marketing restrictions. To facilitate governments in applying what has been set out in the widely endorsed international Framework Convention for Tobacco Control (FCTC) treaty, the WHO has formulated six key (groups of) tobacco control policies that participating countries need to implement, represented by the MPOWER acronym (box 1). 1 Ample evidence now supports the considerable impact of such policies on reducing tobacco use and improving population health, 8 9 including that of children. impacT of ToBacco conTroL on chiLd heaLTh Smoke-free legislation A recent systematic review identified 35 well-designed studies from North America, Europe and China assessing the impact of smoke-free legislation on child health. 10 Meta-analyses indicated that implementation of smoke-free policies was associated with sizeable reductions in adverse early-life health outcomes, including preterm birth (-3.8%, 95% CI -6.4% to -1.2%), severe asthma attacks (-9.8%, 95% CI -16.6% to -3.0%) and severe lower RTIs (-18.5%, 95% CI -32.8% to -4.2%). 10 These effects are important given that preterm birth and RTIs are the primary contributors to the global burden of adverse child health. 11 In line with evidence from studies in adults, 12 smoke-free laws had the largest benefit when comprehensively applied (ie, covering a wide range of public places). 10 Smoke-free legislation appears to exert its effect through reducing SHS exposure among children and pregnant women, reducing parental smoking prevalence and changing social norms, for example, resulting in many people making their homes smoke free. 5 6 Furthermore, recent evidence from the UK indicates that implementation of smoke-free legislation may help reduce smoking initiation at school age. 13 Taxation and other policies Tobacco taxation is the most effective tool to reduce smoking prevalence, 1 and through doing so it has been shown to benefit perinatal and child health. 10 For example, consistent reductions in infant mortality were demonstrated following tobacco tax/price increases in the USA, Canada and the European Union. [14][15][16] Improvements in child health have also been demonstrated after implementation of other tobacco control policies in the USA 10 : governmental provision of smoking cessation services has been associated with a reduction in severe RTIs, 17 and there was a reduction in low birthweight babies following an increase in the legal age for cigarette purchasing. 18 KnowLedge gapS impact of novel policies Now that MPOWER's impact is well established, [8][9][10] there is a need to formally assess the effectiveness of newer tobacco control efforts in promoting population health, including that of children. This includes legislation to reduce SHS exposure in outdoor public places frequented by children, such as playgrounds, school grounds and parks, and also in enclosed private spaces such as cars. Smoking in cars results in very high exposure to harmful tobacco combustion products, and prohibiting smoking in private-in addition to public-vehicles can effectively reduce children's SHS exposure. 
19 20 Adolescents may in addition benefit from policies to reduce the attractiveness of smoking, such as banning the display of tobacco products in shops and introducing plain packaging. In New Zealand, the display ban was followed by a reduction in smoking experimentation and initiation among youth. 21 Introduction of plain packaging in Australia was followed by a stronger-than-anticipated response Leading article among adolescents in terms of initiation and quitting behaviour. 22 Additional studies in other countries where similar policies were recently implemented, such as the UK, are needed to support these initial observations. Impact assessment is furthermore needed of policies prohibiting flavoured tobacco products; such products are particularly appealing to youth, who wrongfully tend to perceive them as being less harmful than non-flavoured products. 23 Tobacco control in low-and middleincome countries (Lmics) A particularly pressing issue is the lack of studies assessing the child health impact of tobacco control policies in LMICs. 10 Tobacco companies are increasingly targeting LMICs, which are already experiencing the largest burden of tobacco-related premature mortality and morbidity. 2 Studies assessing the effectiveness of tobacco control policies in LMICs are therefore urgently needed. Partnerships between institutions from high-income countries and LMIC partners, supported by initiatives such as the Global Challenges Research Fund, offer the opportunity to address this knowledge gap. Thirdhand smoke (ThS) The potential effects of THS are likely to have been underestimated so far. THS are tobacco smoke constituents that remain on surfaces that have been exposed, such as clothes, hair and skin, and also curtains, walls and floors. Children may experience potential harm from THS via inhalation, dermal absorption or ingestion. Its lingering nature was highlighted in a recent study demonstrating relevant THS exposure in a significant proportion of non-smokers up to 2 months after moving into a house previously owned by smokers. 24 Environmental THS pollution is present in homes of families with young infants 25 and even in a neonatal intensive care environment, including on incubators and parents' hands despite handwashing. 26 Smoking outside is ineffective in preventing THS exposure in the home 25 or in normalising the risk of respiratory symptoms among children. 27 Research is needed to further assess the potential harms associated with childhood THS exposure, as well as to assess the effectiveness of efforts to eliminate exposure. 6 e-cigarettes Electronic nicotine delivery systems (ENDS) are upcoming on the tobacco market and are causing much debate. While they may confer benefit as a harm reduction approach among established smokers, evidence from the USA suggests that ENDS can be a gateway to smoking among youth. 28 ENDS have caused harm via explosion on a number of occasions, and unintentional ingestion of nicotine refill liquids can cause serious harm among toddlers. Although ENDS avoid inhalation of harmful combustion products, research has raised concerns over their impact on health. 28 More research is needed, including on the potential health impact of secondhand aerosol exposure and of using ENDS during pregnancy. Meanwhile, it is prudent to regulate ENDS in similar ways as combustibles and restrict their promotion to youth. Comprehensive reports on ENDS are available for background reading. 
28 29 impLicaTionS for poLicy and pracTice Translating evidence into policy Considering the large evidence base supporting the effectiveness of tobacco control in reducing smoking prevalence, SHS exposure and related harms and the ratification of the FCTC by 181 countries, it is of significant concern that MPOWER policies are only fully implemented by a minority of countries. 1 To accelerate the global adoption of effective tobacco control measures, it is essential that research findings are successfully communicated to policymakers. Researchers in the field should be aware of their responsibility in this regard and seek opportunities to engage with policymakers and the media so as to help shape evidence-based policy in the future. 30 Additionally, advocacy by health professionals has helped accelerate implementation of smoke-free public places as well as smoke-free cars in the UK, 31 providing an example for other countries where such policies are currently lacking. Tackling tobacco industry involvement It is important to be aware of the tobacco industry's role in frustrating the policy process towards effective tobacco control as well as their tactics to reduce the effectiveness of such policies. As an example, evidence from the UK indicates that the industry responds to tobacco tax increases by lowering the price of the cheapest cigarette brands, allowing smokers to switch to budget cigarettes and through doing so sustain their addiction. 32 A recent study across 23 European Union countries found that this approach is associated with increased infant mortality, 16 highlighting the public health relevance of recognising and addressing such tactics. Tobacco endgame In recent years, policy development has progressed from thinking about how to control the impact of the tobacco epidemic towards pursuing a tobacco-free society within a specific timeframe. 33 An excellent overview of promising policies that fit this 'tobacco endgame' concept was recently provided by McDaniel and colleagues. 33 Examples include reducing the nicotine content of tobacco products to make them less addictive and prohibiting features designed to mask the harshness of tobacco smoke inhalation, such as additives and filters. Cigarette use may be regulated via issuing smokers' licences with age restrictions and purchase limits or via provision of cigarettes on prescription, provided only after prior cessation attempts have failed. Other approaches include restricting the number of outlets or licences to sell tobacco products and introducing quota on cigarette production and import. Additional and potentially more forward-thinking policies are likely to be developed in the near future, and there is a need to assess their potential to benefit population health, including that of children. concLuSion Children benefit substantially from policies to reduce smoking and SHS exposure. Governments should accelerate the global uptake of such policies while the effectiveness of novel approaches is scientifically assessed so that protection from tobacco-related harm is further optimised for some of the most vulnerable members of society. contributors JVB drafted the manuscript; AS supervised the writing. funding JVB is funded by personal fellowships from the Netherlands Lung Foundation (4.2.14.063JO) and the Erasmus MC. AS is supported by the Farr Institute and the Asthma UK Centre for Applied Research.
A fully automated pipeline for mining abdominal aortic aneurysm using image segmentation Imaging software have become critical tools in the diagnosis and the treatment of abdominal aortic aneurysms (AAA). The aim of this study was to develop a fully automated software system to enable a fast and robust detection of the vascular system and the AAA. The software was designed from a dataset of injected CT-scans images obtained from 40 patients with AAA. Pre-processing steps were performed to reduce the noise of the images using image filters. The border propagation based method was used to localize the aortic lumen. An online error detection was implemented to correct errors due to the propagation in anatomic structures with similar pixel value located close to the aorta. A morphological snake was used to segment 2D or 3D regions. The software allowed an automatic detection of the aortic lumen and the AAA characteristics including the presence of thrombus and calcifications. 2D and 3D reconstructions visualization were available to ease evaluation of both algorithm precision and AAA properties. By enabling a fast and automated detailed analysis of the anatomic characteristics of the AAA, this software could be useful in clinical practice and research and be applied in a large dataset of patients. Abdominal aortic aneurysm (AAA) is associated with high rates of morbidity and mortality 1 . The only curative treatment available relies on surgical approaches including open and endovascular surgery 2 . Recent advances in medical imaging technology have led to the development of medical image analysis software allowing to create reconstructions of the AAA. Such software is useful to measure aortic and vessels lengths and diameters in order to optimize the sizing of endografts 2 . Nevertheless, commercialized software currently available for CT-scan images are semi-automatic and require human intervention to initiate aorta localization and measurements of vessels. In addition, they are not constructed to provide automatic quantitative analysis of AAA anatomic characteristics such as vessel calcifications or the presence of intra-luminal thrombus. Automatic segmentation of the AAA is challenging due to the heterogeneity of the AAA morphology and the low discrimination between the AAA and the surrounding tissues 3 . Automatic software would be of interest for clinical practice to facilitate the sizing for surgeons and standardize the procedure. In addition, it would be useful for clinical research to provide a fast and detailed analysis of the anatomic characteristics of the AAA. In this paper, we present our pipeline and describe the key algorithms that we implemented to develop a fully automated methodology, independent of human manual appreciation, in a training dataset of CT-scan images of AAA. This software allows an automatic detection of the main characteristics of AAA: the aneurysmal localization in the aorta, the distance to the renal and iliac arteries, the presence of calcifications and intraluminal thrombus. Materials and Methods Dataset and method. CT-scans images were obtained from 40 patients with an infrarenal AAA who had multidetector CT scanners with arterial-phase intravenous injection of contrast liquid. The protocol was approved by the University Hospital of Nice Review Board. All methods were carried out in accordance with the French Regulatory Health Authorities and informed consent was obtained from all subjects. 
Images were given in Digital Imaging and Communications in Medicine (DICOM) format, providing a matrix of size 512 × 512 for image preprocessing. The preprocessing is a mandatory step before image segmentation. Standard image filtering and denoising algorithms for medical imaging were used: Gaussian, median and bilateral filters and the fast non-local means denoiser from the OpenCV library 4 . Grayscale pixel intensities G are computed from the Hounsfield values H using the standard linear windowing formula G = 255 (H − (L − W/2)) / W, clipped to the range [0, 255], where L is the window level and W is the window width. In this study, a window level of 40 and a window width of 400 are used to obtain a contrast adapted to lumen segmentation. For the thrombus segmentation, the window width may be reduced to 300 or 200 to increase the contrast. Lumen segmentation. The methodology for lumen segmentation was designed to allow an automatic detection and localization of the aorta. The method was built to discriminate the contrasted arterial system from surrounding tissues whose gray-scale values are close to that of the contrast agent, such as the spine and the vertebrae. The pseudocodes of the lumen segmentation algorithms are given in Fig. 2. Lumen segmentation with the boundary propagation method. The method localizes a section of the aorta and then segments the arterial system by propagation from the aorta position. The algorithm detects the contours potentially belonging to the aorta lumen in each slice by thresholding the slice. A border following algorithm 5 implemented in the OpenCV library is used to detect the candidates for the arterial system lumen in the binary slice produced. The candidate set in each slice is pruned based on a priori shape properties of the aorta: size, aspect and solidity. The size of a contour is given by its area and contour length, as these properties of the aorta have been studied in depth in medical research 6 ( Fig. 3A-1). The solidity of a contour represents its smoothness, which is expected to be high for blood vessels compared to bone tissues ( Fig. 3A-2). The aspect of a contour provides an indication of its roundness, which is expected to be high for the aorta in horizontal slices of the body 7 ( Fig. 3A-3). The aspect A is computed from the width w and the height h of the minimum-area rotated rectangle enclosing the contour (a ratio close to 1 indicating a round section), and the solidity S is the ratio a/a_hull of the area of the contour to the area of its convex hull. Candidates that exhibit the best oriented-ellipse fitting properties, with high aspect and solidity values, are kept 7 . Contours are pulled together as they intersect one another following the vertical direction of the body. The aorta is localized as the largest set of contours pulled together. From this section of the aorta, the arterial system is detected by propagation through intersections of contours within a larger set of contours. This larger set of contours is obtained with the same border following algorithm as for the initial aorta localization, but with less severe shape constraints. A result is shown on a stack of 4 frames in Fig. 3B. Spine and lumen regions may have similar pixel values. Moreover, these two structures are close and may intersect in some frames. This can lead to a loss of accuracy in the automatic lumen segmentation, with a propagation of the arterial system detection into the spine region. To increase the robustness of the method, the spine region is segmented in a first step and removed from the search area for the lumen segmentation.
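A minimal sketch of the windowing and candidate-pruning steps just described is given below, assuming Python with OpenCV and NumPy (and the OpenCV 4 return convention for findContours). The windowing is written as the usual linear mapping, the median blur stands in for the preprocessing filter retained in the Discussion, and the numeric pruning thresholds (MIN_AREA, MAX_AREA, MIN_ASPECT, MIN_SOLIDITY) as well as the min/max aspect convention are illustrative placeholders rather than the values used in the study; only the threshold of 200 is taken from the Results.

```python
import cv2
import numpy as np

# Illustrative pruning thresholds (the paper does not report exact values).
MIN_AREA, MAX_AREA = 200.0, 5000.0      # plausible lumen cross-section area range (pixels^2)
MIN_ASPECT, MIN_SOLIDITY = 0.6, 0.9     # roundness and smoothness cut-offs

def hounsfield_to_gray(hu, level=40.0, width=400.0):
    """Linear windowing of Hounsfield units to 8-bit gray levels."""
    low = level - width / 2.0
    gray = (hu - low) / width * 255.0
    return np.clip(gray, 0, 255).astype(np.uint8)

def lumen_candidates(gray_slice, threshold=200):
    """Threshold one slice, follow the borders and keep the contours whose
    size, aspect and solidity are compatible with an aortic lumen section."""
    denoised = cv2.medianBlur(gray_slice, 3)        # median filter as preprocessing step
    _, binary = cv2.threshold(denoised, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    kept = []
    for c in contours:
        area = cv2.contourArea(c)
        if not (MIN_AREA <= area <= MAX_AREA):
            continue
        (_, _), (w, h), _ = cv2.minAreaRect(c)      # minimum-area rotated rectangle
        aspect = min(w, h) / max(w, h) if max(w, h) > 0 else 0.0
        hull_area = cv2.contourArea(cv2.convexHull(c))
        solidity = area / hull_area if hull_area > 0 else 0.0
        if aspect >= MIN_ASPECT and solidity >= MIN_SOLIDITY:
            kept.append(c)
    return kept
```

Stacking the contours kept in each slice and grouping those that overlap across consecutive slices then yields the candidate aortic segments that seed the propagation, as described above.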
The border propagation method presented above is used to segment the spine, which is then blacked out for the lumen segmentation. Lumen segmentation with the active contour method. In this paper, several initial lumen surface candidates are computed in each image using a threshold-based contour detection. Each aorta volume candidate is computed by propagating an initial contour in the z-direction. Connected components are then extracted and compared to automatically detect the starting lumen region. The length in the z-direction is selected as a relevant metric for the comparison. This process is automatic and thus does not require the end-user to manually select the initial lumen region. Active contour methods are used to segment 2D and 3D regions. In our approach, we consider the active contours without edges (ACWE) algorithm since it does not rely on well-defined borders. We minimize the ACWE functional F(c1, c2, S) of a surface S, which penalizes the deviation of the image intensity from its mean value c1 inside S and from its mean value c2 outside S, weighted by inner and outer weight parameters. Figure 3C illustrates the evolution of an initial contour, in red, over 40 iterations until convergence is achieved. The convergence is based on the ratio between the volume increase at each iteration and the total volume of the region. We use the stopping criterion |d| / V < tol, where d is the variation of the volume between two successive iterations, V is the volume of the region and tol is a user-defined tolerance set to 10^-6. In this study, a morphological smoothing based on binary dilate and erode operators is applied at each iteration. In the two-dimensional case, the binary erosion E and dilation D are computed as the minimum and the maximum, respectively, of the binary pixel values x(i, j) over a structuring pattern P given in 8. In practice, to balance the contribution of both operators, we alternate through the iterations two smoothing operators S1 and S2 built by composing the dilation and erosion operators. The dependency of the algorithm results on its parameter values was evaluated through the Dice Similarity Coefficient (DSC) of the segmented volumes 9, DSC(R1, R2) = 2 |R1 ∩ R2| / (|R1| + |R2|), where R1 and R2 are the segmented regions. When there is no overlap, the outcome of the DSC is 0, and for complete overlap the outcome is 1. Thrombus segmentation. In our segmentation pipeline, the computation of the lumen region is an important step for the thrombus segmentation. The lumen region is set as the initial level set for segmenting the thrombus region. A fully automated segmentation of the thrombus represents a technical challenge. Semi-automated approaches require that the end-user pick a region of the thrombus to calibrate the intensity of the pixels in this region. In this paper, the thrombus intensity is automatically computed using the initial thrombus segmentation, such that an end-user intervention is not required. As for the segmentation of the lumen, a morphological snake ACWE is applied to grow the thrombus region. However, a different LUT filter is applied to increase the contrast of the CT images. Since the contrast is increased, a Gaussian filter is also applied to reduce the noise of the image. Practically, less than twenty-five iterations are required to segment the thrombus region. The assumption that the thrombus region has a globally ellipsoidal shape in 2D is of major importance for the fully-automated procedure. A morphological smoothing based on dilate and erode operators is applied at each iteration but with a higher strength than for the lumen case. Instead of using 3 × 3 patterns 8 , 5 × 5 and 7 × 7 patterns are applied.
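A possible sketch of this ACWE step, together with the Dice coefficient and the relative-volume stopping criterion defined above, is shown below. It relies on the morphological Chan-Vese implementation of scikit-image rather than the authors' own code, the helper names are ours, and the iteration count is passed positionally because its keyword name differs between library versions.

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def dice(r1, r2):
    """Dice Similarity Coefficient: 0 for no overlap, 1 for complete overlap."""
    r1, r2 = r1.astype(bool), r2.astype(bool)
    denom = r1.sum() + r2.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(r1, r2).sum() / denom

def segment_acwe(volume, init_mask, n_iter=50, smoothing=1, tol=1e-6):
    """Morphological ACWE grown from an initial mask; the per-iteration volumes
    are recorded so the relative change |d| / V can be compared with tol."""
    volumes = []
    seg = morphological_chan_vese(
        volume.astype(float), n_iter,                # iteration count passed positionally
        init_level_set=init_mask.astype(np.int8),
        smoothing=smoothing,                         # smoothing passes per iteration
        iter_callback=lambda ls: volumes.append(int(np.count_nonzero(ls))),
    )
    rel_change = [abs(b - a) / max(b, 1) for a, b in zip(volumes, volumes[1:])]
    converged = bool(rel_change) and rel_change[-1] < tol
    return seg.astype(bool), converged

# dice(automatic_mask, manual_mask) quantifies overlap with a reference segmentation.
# Lumen: ~50 iterations from the detected aortic section; thrombus: ~25 iterations
# seeded with the lumen mask and stronger smoothing, e.g.
# thrombus_mask, ok = segment_acwe(ct_volume, lumen_mask, n_iter=25, smoothing=3)
```

The smoothing argument here (the number of smoothing passes per iteration) stands in for the smoothing strength discussed above; increasing it, as in the commented thrombus call, favours the globally elliptic shape expected of the thrombus at the price of boundary detail.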
This results in a smoother contour in each slice of the CT volume. Segmentation of calcifications. The algorithm developed was initially created based on injected CT-scan images but must work in non-contrasted images. It relies on a previous detection of the wall of the aorta as presented. The LUT and level sets commonly used to visualize arterial system in injected CT-scans unable the proper visualization of calcifications but calcifications have significantly higher absorption of rays than contrasted blood, therefore a higher Hounsfield unit (Fig. 4A). Each slice is analyzed within a segmentation mask, obtained from the contour of the aorta wall. A morphological dilate operator is applied to the aorta mask, and a morphological erode of the same size, which are then subtracted to the original mask. The working region is therefore reduced to a ring-shaped region to search for calcifications. www.nature.com/scientificreports www.nature.com/scientificreports/ Finally, a threshold is applied in the ring-shaped working region. All the volume elements, or voxels, with Hounsfield units above the threshold are considered as calcifications. The threshold is computed from the mean of the inner lumen pixel values multiplied by a constant factor superior to one. An illustration of these steps is provided in Fig. 4B. Visualization. 2D visualization is performed using a pixel matrix with a gray-scale color map. It allows to render slices of CTA scans in z-direction one-by-one. 2D visualization allows to evaluate the contours of the segmented regions and to compare with manually drawn contours by experts. Metrics can then be applied to benchmark the proposed methods quantitatively. We also use 3D visualization to render, in real time, 3D complex surfaces such as the arterial system. Numerical results can be quickly and qualitatively examined by a medical end user. Validation methodology. The method was compared with respect to evaluation of ground truth provided by a human expert. A quantitative validation of our algorithm was performed using CT-scan image data sets acquired from 40 patients with infrarenal AAA. For each patient, a manual segmentation of the aortic lumen and the thrombus was independently performed by an expert vascular surgeon. The manual tracing was performed using a contour drawing tool of the image processing software Image J (version 1.52). The segmentation was sequentially performed by analyzing slices of the infrarenal aorta on a mean length of 152 +/− 54 slices per patient. The segmentation results were evaluated using the following distinct metrics: where h(A, B) is called the directed Hausdorff distance and given by the formula: Results Lumen segmentation with the boundary propagation method. The dependency to the value of the threshold converting the grayscale image in a level set is low in case of "ideal" scans: when the aorta is surrounded by soft tissues, without stent, well contrasted (see series 1 and 2 in Fig. 5A). In these cases, increasing the threshold decreases slowly the depth of the arterial system detected. However, if the threshold value is high, detection may be "stopped" by stent grafts reflection, too small arteries or improperly applied contrast liquid (false negative). Segmentation results were evaluated using the Dice Similarity Coefficient (DSC). The DSC of series 3 (Fig. 
5A) illustrates cases where a stent graft or a low-contrast product "stops" the detection of the lumen if the threshold is high. Conversely, the algorithm can allow propagation into other tissues touching the aorta (false positives) if the threshold value is low (see series 4 in Fig. 5A). A medium value of 200 enables robust detection of a large part of the arterial system with few false positives. In the 40 test cases, 6 cases have 1 to 3 slices with false positives and 6 cases include false negative parts (an aorta portion or a renal artery departure undetected). Two of the 6 false negative cases are due to metallic interference of stent grafts. Two sets of physiological parameter values are used: restrictive values to exclude false positives while locating the aorta, then permissive values to propagate in the entire vascular system. The algorithm result depends more on the permissive parameters than on the restrictive set (see Fig. 5B,C). However, the dependency on those parameter values remains very low compared to the threshold. Lumen segmentation with the active contour method. A morphological snake ACWE is applied to segment 2D and 3D regions. Figure 6A illustrates the intersection between spine and lumen regions. Since both regions have similar pixel intensities, a threshold-based contour extraction or an ACWE method is inaccurate in this case. To segment the lumen region, a smoothing, i.e. a balloon force, is applied at each iteration of the morphological snake algorithm. Practically, less than fifty iterations are required to segment the lumen region containing at least the aorta, the two renal arteries and the two iliac arteries. Figure 6B shows the 3D segmentation of the lumen region after 50 iterations of morphological ACWE. The abdominal aorta, celiac trunk artery, superior mesenteric artery, iliac arteries and renal arteries are segmented. Internal and external iliac arteries are also visible. One hundred and fifty iterations were sufficient to properly discriminate the aorta and the arteries. Figure 6C shows the DSC for different weight ratios and numbers of smoothing operations. The weight ratio is defined as the quotient of the inner weight parameter to the outer weight. The reference is the area segmentation without smoothing and with a weight ratio set to 1. Up to three smoothing operations are applied at each iteration of the ACWE algorithm. An increase of the weight ratio leads to a decrease of the DSC, up to 6% of the reference value. In this example, the DSC is slightly impacted by the number of smoothing operations. For a reasonably selective weight ratio of 5, the lumen segmentation error, defined by the DSC between the segmented region and the reference region, is bounded by 5%. Computational efficiency. The sequential version of the morphological snake method requires a significant computational time, which can be an issue when dealing with a large number of patient CTA scans in clinical research, as well as in the case of decision making for a surgical emergency. However, a parallel version of the active contour technique has been developed in this work. The resulting parallel active contour requires less than one minute to perform a segmentation using several cores on a standard laptop. The boundary propagation method is faster than the active contour since it is a threshold-based technique. Thrombus segmentation.
Several patterns for the smoothing operator have been assessed when segmenting the thrombus: 3 × 3, 5 × 5 and 7 × 7 pattern. Qualitatively, they respectively correspond to weak, medium and strong smoothing. Figure 7A represents the results of the thrombus segmentation for three different CT-scan slices. Each row corresponds to a single slice. Three different patterns, or strength, are used and the number of www.nature.com/scientificreports www.nature.com/scientificreports/ smooth operators applied at each iteration of the ACWE algorithm varies from one to three. It highlights that, for this segmentation, a weak smoothing is not sufficient to properly segment the thrombus region since the morphological snake goes into the surrounding tissues. Three medium strength smoothing operations at each iteration or a single strong smoothing operation are sufficient to extrapolate the elliptic shape of the thrombus wall, without well-defined edges. The combination of all these techniques have shown a high robustness and accuracy in the thrombus segmentation for 2D and 3D cases. The region grows and stops even with partially defined borders. This case occurs when neighboring organs or arteries are close to the thrombus. Figure 7B shows the result of the lumen and thrombus segmentations in a 3D case. The thrombus region is located between the two iliac arteries and the two renal www.nature.com/scientificreports www.nature.com/scientificreports/ arteries. For the segmentation of the thrombus region, the lumen area is set as the initial levelset. A 7 × 7 pattern for the discrete smoothing operator is applied in order to segment quite smooth regions. Segmentation of calcifications. Calcifications are detected around arterial system wall, approximated by either the lumen or the thrombus contour. On the dataset, the use of the thrombus has led to an increase of the average volume of calcifications by 7.8% +− 0.11, as seen in Fig. 8A. Representative 3D images of calcifications detected along the aorta are shown in Fig. 8B. Validation of the methodology. To evaluate the results of the segmentation obtained with our automatic pipeline, we performed a quantitative comparison with the results obtained from manual segmentation by an expert vascular surgeon on a dataset of 40 CT-scan from patients with infrarenal AAA. Representative images of the manual and the automatic segmentation of the aortic lumen and the intraluminal thrombus are presented in Fig. 9. For the aortic lumen, the analysis was performed on 620 slices and demonstrated an excellent correlation of the surfaces measured with the manual and the automatic segmentation methods, with a Pearson's coefficient correlation of 0.99, P < 0.0001 (Fig. 10). The results on the metrics used to evaluate the segmentation errors are presented in Table 1. The mean volume similarity was 0.96 +/− 0.04; the mean sensitivity 0.90 +/− 0.06; the mean specificity 0.9997 +/− 0.0004; the mean Jaccard index 0.87 +/− 0.07; the mean Dice Similarity Coefficient 0.93 +/− 0.04 and the mean Hausdorff distance 1.78 +/− 0.38. For the segmentation of the thrombus, 525 slices were analyzed and the Pearson's coefficient correlation between the 2 methods was 0.90, P < 0.0001 (Fig. 10). The mean volume similarity was 0.91 +/− 0.11; the mean Jaccard index 0.80 +/− 0.15; the mean Dice Similarity Coefficient 0.88 +/− 0.12 and the mean Hausdorff distance 2.13 +/− 0.61 (Table 2). For the automatic method, the segmentation time varied from 5 seconds to 1 minute per patient. 
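Before turning to the comparison with manual segmentation times, the calcification step quantified above can be illustrated with a short sketch. It assumes SciPy's ndimage morphology; the ring width and the factor of 1.5 are placeholders (the paper only states that the factor is greater than one), and the wall mask may be taken from either the lumen or the thrombus contour, which is what produces the difference in detected calcification volume reported above.

```python
import numpy as np
from scipy import ndimage as ndi

def detect_calcifications(hu_volume, wall_mask, lumen_mask, ring_width=3, factor=1.5):
    """Ring-based calcification detection: search only in a band around the
    segmented aortic wall and keep voxels whose Hounsfield value exceeds
    factor * mean(lumen HU), with factor > 1."""
    structure = ndi.generate_binary_structure(3, 1)
    dilated = ndi.binary_dilation(wall_mask, structure, iterations=ring_width)
    eroded = ndi.binary_erosion(wall_mask, structure, iterations=ring_width)
    ring = dilated & ~eroded                          # ring-shaped working region
    threshold = factor * hu_volume[lumen_mask.astype(bool)].mean()
    calcifications = ring & (hu_volume > threshold)
    return calcifications, int(calcifications.sum())  # mask and voxel count (volume proxy)

# calc_mask, n_voxels = detect_calcifications(hu, thrombus_mask, lumen_mask)
```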
For the human manual method, the segmentation time ranged from 25 minutes to 40 minutes per patient. Discussion The precision and robustness of the automatic detection method result from a complex combination of several techniques. Image processing using filtering and denoising is essential for an accurate segmentation. Computer tomography machines generally produce an unwanted noise and decrease the image quality. The noise changes the pixel www.nature.com/scientificreports www.nature.com/scientificreports/ values and results in a heterogeneous image. Several authors aimed to compare different noise removal algorithms in medical imaging [11][12][13] . The authors used several measures to assess the quality of the denoising. They compared different filters, such as Wiener and median filters, and denoisers, such as wavelet approaches or non-local means. The latter approach requires much time compared to others. For this reason, other versions of non-local means have emerged, with the aim at being faster while keeping a satisfying accuracy. Noise removal for CT-scan imaging is still an active field of research 14,15 . Some investigators compared several filters and showed that a combination of median filter followed by Wiener filter is more effective to remove different noises present in CT images. In this study, the median filter exhibited the best compromise between image quality and computational efficiency. Vascular segmentation is a challenging task and investigators proposed a large review of 3D vessel lumen segmentation techniques with models, features and extraction schemes [16][17][18][19] . Some authors compared several extraction schemes such as region growing approaches 20 or active contours 21,22 . To cope with most complex applications, advanced techniques often rely on a combination of several models and features. Boundary propagation and active contour methods are proposed in this paper to segment the lumen. Both methods exhibit a good accuracy for the aortic wall detection, from the celiac trunk artery, to the two iliac arteries. The main difference between 2D and 3D method results in the way the detection of the arterial system stops. Depending of the patient anatomy and the way the contrast liquid is applied, one method or the other will perform a more complete detection of the arteries. However, the accuracy of the process in regions with a metallic stent graft may be reduced due to the interferences with the CT machine. In the boundary propagation method, the lumen detection process may be artificially stopped in the graft zone due to the presence of high intensity pixels coming from foreign body interferences and decreasing the quality of the aortic wall. In this study, the combined use of boundary propagation method and active contour model demonstrated a high robustness, allowing to cope with the complex problem of automatically segmenting the lumen region from the spine for all patients. Active contours methods are widely used to segment vascular regions. Two main approaches exist: the geodesic active contours (GAC) 23 and the active contours without edges (ACWE) 24 . The ACWE does not rely on a preprocessed image, the method can handle not well defined borders. In both methods, a time-dependent partial differential equation (PDE) is solved, which is a computationally expensive operation. Marquez-Neila et al. have shown that some morphological operators, such as dilation and erosion, can be expressed as PDEs 8 . 
Moreover, they approximate the numerical solution of region growing PDEs by the successive composition of morphological operators. The morphological approach has several advantages over the PDE approach since the implementation is simpler, has fewer parameters, and no numerical instability issues. Besides, morphological algorithms are faster, which makes them suitable for processing large images and volumes such as those from CT scans. For robustness purpose, we have used a ACWE algorithm combined with several smoothing operations at each iteration. Larger numbers of smoothing operations lead to smoother segmentations. More recently, several approaches using deep learning techniques have been proposed [25][26][27][28] to segment the lumen region. The approaches are based on deep convolutional networks that are able to extract deep feature: the first convolution layer extracts the low-level features, like edges, lines and corners, while the deeper the layers, the higher-level is the information they provide. The main limitation of CNN relies on the fact that this technique requires a large amount of training data due to the huge amount of parameters that it has. In this work, we have chosen to use a feature-based approach, which does not rely on a large amount of input data. A future work would be to generate synthetic data from the feature-based approach and use them in a deep neural network approach to speed up the segmentation process. Spine segmentation using threshold-based methods and connected component extraction are essential to a robust and precise detection of the lumen. The lumen segmentation using boundary propagation or active contour methods is used as the initial region for the thrombus segmentation. Thrombus and lumen segmentation approaches share similar techniques but, the edges of the thrombus are not so different from the surrounding tissues. Therefore, thrombus segmentation is a challenging problem since borders of thrombus are not well defined. Among popular methods applied to thrombus segmentation, one can cite active contours 29,30 , graph cut algorithms 31,32 or non-parametric statistical gray-level appearance model-based 33 www.nature.com/scientificreports www.nature.com/scientificreports/ automatic detection and segmentation of abdominal aortic thrombus using a deep convolutional neural network 35 . The pipeline was trained, validated and tested in 13 CT-scan of patients with AAA. The comparison with manually delimited volume resulted in a Dice score of 0.82 +/− 0.07. In our study, the thrombus segmentation was achieved using active contour methods with a strong balloon force to extract the elliptic feature of the thrombus. Our results showed the agreement of our method with the ground truth defined by the experienced human, with a Dice score of 0.88 +/− 0.12. These results are within the same range of other previously published methods. The segmentation of calcifications was performed using a threshold-based method in the aortic wall region. Several methods to detect calcifications in abdominal aorta have been proposed in the literature so far, most of them rely on a previous localization of the aorta walls and considers only high-density areas within the segmented volume. A few fully-automatic methods were proposed. Isgum et al. proposed a supervised learning algorithm for the segmentation of calcifications 36 . The main limitation of this type of methods relies on the requirement of a large amount of training data, 433 CT-scans have been manually annotated. 
In this study, we propose an automated calcification segmentation from contrasted scans and performed without learning datasets. In this objective, several threshold-based methods have been proposed in the literature: some investigators applied a fixed threshold in Hounsfield unit 37,38 whereas others proposed a refined method, based on an adaptive thresholding in low contrasted scans 39 . Here we propose a similar approach, but with a detection in high contrasted datasets, enabling a fully automated evaluation of calcifications from segmented aorta wall in CT-scans. As shown in the results, segmenting the thrombus can lead to increase significantly the number of calcifications detected. Comparing images to evaluate the quality of segmentation is an essential step to validate the methodology. Segmentation evaluation consists in comparing two segmentations methods by measuring the distance or similarity between them 9 . We compared our new automatic method to the ground truth of manual segmentation performed by an expert using several metrics including overlap, volume and surface distance based metrics. This work resulted in a proper segmentation of most patients of the dataset, despite some low contrast CT-scans or metallic interferences due to the presence of stent grafts. Our results demonstrated the agreement of the method with manual segmentation and the robustness of the pipeline to detect and automatically segment the lumen and the thrombus of AAA. It presents the advantage to reduce segmentation time and user interaction. An adaptation of the nominal value of some parameters remains available for a precise characterization of patients with specific anatomies. The expertise of a medical professional remains essential when interpreting the 2D and 3D visualizations to evaluate if a new set of parameter values is required or not. Several methods have been previously proposed for aneurysm segmentation. De Bruijne et al. developed semi-automatic methods for aneurysm sac segmentation as well as thrombus segmentation inspired from the active shape model segmentation 40,41 . The methods demonstrated accurate segmentation but required minimal user interaction to initiate the detection of the aorta. Another semi-automatic method based on a two-step segmentation for the inner and for the outer aneurysm border has been proposed and compared to manual segmentation. While it exhibited relative errors close to those obtained with human experts, it still required a minimal user interaction 42 . In the method proposed by Zhuge et al., the user intervention is not required except to identify the most proximal and distal slices of the aneurysm 43 . The results on a data set of 20 CT-scan were compared to the gold standard established by manual tracing from experts. The mean volume overlap was 95.3% +/− 1.4 and the mean segmentation time per patient was reduced to 7.4 min +/− 3.8 vs 20 to 30 min per patient with the human manual method. Another semi-automatic method to segment the lumen interface and the aortic wall of AAA was developed based on graph cut theory 44 . The comparison of the results with those obtained by human tracing from experts based on three metrics including the maximum aortic diameter, the volume overlap and the Hausdorff distance demonstrated the reliability of the method. Finally, Joldes et al. recently proposed a finite element analysis-based approach to analyze the rupture potential of an AAA and presented the preliminary results from 48 cases 3 . 
The software system consists of a collection of programs to enable image segmentation, geometry creation, meshing, finite element analysis and rupture potential index computation. While the analysis is performed automatically, the segmentation of the AAA and the intraluminal thrombus remains semi-automatic. Based on our results, several perspectives can be suggested. Further studies to compare the quantitative data obtained from this pipeline with established published scores on a larger dataset of CT-scan images would be of interest. The availability of the data is a key issue for training and validation of new imaging processing techniques. In practice, it is extremely difficult to obtain datasets from previously published cohorts as medical data sharing is subjected to legal and ethical restrictions and is not publically available 45 . The development of publically-available large-scale computational resources and datasets would be a step forward in the development of automated-imaging analysis. The full-automation of this method offers interesting perspectives to easily and quickly characterize a high quantity of CT-scans of AAA, which could be useful for applications in clinical research. This method brings a standardized characterization of AAA and could be useful in clinical practice to homogenize and facilitate the sizing of endograft. Even if further studies are required to validate the method on a larger set of patients, this fully automated pipeline could bring new insights in imaging processing and could have potential applications in both clinical practice and clinical research. Data Availability All data generated or analysed during this study are included in this published article.
A note on phase transitions for the Smoluchowski equation with dipolar potential In this note, we study the phase transitions arising in a modified Smoluchowski equation on the sphere with dipolar potential. This equation models the competition between alignment and diffusion, and the modification consists in taking the strength of alignment and the intensity of the diffusion as functions of the order parameter. We characterize the stable and unstable equilibrium states. For stable equilibria, we provide the exponential rate of convergence. We detail special cases, giving rise to second order and first order phase transitions, respectively. We study the hysteresis diagram, and provide numerical illustrations of this phenomena. Introduction In this short note, we study the following modified Smoluchowski equation (also called Fokker-Planck equation), for an orientation distribution f (ω, t) defined for a part is made for the sake of simplicity. It leaves enough flexibility to to reveal key behaviors in terms of phase transitions. It would be easy to remove it at the price of an increased technicality. Additionally, it means that when f is more concentrated in the direction of Ω f , the relative strength of the alignment force compared to diffusion is increased as well. This can be biologically motivated by the existence of some social reinforcement mechanism. We will see that we can observe a wealth of phenomena, including hysteresis. The purpose of this note is to summarize the analytical results, as well as some numerical simulations which illustrate this phenomena. All the proofs are detailed in [6], where (1) arises as the spatially homogeneous version of a space-dependent kinetic equation, obtained as the mean-field limit of a self-propelled particle system interacting through alignment. This spatially homogeneous study is crucial to determine the macroscopic behavior of this space-dependent kinetic equation. .1 Existence and uniqueness We first state results about existence, uniqueness, positivity and regularity of the solutions of (1). Under hypothesis 1.1, we have the following Theorem 1. Given an initial finite non-negative measure f 0 in H s (S), there exists a unique weak solution f of (1) such that f (0) = f 0 . This solution is global in time. Moreover, f ∈ C ∞ ((0, +∞) × S), with f (t, ω) > 0 for all positive t, and we have the following instantaneous regularity and uniform boundedness estimates (for m ∈ N, the constant C being independent of f ), for all t > 0: For later usage, we define Φ(|J|) as an anti-derivative of h: dΦ d|J| = h(|J|). In this case, the dynamics of (1) corresponds to the gradient flow of the following free energy functional: Indeed, if we define the dissipation term D(f ) by we get the following conservation relation: Equilibria We now define the von Mises distribution which provides the general shape of the non-isotropic equilibria of Q. Definition 2.1. The von Mises distribution of orientation Ω ∈ S and concentration parameter κ 0 is given by: The order parameter c(κ) is defined by the relation and has expression: The concentration parameter c(κ) defines a one-to-one correspondence κ ∈ [0, ∞) → c(κ) ∈ [0, 1). The case κ = c(κ) = 0 corresponds to the uniform distribution, while when κ is large (or c(κ) is close to 1), the von Mises distribution is closed to a Dirac delta mass at the point Ω. Some comments are necessary about the interval of definition of σ. 
First note that, under hypothesis 1.1, h is defined from [0, +∞), with values in an interval [0, κ max ), where we may have κ max = +∞. So σ is an increasing function from [0, κ max ) onto R + . Moreover, for later usage, we can define where ρ c > 0 may be equal to +∞, and where we recall that n denotes the dimension. The equilibria are given by the following proposition: Proposition 2.1. The following statements are equivalent: (i) f ∈ C 2 (S) and Q(f ) = 0. (iii) There exists ρ 0 and Ω ∈ S such that f = ρM κΩ , where κ 0 satisfies the compatibility equation: Let us first remark that the uniform distribution, corresponding to κ = 0 is always an equilibrium. Indeed, we have c(0) = σ(0) = 0 and (8) is satisfied. However, Proposition 2.1 does not provide any information about the number of the non-isotropic equilibria. Indeed, equation (8) can be recast into: which is valid as long as σ = 0. We know that σ is an increasing unbounded function from its interval of definition [0, κ max ) onto [0, +∞), and thanks to hypothesis 1.1 and to (7), we know that σ(κ) ∼ ρc n κ as κ → 0 (if ρ c < +∞). So since c(κ) ∼ 1 n κ as κ → 0 (see for instance [11]), we have the two following results (also valid in the case ρ c = +∞): We deduce that this function reaches its maximum, and we define For ρ < ρ * , the only solution to the compatibility condition is κ = 0, and the only equilibrium is the uniform distribution f = ρ. Except from these facts, we have no further direct information of this function κ → c(κ)/σ(κ), since c and σ are both increasing. Figure 1 depicts some examples of the possible shapes of the function κ → c(κ)/σ(κ). We see that depending on the value of ρ, the number of families of non-isotropic equilibria, given by the number of positive solutions of the equation (9), can be zero, one, two or even more. We now turn to the study of the stability of these equilibria, through the study of the rates of convergence. Rates of convergence to equilibrium The main tool to prove convergence of the solution to a steady state is LaSalle's principle, that we recall here (the proof follows exactly the lines of [11]). By the conservation relation (4), we know that the free energy F is decreasing in time (and bounded from below since |J| is bounded). LaSalle's principle states that the limiting value of F corresponds to an ω-limit set of equilibria: Let f 0 be a positive measure on the sphere S. We denote by Since we know the types of equilibria, we can refine this principle to adapt it to our problem: If no open interval is included in the set {κ, ρc(κ) = σ(κ)}, then there exists a solution κ ∞ to the compatibility solution (8) such that we have: This last proposition helps us to characterize the ω-limit set by studying the single compatibility equation (8). When κ = 0 is the unique solution, then this gives us that f converges to the uniform distribution. Otherwise, two cases are possible, either κ ∞ = 0, and f converges to the uniform distribution, or κ ∞ = 0, and the only unknown behavior is the one of Ω f (t) . If we are able to prove that it converges to Ω ∞ ∈ S, then f converges to a fixed non-isotropic steady-state ρM κ∞Ω∞ . However, Proposition 2.3 does not give information about quantitative rates of convergence of |J f | to ρc(κ ∞ ), and of f (t) − ρM κ∞Ω f (t) H s to 0, as t → ∞. So we now turn to the study of the behavior of the difference between the solution f and a target equilibrium ρM κ∞Ω f (t) . This study consists in two types of expansion. 
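Before turning to stability, the compatibility condition (9) is easy to explore numerically. The sketch below (Python with NumPy/SciPy; the helper names are ours and the choice of σ is only an example) evaluates c(κ) by one-dimensional quadrature from the definition of the von Mises distribution and then locates the positive solutions of ρ c(κ) = σ(κ) by sign changes of the residual, which is one way of counting the branches visible in Figure 1 for a given ρ.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

def order_parameter(kappa, n=3):
    """c(kappa) = <omega . Omega> under the von Mises density on the sphere S^{n-1},
    reduced to a 1D integral in the polar angle with surface weight sin(theta)^(n-2)."""
    if kappa == 0.0:
        return 0.0
    weight = lambda t: np.exp(kappa * np.cos(t)) * np.sin(t) ** (n - 2)
    num, _ = quad(lambda t: np.cos(t) * weight(t), 0.0, np.pi)
    den, _ = quad(weight, 0.0, np.pi)
    return num / den

def nonisotropic_equilibria(rho, sigma, n=3, kappa_max=50.0, grid=1000):
    """Positive roots kappa of the compatibility equation rho * c(kappa) = sigma(kappa),
    bracketed by sign changes on a grid and refined with Brent's method."""
    residual = lambda k: rho * order_parameter(k, n) - sigma(k)
    ks = np.linspace(1e-6, kappa_max, grid)
    vals = np.array([residual(k) for k in ks])
    return [brentq(residual, a, b)
            for a, b, ra, rb in zip(ks[:-1], ks[1:], vals[:-1], vals[1:]) if ra * rb < 0]

# Example with the simplest case sigma(kappa) = kappa (a case recalled later in the text):
# nonisotropic_equilibria(rho=4.0, sigma=lambda k: k, n=3)   # one positive root since rho > n
```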
If we expand the solution around the uniform equilibrium, some simple energy estimates give us exponential convergence when ρ < ρ c . But when we expand the solution around a non-isotropic equilibrium ρM κ ∞ Ω f (t) , we see that the condition of stability is related to the monotonicity of the function κ → c(κ)/σ(κ). Hence we can see directly on the graph of this function (see the examples in Figure 1) both the number of families of equilibria and their stability: if the function is decreasing, the family is stable. By contrast, it is unstable when the function is increasing. When the difference between f and ρM κ ∞ Ω f (t) converges exponentially fast to 0 (on the stable branch), we are able to control the displacement of Ω f (t), which gives convergence to Ω ∞ ∈ S. We then have convergence of f to a given equilibrium ρM κ ∞ Ω ∞ . All these results are summarized in the following two theorems. In what follows, we say that a constant is a universal constant when it does not depend on the initial condition f 0 (that is to say, it depends only on ρ, n and the coefficients of the equation ν and τ , and on the exponent s of the Sobolev space H s in which the result is stated). Theorem 2. We have the following instability and exponential stability results around the uniform equilibrium: • If ρ < ρ c , the solution converges exponentially fast to the uniform equilibrium f = ρ, with a universal rate. • If ρ > ρ c , and if J f 0 ≠ 0, then we cannot have κ ∞ = 0 in Proposition 2.3: the solution cannot converge to the uniform equilibrium. To study the stability around a non-isotropic equilibrium, we fix ρ, and we denote by κ a positive solution to the compatibility equation (we will not write the dependence of c and σ on κ when there is no possible confusion). We denote by F κ the value of F(ρM κΩ ) (independent of Ω ∈ S). Theorem 3. We have the following instability and exponential stability results when starting close to a non-isotropic equilibrium: • Suppose (σ/c)′(κ) > 0. For all s > (n − 1)/2, there exist universal constants δ > 0 and C > 0, such that for any initial condition f 0 satisfying ‖f 0 − ρM κΩ ‖ H s < δ for some Ω ∈ S, there exists Ω ∞ ∈ S such that f converges exponentially fast to ρM κΩ ∞ in H s , where the rate is expressed in terms of the constant Λ κ . The constant Λ κ is the best constant for a weighted Poincaré inequality (see the appendix of [5] for more details on this constant, which does not depend on Ω). • Suppose (σ/c)′(κ) < 0. Then any equilibrium of the form ρM κΩ is unstable, in the following sense: in any neighborhood of ρM κΩ , there exists an initial condition f 0 such that F(f 0 ) < F κ . Consequently, in that case, we cannot have κ ∞ = κ in Proposition 2.3. Second order phase transition Let us now focus on the case where we always have (σ/c)′ > 0 for all κ > 0 (see for example the lowest two curves of Figure 1). In this case, the compatibility equation (9) has a unique positive solution for ρ > ρ c . With the results of the previous subsection about stability and rates of convergence, we obtain the behavior of the solution for any initial condition f 0 with initial mass ρ. • If ρ < ρ c , then the solution converges exponentially fast towards the uniform distribution f ∞ = ρ. • If ρ = ρ c , the solution converges to the uniform distribution. The special case where J f 0 = 0 leads to the heat equation ∂ t f = τ 0 ∆ ω f . Its solution converges exponentially fast to the uniform distribution, but this solution is not stable under small perturbation of the initial condition. Let us remark that for some particular choice of the coefficients, as in [11], it is also possible to get an algebraic rate of convergence in the second case ρ = ρ c .
For example, when σ(κ) = κ, we have ‖f − ρ‖ ≤ C/√t for t sufficiently large. So we can describe the phase transition phenomena by studying the order parameter of the asymptotic equilibrium, c = |J f ∞ |/ρ, as a function of the initial density ρ. We have c(ρ) = 0 if ρ ≤ ρ c , and c is a positive continuous increasing function for ρ > ρ c . In the common situation where c/σ = 1/ρ c − a κ^(1/β) + o(κ^(1/β)) when κ → 0, it is easy to see, since c(κ) ∼ κ/n when κ → 0, that c(ρ) behaves like (ρ − ρ c )^β , up to a positive constant, as ρ → ρ c from above. Since c/σ is Lipschitz, we always have β ≤ 1. So the first derivative of c is discontinuous at ρ = ρ c . This is the case of a second order phase transition (also called continuous phase transition). The critical exponent β can take arbitrary values in (0, 1], as can be seen by taking h(|J|) such that σ(κ) = c(κ)(1 + κ^(1/β)). In general, we have the following practical criterion, which ensures a second order phase transition. Typical example We now turn to a specific example, where all the features presented in the stability study can be seen. We focus on the case where ν(|J|) = |J|, as in [11], but we now take τ (|J|) = 1/(1 + |J|). From the modeling point of view, this occurs in the Vicsek model with vectorial noise (also called extrinsic noise) [1,2]. In this case, we have h(|J|) = |J| + |J|^2 , so the assumptions of Lemma 1 are not fulfilled, and the function σ is given by σ(κ) = (√(1 + 4κ) − 1)/2. Expanding c/σ when κ is large or κ is close to 0, we find that, consequently, there exists more than one family of non-isotropic equilibria when ρ is close to ρ c = n (and ρ < ρ c ). The function κ → c(κ)/σ(κ) can be computed numerically. The results are displayed in Figure 2 in dimensions n = 2 and n = 3. We observe the following features: • There exists a unique critical point κ * for the function c/σ, corresponding to its global maximum 1/ρ * (in dimension 2, we obtain numerically ρ * ≈ 1.3726 and κ * ≈ 1.2619, in dimension 3 we get ρ * ≈ 1.8602 and κ * ≈ 1.9014). • The function c/σ is strictly increasing on [0, κ * ) and strictly decreasing on (κ * , ∞). From these properties, it follows that the solution associated with an initial condition f 0 with mass ρ can exhibit different types of behavior, depending on the three following regimes for ρ. • If ρ * < ρ < n, there are two families of stable solutions: either the uniform equilibrium f = ρ or the von Mises distributions of the form ρM κΩ , for Ω ∈ S, where κ is the unique solution with κ > κ * of the compatibility equation (8). If f 0 is sufficiently close to one of these equilibria, there is exponential convergence to an equilibrium of the same family. The von Mises distributions of the other family (corresponding to solutions of (8) such that 0 < κ < κ * ) are unstable in the sense given in Theorem 3. • If ρ > n and J f 0 ≠ 0, then there exists Ω ∞ ∈ S such that f converges exponentially fast to the von Mises distribution ρM κΩ ∞ , where κ is the unique positive solution to the compatibility equation ρ c(κ) = σ(κ). At the critical point ρ = ρ * , the uniform equilibrium is stable (and for any initial condition sufficiently close to it, the solution converges exponentially fast to it), but the stability of the family of von Mises distributions {ρ * M κ * Ω , Ω ∈ S} is unknown. At the critical point ρ = n, the family of von Mises distributions {nM κ c Ω , Ω ∈ S} is stable, where κ c is the unique positive solution of (8). For any initial condition sufficiently close to nM κ c Ω for some Ω ∈ S, there exists Ω ∞ such that the solution converges exponentially fast to nM κ c Ω ∞ .
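The critical values quoted above are easy to reproduce numerically. The sketch below maximizes c(κ)/σ(κ) for σ(κ) = (√(1 + 4κ) − 1)/2 in dimensions 2 and 3, using the same quadrature-based c(κ) as in the earlier sketches; the bracketing interval handed to the optimizer is an arbitrary choice, and the expected values in the comments are simply the numbers quoted in the text.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

def c(kappa, n):
    w = lambda t: np.exp(kappa * np.cos(t)) * np.sin(t) ** (n - 2)
    return quad(lambda t: np.cos(t) * w(t), 0.0, np.pi)[0] / quad(w, 0.0, np.pi)[0]

def sigma(kappa):
    # sigma = h^{-1} for h(|J|) = |J| + |J|^2 (the Vicsek-with-vectorial-noise example)
    return 0.5 * (np.sqrt(1.0 + 4.0 * kappa) - 1.0)

for n in (2, 3):
    # kappa* maximizes c/sigma; the maximal value is 1/rho*
    res = minimize_scalar(lambda k: -c(k, n) / sigma(k), bounds=(1e-3, 20.0), method="bounded")
    kappa_star = res.x
    rho_star = sigma(kappa_star) / c(kappa_star, n)
    print(f"n = {n}: kappa* = {kappa_star:.4f}, rho* = {rho_star:.4f}, rho_c = n = {n}")
# Expected from the text: n = 2 -> kappa* ~ 1.2619, rho* ~ 1.3726
#                         n = 3 -> kappa* ~ 1.9014, rho* ~ 1.8602
```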
In this latter case (ρ = n), however, the stability of the uniform distribution f = n is unknown. As previously, in the special case J f 0 = 0, the equation reduces to the heat equation and the solution converges to the uniform equilibrium. Since c(κ) is an increasing function of κ, we can invert the relation κ → c(κ) into c → κ(c) and express the density ρ = σ(κ(c))/c as a function of c. The result is depicted in Figure 3 for dimensions 2 and 3. With this picture, we recover the phase diagram in a conventional way: the possible order parameters c for the different equilibria are given as functions of ρ. The dashed lines correspond to branches of equilibria which are unstable. We can also obtain the corresponding diagrams for the free energy and the rates of convergence. For this particular example, the free energies F(ρ) and F κ (we recall that they correspond respectively to the free energy of the uniform distribution and of a von Mises distribution ρM κΩ for a positive solution κ of the compatibility equation (8), including both stable and unstable branches) can be computed explicitly. The plots of these functions in dimensions 2 and 3 are depicted in the left part of Figure 4. Since the functions are very close in the figure for some range of interest, we depict the difference F κ − F(ρ) in a more appropriate scale in the right part of Figure 4. The dashed lines correspond to unstable branches of equilibria. We observe that the free energy of the unstable non-isotropic equilibria (dashed lines) is always above that of the uniform distribution. There exist ρ 1 ∈ (ρ * , ρ c ) and a corresponding solution κ 1 of the compatibility equation (8) (with κ 1 > κ * , corresponding to a stable family of non-isotropic equilibria) such that F κ 1 = F(ρ 1 ). If ρ < ρ 1 , the global minimizer of the free energy is the uniform distribution, while if ρ > ρ 1 , then the global minimum is reached for the family of stable von Mises equilibria. The physical relevance of this value is not clear though, as we will see in the numerical illustration of the next subsection. The rates of convergence to the stable equilibria, following Theorems 2 and 3, are denoted λ 0 and λ κ : λ 0 is the rate of convergence to the uniform distribution ρ, and λ κ is the rate of convergence to the stable family of von Mises distributions ρM κΩ , where κ is the unique solution of the compatibility condition (8) such that κ > κ * . Details for the numerical computation of the Poincaré constant Λ κ are given in the appendix of [5]. The computations in dimensions 2 and 3 are depicted in Figure 5. Numerical illustrations of the hysteresis phenomenon In order to highlight the role of the density ρ as the key parameter for this phase transition, we introduce the probability density function f̃ = f /ρ and we obtain the corresponding evolution equation (16) for f̃ . When ρ is constant, this equation is equivalent to (1). We now consider ρ as a parameter varying slowly with time (compared to the time scale of the convergence to equilibrium, see Figure 5). If this parameter starts from a value ρ < ρ * , and increases slowly, the only stable distribution is initially the uniform distribution f̃ = 1, and it remains stable. So we expect that the solution stays close to it, until ρ reaches the critical value ρ c . For ρ > ρ c , the only stable equilibria are the von Mises distributions, and the solution converges to one of these equilibria. The order parameter, defined as c(f̃ ) = |J f̃ |, then jumps from 0 to c c = c(κ c ).
If the density ρ is then slowly decreased, the solution stays close to a von Mises distribution, and the order parameter slowly decreases, until ρ reaches ρ * again. For ρ < ρ * , the only stable equilibrium is the uniform distribution, and the order parameter jumps from c * = c(κ * ) to 0. This is a hysteresis phenomenon: the order parameter describes an oriented loop, called the hysteresis loop. Let us now present some numerical simulations of the system (16) in dimension n = 2. We start with an initial condition which is a small perturbation of the uniform distribution, and we take ρ = 1.75 − 0.75 cos(πt/T ), with T = 500. We use a standard central finite difference scheme (with 100 discretization points), implicit in time (with a time step of 0.01). The only problem with this approach is that the solution converges very strongly to the uniform distribution for ρ < ρ c ; after passing ρ c , the linear rate of explosion for J f̃ is given by e^((ρ/ρ c − 1)t), which is very slow when ρ is close to ρ c . So, since J f̃ is initially very small when passing the threshold ρ = ρ c , we would have to wait an extremely long time in order to see the convergence to the stable von Mises distribution. To overcome this problem, we add a threshold ε and strengthen |J f̃ | whenever ‖f̃ − 1‖ ∞ ≤ ε. We note that after this transformation we still have ‖f̃ − 1‖ ∞ ≤ ε if that was the case before applying it. Figure 6 depicts the result of a numerical simulation with a threshold ε = 0.02. We clearly see the hysteresis cycle, which agrees very well with the theoretical diagram. The jumps at ρ = ρ * and ρ = ρ c are closer to the theoretical jumps when T is very large. We were not able to see any numerical significance of the value ρ 1 (for which uniform and non-isotropic distributions have the same free energy) in all these numerical simulations. In particular, ρ 1 is close to ρ * (see Figure 4), so in most of the cases where both uniform and non-isotropic distributions are stable, the uniform distribution is not the global minimizer of the free energy; in practice, however, metastability is very strong, and the solution still converges to the uniform distribution. Conclusion In this note, we have given a summary of strong results on the stability and instability of the equilibrium states of the modified Smoluchowski equation (1). This gives a precise description of the dynamics of the solution when time goes to infinity: it converges exponentially fast to a fixed equilibrium, with explicit formulas for the rates of convergence. We have also exhibited a specific example in which we observe a first order phase transition with a hysteresis loop (in contrast with the second order phase transition of the original Smoluchowski equation with dipolar potential [11]). The details of the proofs can be found in a longer paper [6], together with numerical comparisons between the particle and kinetic models, in order to confirm that the hysteresis is really intrinsic to the system and not simply an artifact of the kinetic modeling. Figure 6: numerical simulation of (16), with time-varying ρ, in dimension 2. The red curve is the theoretical curve; the blue one corresponds to the simulation.
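As a complement to Figure 6, here is a minimal quasi-static sketch of the same hysteresis loop: instead of integrating the kinetic equation (16), it sweeps ρ up and down and simply follows the locally stable equilibrium branch (the uniform branch up to ρ c , the large-κ von Mises branch down to ρ * ). The values of κ * and ρ * are the dimension-2 numbers quoted above; the sweep range, resolution and root-search interval are arbitrary illustration choices.

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

n = 2  # dimension (the circle)

def c(kappa):
    w = lambda t: np.exp(kappa * np.cos(t)) * np.sin(t) ** (n - 2)
    return quad(lambda t: np.cos(t) * w(t), 0.0, np.pi)[0] / quad(w, 0.0, np.pi)[0]

sigma = lambda k: 0.5 * (np.sqrt(1.0 + 4.0 * k) - 1.0)
kappa_star, rho_star, rho_c = 1.2619, 1.3726, n   # dimension-2 values quoted in the text

def upper_root(rho, kmax=200.0):
    """The root kappa > kappa* of rho*c(kappa) = sigma(kappa), when it exists (rho > rho*)."""
    g = lambda k: rho * c(k) - sigma(k)
    return brentq(g, kappa_star, kmax) if g(kappa_star) > 0 and g(kmax) < 0 else None

def quasi_static_sweep(rhos):
    """Follow the locally stable branch: stay on the current branch as long as it exists."""
    state, orders = "uniform", []
    for rho in rhos:
        root = upper_root(rho)
        if state == "uniform" and rho > rho_c:      # uniform branch loses stability at rho_c
            state = "von Mises"
        if state == "von Mises" and root is None:   # von Mises branch disappears below rho*
            state = "uniform"
        orders.append(0.0 if state == "uniform" else c(root))
    return np.array(orders)

rho_up = np.linspace(1.0, 2.5, 60)
print(quasi_static_sweep(rho_up))         # order parameter jumps up only once rho exceeds rho_c = 2
print(quasi_static_sweep(rho_up[::-1]))   # on the way down it drops back to 0 only below rho* ~ 1.37
```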
2012-12-17T08:14:52.000Z
2012-06-25T00:00:00.000
{ "year": 2012, "sha1": "ed8f1e1de41de66280789a3757f54f18318019e5", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "ed8f1e1de41de66280789a3757f54f18318019e5", "s2fieldsofstudy": [ "Mathematics", "Physics" ], "extfieldsofstudy": [ "Physics", "Mathematics" ] }
53706895
pes2o/s2orc
v3-fos-license
Spin, charge and orbital ordering in ferrimagnetic insulator YBaMn$_2$O$_5$ The oxygen-deficient (double) perovskite YBaMn$_2$O$_5$, containing corner-linked MnO$_5$ square pyramids, is found to exhibit ferrimagnetic ordering in its ground state. In the present work we report generalized-gradient-corrected, relativistic first-principles full-potential density-functional calculations performed on YBaMn$_2$O$_5$ in the nonmagnetic, ferromagnetic and ferrimagnetic states. The charge, orbital and spin orderings are explained with site-, angular momentum- and orbital-projected density of states, charge-density plots, electronic structure and total energy studies. YBaMn$_2$O$_5$ is found to stabilize in a G-type ferrimagnetic state in accordance with experimental results. The experimentally observed insulating behavior appears only when we include ferrimagnetic ordering in our calculation. We observed significant optical anisotropy in this material originating from the combined effect of ferrimagnetic ordering and crystal field splitting. In order to gain knowledge about the presence of different valence states for Mn in YBaMn$_2$O$_5$ we have calculated $K$-edge x-ray absorption near-edge spectra for the Mn and O atoms. The presence of the different valence states for Mn is clearly established from the x-ray absorption near-edge spectra, hyperfine field parameters and the magnetic properties study. Among the experimentally proposed structures, the recently reported description based on $P$4/$nmm$ is found to represent the stable structure. I. INTRODUCTION Perovskite-type transition-metal oxides ABO 3 and their oxygen-deficient relatives exhibit a variety of interesting physical properties including high-temperature superconductivity, metal-insulator transition and a variety of cooperative magnetic phenomena. Among them manganites have recently attracted particular attention because of the discovery of colossal negative magnetoresistance (CMR) in La 1−x Sr x MnO 3 , 1 La 1−x Ba x MnO 3 1 and related phases. [2][3][4][5] This renewed interest in the mixed-valence manganese perovskites such as La 1−x Pb x MnO 3 is due to their potential technological applications. 6,7 In addition, the search for new high-temperature superconductors in mixed-oxide materials is a driving force for attention. The mechanism of high-temperature superconductivity is believed to be linked to cooperative interaction between copper 3d x 2 −y 2 and 3d z 2 orbitals and oxygen 2p orbitals. 8 An attractive approach to obtain insight in the nature of this phenomenon is to examine the magnetic and electrical properties of non-copper oxide analogues of known high-temperature superconducting cuprates. Manganese is a good choice for such a task, as in an octahedral perovskite-like configuration, Mn 3+ (d 4 high spin with a single electron in an e g orbital) will experience a similar Jahn-Teller (JT) distortion to that of Cu 2+ [d 9 high spin with three electrons (or one hole) in the e g orbitals]. The hole-doped manganese perovskites show some similarities to the corresponding hole-doped Cu phases in which superconductivity occurs. 9 Structural similarities between these two groups of materials suggest that new Mn analogues of the high-temperature superconducting Cu oxides may be prepared. 
Since the discovery of CMR phenomena in perovskite-related manganites, extensive studies have been performed on manganese oxides with atomic arrangements related to the perovskite and pyrochlore stuctures over a wide variety of compositions with the aim to explore exotic spin-charge coupled state. 10 Anisotropic CMR phenomena have also recently been reported in layered Ruddlesden-Popper variants of perovskites (RE, AE) n+1 Mn n O 3n+1 (RE = rare earth; AE = alkaline earth) 11 for n = 2. Similar effects have also been observed in the oxygen-deficient cubic pyrochlore Tl 2 Mn 2 O 7−δ . The chemical features common to these materials are an intimately connected Mn-O-Mn network, within a three-dimensional or multi-layered structure, and an average Mn oxidation state between +3 and +4 (obtained by hole-doping of Mn 3+ ). 12 At low temperatures, manganese perovskites are characterized by strong competition between charge-carrier itineracy and localization. In the former case a ferromagnetic (F) metallic state is formed. In the latter case, the localized carriers tend to form charge-ordered (CO) states, which have a predominantly antiferromagnetic (AF) insulating character. Hence, we have a competition between F with metallic behavior and cooperative JT distortion with CO. The CO state can be converted into a F metallic state, by the application of a magnetic field. The intriguing doping-induced, temperature-dependent metal-insulator transition and the interwoven magnetic (spin), orbital and charge-ordering phenomena in mixed-valence manganese perovskites and transition-metal oxides have attracted much attention in recent years. 13 An active role of the orbital degree of freedom in the lattice and electronic response can be most typically seen in manganese perovskite oxides. As a matter of fact such properties appear to have their origin in the unique electronic structures derived from the hybridized Mn 3d and O 2p orbitals in the particular structural and chemical environment of a perovskite. The thus resulting intra-atomic exchange and the orbital degrees of freedom of the Mn 3d electrons play essential roles in this constellation. Furthermore, various kinds of structural distortion profoundly influence the electronic properties. The extensive study of CMR in RE 1−x AE x MnO 3 have brought forth novel features related to CO in these oxides. In transition-metal oxides with their anisotropic-shaped d orbitals, Coulomb interaction between the electrons (strong electron correlation effect) may be of great importance. Orbital ordering (OO) gives rise to anisotropic interactions in the electron-transfer process which in turn favors or disfavors the double-exchange and the superexchange (F or AF) interactions in an orbital direction-dependent manner and hence gives a complex spin-orbital-coupled state. OO in the manganese oxides occasionally accompanies the concomitant CO. 14 The ordered oxygen-deficient double perovskites REBaT 2 O 5+δ (T = transition metals like Fe, Co and Mn) have attracted much attention as new spin-charge-orbital coupled systems and new CMR materials. In the isostructural phases with RE = Gd and Eu CMR effects of some 40% are observed. 15 As the experimental findings have been made available only in recent years, little theoretical work has been undertaken to understand the origin of these microscopic properties. The present study reports a detailed theoretical investigation on the electronic structure and optical properties of YBaMn 2 O 5 . 
At low temperature, YBaMn 2 O 5 is an AF insulator with CO of Mn 3+ and Mn 2+ accompanied by OO and spin ordering (SO). 25 The mechanism of CO and SO in manganites is not at all clear. Different authors have emphasized the importance of different ingredients such as on-site Coulomb interactions, [16][17][18] JT distortion 19,20 and inter-site Coulomb interactions. 21 Therefore, we have attempted to study CO, OO and SO through full potential linear muffin-tin orbital (FPLMTO) and full potential linear augmented plane wave (FPLAPW) methods. Similar to Fe 3 O 4 and SmBaFe 2 O 5+w where Fe takes the conventional valence states of Fe 3+ and Fe 2+ at low temperature, 22,23 Mn is reported to occur as Mn 3+ and Mn 2+ in YBaMn 2 O 5 . Above the so called Verwey temperature (T V ) valence-state mixing has been observed in Fe 3 O 4 as well as in SmBaFe 2 O 5+w . This brings in an additional interesting aspect in the study of the electronic structure and magnetic properties of Mn in YBaMn 2 O 5 . The first structure determinations based on powder x-ray (300 K) 12 and neutron (100−300 K) 24 diffraction data found that YBaMn 2 O 5 crystallizes in space group P 4/mmm, whereas a more recent powder neutron (2−300 K) 25 diffraction study (PND) found P 4/nmm. So, a theoretical examination of the total energy for the two different structural alternatives is required. The rest of the paper is arranged as follows: Sec. II gives crystal structure details for YBaMn 2 O 5 . The theoretical methods used for the calculations are described in Sec. III. The analysis of the band structure is given in Sec. IV A. Sec. IV B deals with the nature of the chemical bonding in YBaMn 2 O 5 , analyzed with the help of site-, angular momentumand orbital-projected density of states (DOS). Sec. IV C discusses CO and OO in YBaMn 2 O 5 with the help of charge density plots. The results from calculations of optical spectra and x-ray absorption near edge (XANE) spectra are discussed in Sec. IV D and IV E respectively. Sec. IV F deals with hyperfine parameters. Finally the important conclusions are summarized in Sec. V. II. CRYSTAL STRUCTURE Chapman et al. 12 synthesized rather impure YBaMn 2 O 5 and reported the crystal structure parameters according to space group P 4/mmm [described as double or ordered, oxygendeficient perovskite; closely related to YBaCuFeO 5 ]. These findings were subsequently confirmed by McAllister and Attfield 24 who also establised a model for the ferrimagnetic ordering of the magnetic moments of Mn in YBaMn 2 O 5 (still based on an impure sample). More recently Millange et al. 25 have succeeded in preparing phase-pure YBaMn 2 O 5 and these authors report crystal and magnetic structure parameters according to space group P 4/nmm. Within the P 4/nmm description YBaMn 2 O 5 crystallizes with a = 5.5359 and c = 7.6151Å; Y in 2(b), Ba in 2(a), Mn(1) in 2(c) with z = 0.2745, Mn(2) in 2(c) with z = 0.7457, O(1) in 8(j) with x = 0.4911, y = 0.9911 and z = 0.4911 and O(2) in 2(c) with z = 0.0061. The lacking oxygens in the yttrium plane, compared with the perovskite-aristotype structure reduces the coordinate number of yttrium to 8, while barium retains the typical twelve coordination of the perovskite structure. The Mn-O network consists of double layers of MnO 5 square pyramids, corner shared in the ab plane and linked via their apices. According to the P 4/nmm description (Fig. 
1 The present calculations have used the full-potential linear muffin-tin orbital (FPLMTO) method 26 where no shape approximation is assumed for the one-electron potential and charge density. The basis geometry consists of muffin-tin (MT) spheres centered around the atomic sites with an interstitial region in between. Inside the MT spheres the charge density and potential are expanded by means of spherical harmonic functions multiplied by a radial component. The interstitial basis function is a Bloch sum of linear combinations of Neumann or Henkel functions depending on the sign of the kinetic energy κ 2 (corresponding to the basis functions in the interstitial region). Each Neumann or Henkel function is then augmented (replaced) by a numerical basis function inside the MT spheres, in the standard way of the linear MT orbital method. 27 Since a Bloch sum of atomic centered Henkel or Neumann functions has the periodicity of the underlying lattice it may be expanded in a Fourier series, as done here. The spherical-harmonic expansion of the charge density, potential and basis functions was performed up to ℓ max = 6. The basis included Y 4p, 5s, 5p and 4d states, Ba 5p, 6s, 6p, 5d and 4f states, Mn 4s, 4p and 3d states and O 2s, 2p and 3d states. Furthermore, the calculations are all-electron as well as fully relativistic. The latter level is obtained by including the mass velocity and Darwin (and higher order) terms in the derivation of the radial functions (inside the MT spheres) whereas the spin-orbit coupling was included at the variational step using an (ℓ,s) basis. Moreover, the present calculations made use of a so-called double basis, to ensure a well-converged wave function. This means that two Neumann or Henkel functions were applied, each attached to its own radial function with an (n,ℓ) quantum number. The integrations over the Brillouin zone (BZ) in the ground state calculations were performed as a weighted sum, using the special point sampling, 28 with weights reflecting the symmetry of a given k point. We also used a Gaussian smearing width of 20 mRy for each eigenvalue in the vicinity of the Fermi level to speed up the convergence. For the DOS and optical calculations, the tetrahedron integration was employed. The calculations were performed for the experimentally determined structural parameters (see Sec. II). For the exchange-correlation functional, E xc (n), we have used the generalized gradient approximation (GGA) where the gradient of the electron density is taken into account using Perdew and Wang 29 implementation of GGA. 192 k points in the irreducible part of the primitive tetragonal BZ were used for the self-consistent ground state calculations and 352 k points for the optical calculations. B. The FPLAPW computations For the XANES and orbital-projected DOS calculations we have applied the full-potential linearized-augmented plane wave (FPLAPW) method 30 in a scalar-relativistic version without spin-orbit coupling. The FPLAPW method divides space into an interstitial region (IR) and non-overlapping MT spheres centered at the atomic sites. In IR, the basis set consists of plane waves. Inside the MT spheres, the basis set is described by radial solutions of the one-particle Schrödinger equation (at fixed energies), and their energy derivatives multiplied by spherical harmonics. 
The charge densities and potentials in the atomic spheres were represented by spherical harmonics up to ℓ = 6, whereas in the interstitial region these quantities were expanded in a Fourier series with 3334 stars of the reciprocal lattice vectors G. The radial basis functions of each LAPW were calculated up to ℓ = 10 and the non-spherical potential contribution to the Hamiltonian matrix had an upper limit of ℓ = 4. Atomic-sphere radii R M T of 2.5, 2.8, 1.8 and 1.6 a.u. for Y, Ba, Mn and O, respectively, were used. Since the spin densities are well confined within a radius of about 1.5 a.u, the resulting magnetic moments do not depend appreciably with the chosen atomic-sphere radii. The initial basis set included 5s, 5p, 4d valence and 4s, 4p semicore functions for Y, 6s, 6p, 6d valence and 5s, 5p semicore functions for Ba, 4s, 4p, 3d valence and 3s, 3p semicore functions for Mn and 2s, 2p and 3d functions for O. These basis functions were supplemented with local orbitals 31 for additional flexibility to the representation of the semicore states and for generalization of relaxation of the linearization errors. Owing to the linearization errors DOS are reliable only to about 1 to 2 Ry above E F . Therefore, after selfconsistency was achieved for this basis set we included higher energy local orbitals: 5d-and 4f -like function for Y, 6d-and 4f -like function for Ba, 5s-and 5p-like functions for Mn and 3p-like functions for O. The BZ integration was done with a modified tetrahedron method 32 and we used 140 k points in the irreducible wedge of BZ. Exchange and correlation effects are treated within density-functional theory (DFT), using GGA. 29 C. Optical properties Optical properties of matter can be described by means of the transverse dielectric function ǫ( q, ω) where q is the momentum transfer in the photon-electron interaction and ω is the energy transfer. At lower energies one can set q = 0, and arrive at the electric dipole approximation, which is assumed throughout this paper. The real and imaginary parts of ǫ(ω) are often referred to as ǫ 1 and ǫ 2 , respectively. We have calculated the dielectric function for frequencies well above those of the phonons and therefore we considered only electronic excitations. In condensed matter systems, there are two contributions to ǫ(ω), viz. intraband and inter-band transitions. The contribution from intra-band transitions is important only for metals. The inter-band transitions can further be split into direct and indirect transitions. The latter involves scattering of phonons and are neglected here, and moreover these only make small contribution to ǫ(ω) in comparison to the direct transitions, 33 but have a temperature broadening effect. Also other effects, e.g., excitons (which normally give rise to rather sharp peaks) affect the optical properties. The direct inter-band contribution to the imaginary part of the dielectric function, ǫ 2 (ω) is calculated by summing all possible transitions from occupied to unoccupied states, taking the appropriate transition-matrix element into account. The dielectric function is a tensor for which all components are needed for a complete description. However, we restrict our considerations to the diagonal matrix elements ǫ νν (ω) with ν = x, y or z. 
The inter-band contribution to the diagonal elements of ǫ 2 (ω) is given by where e is the electron charge, m its mass, f kn the Fermi-Dirac distribution function, P ν nn ′ the projection of the momentum matrix elements along the direction ν of the electric field and E k n one electron energies. The evaluation of the matrix elements in Eq. (1) is done separately over the MT and interstitial regions. Further details about the evaluation of matrix elements are found in Ref. 34. The integration over BZ in Eq. (1) is performed using linear interpolation on a mesh of uniformly distributed points, i.e., the tetrahedron method. The total ǫ νν 2 was obtained from ǫ νν 2 (IBZ), i.e. ǫ νν 2 was calculated only for the irreducible (I) part of BZ using where N is the number of symmetry operations and σ i represents the symmetry operations; for shortness, ǫ(ω) is used instead of ǫ νν (ω). Lifetime broadening was simulated by convoluting the absorptive part of the dielectric function with a Lorentzian, whose full width at half maximum (FWHM) is equal to 0.005(hω) 2 eV. The experimental resolution was simulated by broadening the final spectra with a Gaussian of constant FWHM equal to 0.01 eV. After having evaluated Eq. (2) we calculated the inter-band contribution to the real part of the dielectric function ǫ 1 (ω) from the Kramers-Kronig relation In order to calculate ǫ 1 (ω) one needs a good representation of ǫ 2 (ω) up to high energies. In the present work we have calculated ǫ 2 (ω) up to 41 eV above the E F level, which also was the truncation energy used in Eq. (3). To compare our theoretical results with the experimental spectra we have calculated polarized reflectivity spectra using the following relation. The specular reflectivity can be obtained from the complex dielectric constant in Eq. (1) through the Fresnel's equation, We have also calculated the absorption coefficient I(ω) and the refractive index n using the following expressions: IV. RESULTS AND DISCUSSION A. Electronic band structure The FPLMTO calculations were performed on YBaMn 2 O 5 for three different magnetic configurations, viz., paramagnetic (P), ferromagnetic (F) and antiferromagnetic (AF). From Table I it can be seen that in the AF configuration, the spins are not cancelled and hence this state is really ferrimagnetic (Ferri). Moreover, Table I shows that Ferri YBaMn 2 O 5 has lower energy than the P and F configurations. The energy-band structure of Ferri YBaMn 2 O 5 is shown in Fig. 2a and 2b for up-and down-spin bands, respectively. YBaMn 2 O 5 is seen to be an indirect-band-gap semiconductor. A closer inspection of the energy-band structure shows that the band gap is between the top of the valence band (VB) at the Γ point and the bottom of the conduction band (CB) at the Z point. As the unit cell contains 18 atoms, the band structure is quite complicated and Fig. 2 therefore only depicts energy range of −7.5 to 7.5 eV. There is a finite energy gap of 1.307 eV between the top-most occupied VB and the bottom-most unoccupied CB in the up-spin channel. For the purpose of more clarity, it is convenient to divide the occupied portion of the band structure in the up-spin channel into three energy regions: (i) Bands lying at and below −4 eV. (ii) Bands lying between −4 and −2 eV . (iii) The top of VB, closer to E F , viz., the range −2 to 0 eV. Region (i) contains 17 bands with contributions from Y 4s, 5s, Mn 3s, 3d and O 2p electrons. Region (ii) comprises bands which originate from completely filled O(1) and O(2) 2p orbitals. 
Region (iii) includes 10 bands. Among them one finds delocalized (dispersed) bands originating from Y 5s and O 2p orbitals and somewhat localized bands attributed to Mn(1) d z 2 , d x 2 −y 2 , d xz and d yz orbitals. The top-most occupied band contains electrons stemming from the Mn d z 2 orbital. In the unoccupied portion of the band structure, a corresponding division leads to two energy regions: (1) The bottom-most CB from 0 to 2 eV and (2) the middle range of CB from 2 to 4 eV. (Above 4 eV the bands are highly dispersed and it is quite difficult to establish the origin of the bands.) There are 9 bands in region (1) which have some Y 5s, Ba 5d, Mn(1) 3d xy and Mn(2) 3d characters. In region (2) the bands retain Y 6s, 4d, Ba 7s, 5d and Mn 4s, 4p and 3d characters. The energy band structure of the down-spin channel (Fig. 2b) has 16 bands in the region (i) up to −4 eV which arise from the s and p electrons of the Y, Ba, Mn and O atoms. The mainly s and p electron character of the bands, makes them appreciably dispersed. The second energy region (ii) contains 12 bands, which have Y 5s, Ba 5p, Mn(2) 3d and O(1), O(2) 2p character. The third region (iii) closer to E F has 10 bands which are mainly arising from Mn(1), Mn(2) 3d and O(1), O(2) 2p orbitals. Unlike the up-spin channel the down-spin channel contains two bands at E F which arise from the originally half-filled t 2g orbitals of Mn(1) and the half-filled d xy orbital of Mn (2). A finite band gap of 1.046 eV opens up between the highest occupied VB and bottommost unoccupied CB. The unoccupied portion of the down-spin channel is quite different from that of the up-spin channel. The lowest-lying unoccupied band has Mn(2) 4s electrons. Between 0 and 2 eV there are 8 bands which arise mainly from Mn(1) 3d electrons as well as from Mn (1), Mn(2) 4s electrons and O(1), O(2) 2p electrons. The dispersed bands present between 2 and 4 eV have Y 5s, 3d, Ba 6s, 5d and Mn(1) 3d characters. B. DOS characteristics In order to theoretically verify which of the two (P 4/mmm or P 4/nmm based) structures is energetically more stable, we performed first-principle calculations for both variants. The calculated DOS value at E F for the P phase P 4/mmm variant is 192.82 states/(Ry f.u.) and for P 4/nmm 149 states/(Ry f.u.) in the P state. Hence, a larger number of electrons are present at E F for the former variant, which favours the relative structural stability of the latter. Moreover, our calculations show that the P 4/nmm variant is 860 meV/f.u. lower in energy than the P 4/mmm variant. Therefore we conclude that YBaMn 2 O 5 is more stable in space group P 4/nmm than in P 4/mmm, viz. in accordance with the most recent PND-based experimental study. Our calculated total DOS curves for YBaMn 2 O 5 in the P, F and Ferri configurations are given in Fig. 3. The highest occupied energy level in VB, i.e., E F is marked by the dotted line. In the P and F cases finite DOS values are present in the vicinity of E F . Hence, both the P and F configurations exhibit metallic character. On going from the P to F case, the electrons start to localize which is seen from the reduced number of states at E F . Due to the electron localization, the gain in total energy of 3.1 eV (Table I) Table I it can be seen that the Ferri configuration has lower energy than the other two configurations. The present observation of the stabilization of a Ferri ground state in YBaMn 2 O 5 is consistent with the established magnetic structure. 
25 It is interesting to note that the introduction of the Ferri configuration is essential in order to obtain the correct semiconducting ground state for YBaMn 2 O 5 . Unlike LaMnO 3 (where the energy difference between the F and AF cases is ∼ 25 meV) 36 there is large energy difference (∼ 0.5 eV) between the Ferri and the F states of YBaMn 2 O 5 . So, a very large magnetic field is required to stabilize the F phase and induce insulator-to-metal transition in YBaMn 2 O 5 . In the REMnO 3 phases hole doping induces CE-type magnetic ordering in which the spins are F aligned in zigzag chains with AF coupling between these chains. In YBaMn 2 O 5 , the Mn spins are AF aligned within quasi-one-dimensional chains as well as between the chains. The main difference between YBaMn 2 O 5 and REMnO 3 is that the latter have e g electrons present in the vicinity of E F and that the superexchange interaction originates from the localized t 2g electrons. Owing to the square-pyramidal crystal field in YBaMn 2 O 5 the e g electrons also get localized and hence both e g and the t 2g electrons participate in the superexchange interaction. This is the main reason why the F state has much higher energy than the AF state in YBaMn 2 O 5 as compared with LaMnO 3 . To obtain more insight into the DOS features, we show the angular momentum-and site-decomposed DOS in Fig. 4. The lower panels for Y and Ba show that, in spite of the high atomic numbers for Y and Ba, small DOS values are seen in VB. The Y and Ba states come high up in CB (ca. 4 eV above E F ) indicating a nearly total ionization of these atoms. They lose their valence charge to form ionic bonding with oxygen. According to the crystal structure, Y and Ba are located in layers along c, which is clearly reflected in the electronic charge-density distribution within (110) in the AF configuration (Fig. 6). The distinction between Mn(1) and Mn (2) is clearly reflected in the different topology of their DOS curves. As seen from Table I Fig. 4 shows that the O 2p states are energetically degenerate with Mn 3d states in this energy range, implying that these orbitals form covalent bonds with Mn(1) and Mn(2) through hybridization. The almost empty DOS for O(1) and O(2) in CB implies that the oxygen atoms are in nearly completely ionized states in YBaMn 2 O 5 . In order to progress further in the understanding of the chemical bonding, charge, spin and orbital ordering in YBaMn 2 O 5 , we have plotted the orbital-decomposed DOS for the 3dorbitals of Mn(1) and Mn(2) in Fig. 5. This illustration shows that DOS for the d z 2 orbital for both Mn(1) and Mn(2) are well-localized. There is a sharp peak at −5 eV in the up-spin panel for Mn(1) and in the down-spin panel for Mn(2) which correlates with a well-localized peak in DOS for O(2) (Fig. 4) This is attributed to the 180 o Mn 3+ -O(2)-Mn 2+ bond angle which facilitates p-d σ bond to the O(2) p z orbital and superexchange interaction. 37 As upspin Mn(1) and down-spin Mn(2) are involved, we infer that the superexchange interaction results in AF spin ordering between the Mn atoms involved. The peaks at ca. 1 eV in up-spin Mn(1) and down-spin Mn(2) are attributed to the (non-bonding) d z 2 orbitals. Turning to the other e g orbitals (of d x 2 −y 2 character) for Mn(1) and Mn(2), these are spread in the ab plane. From Fig. 4, we see that the O 2p orbitals are also situated in the same energy range (−5 eV to 0) as the d x 2 −y 2 orbitals of Mn(1) and Mn(2), thus these orbitals and O p x and p y orbitals form p-d σ bond. 
As the bond angle Mn(1)-O(1)-Mn2 is only 157 o , the strength of this covalent bond is weak and consequently the AF superexchange interaction becomes weakened. 37 Despite the AF superexchange interaction, there is no exact cancellation of the spins of Mn(1) 3+ and Mn(2) 2+ , and the result is a Ferri state with a finite magnetic moment of 0.85 µ B . Transition-metal perovskite oxides which exhibit CO like La 1−x Sr x MnO 3 have an octahedral crystal field, whereas YBaMn 2 O 5 has a square-pyramidal arrangement around Mn. In this case the d orbitals of Mn split into low-lying e g orbitals and relatively higher-lying t 2g orbitals. Fig. 5, shows that t 2g is closer to E F than e g . The same feature is observed for the isostructural phase YBaCo 2 O 5 . 38 From crystal-structure considerations it is deduced that HOMO (highest occupied molecular orbital) is located at the top of the bonding π level of VB, arising from the Mn t 2g orbitals with a band gap to the empty LUMO (lowest unoccupied molecular orbital) located at the bottom of the antibonding π ⋆ level of CB. 35 Among the t 2g orbitals d xz and d yz of both Mn(1) and Mn(2) are energetically degenerate as clearly seen from the character of the DOS curve. (In YVO 3 also d xz and d yz are nearly degenerate 39 .) So Mn(1) and Mn(2) exist in high-spin state with the e g orbitals filled up before the t 2g orbitals. The d xz and d yz orbitals contain one electron each for Mn(1) and Mn (2). For Mn(1) a very small peak is seen, which could come from down-spin d xy , whereas a finite sized peak is observed for the same orbital of Mn (2). It is indeed the occupancy of this orbital which determines the magnetic moment of Mn(1) and Mn (2). If one considers the Goodenough-Kanamori 40 rules for magnetic interactions in manganese oxides, the expected magnetic order should be A-type AF (viz. F ordering within the layers and AF ordering between the layers; arising from superexchange interactions between occupied d x 2 −y 2 orbitals on Mn 2+ and empty d x 2 −y 2 on Mn 3+ ). However, owing to the large deviation of the Mn-O-Mn bond angle from 180 o along with CO the d x 2 −y 2 orbitals for both Mn species become occupied. Hence, AF ordering is observed between Mn within the planes as well as between the planes (see Fig. 7). One Mn couples AF to its six neighboring Mn in a G-type AF arrangement in accordance with experimental findings. 25 Hence, the theoretical calculations have provided the correct ground state with respect to the experiments. Magnetic susceptibility and magnetization measurements, have unequivocally shown that YBaMn 2 O 5 is in a Ferri state at low temperature 12,25 with a saturated moment between 0.5 and 0.95 µ B per YBaMn 2 O 5 formula unit. 12,25 The theoretically calculated magnetic moments for Mn are 3.07 and 3.93 µ B , repectively, giving a net magnetic moment of 0.86 µ B per formula unit for the Ferri state of YBaMn 2 O 5 . Our theoretically calculated value is less than the predicted (spin-only) value of 1.0 µ B . From the DOS analyses, we noted that there is strong hybridization between Mn 3d electrons and O 2p electrons. A finite magnetic moment of 0.0064 and 0.0032 µ B /atom are theoretically found to be present at O(1) and O(2), respectively. Hence, we conclude that the slight deviation in the saturated magnetic moment from that predicted by an idealized ionic model can be attributed to the strong hybridization between Mn 3d and O 2p electrons. 
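As a quick back-of-the-envelope check of this moment balance, the snippet below compares the idealized ionic (spin-only) expectation with the calculated site moments quoted above. The high-spin values of 4 µB for Mn3+ (d4) and 5 µB for Mn2+ (d5) are textbook spin-only estimates rather than numbers from this study, and the per-formula-unit oxygen counts (four O(1) and one O(2)) are inferred from the 8(j) and 2(c) site multiplicities with two formula units per cell.

```python
# Spin-only bookkeeping for the ferrimagnetic moment of YBaMn2O5 (all values in Bohr magnetons).

# Idealized ionic, high-spin estimates (standard spin-only values, not results of this study):
mn3_ionic, mn2_ionic = 4.0, 5.0              # Mn3+ (d4, HS) and Mn2+ (d5, HS)
print("ideal ionic net moment per f.u.:", mn2_ionic - mn3_ionic)        # -> 1.0

# Calculated site moments quoted in the text:
mn1, mn2 = 3.07, 3.93                        # Mn(1) ~ Mn3+ and Mn(2) ~ Mn2+, antiparallel
o1, o2 = 0.0064, 0.0032                      # small induced moments on O(1) and O(2)
n_o1, n_o2 = 4, 1                            # assumed site counts per formula unit (8j and 2c, Z = 2)

print("Mn-only net moment per f.u.:", round(mn2 - mn1, 2))              # -> 0.86, as quoted
print("total induced O moment per f.u.:", n_o1 * o1 + n_o2 * o2)        # orientation not specified in the text
```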
We have also performed calculations with the room-temperature structural parameters, confirming that the two manganese atoms possess different magnetic moments at this temperature. C. Charge and orbital ordering For the pseudo-cubic and layered perovskite manganese oxides, essentially three parameters control the electron-correlation strength and the resultant structural, transport and magnetic properties. 10 First, the hole-doping level (charge-carrier density or the band-filling level of CB). In the case of perovskite oxides the substitution of trivalent RE by divalent AE introduces holes in the Mn 3d orbitals. Second, the effective one-electron bandwidth (W ) or, equivalently, the e g electron-transfer interaction. The magnitude of W is directly determined by the size of the atoms at the RE and AE sites, which makes the Mn-O-Mn bond angle deviate from 180° and thus hinders the electron-transfer interaction. The correlation between CO and the size of RE and AE has been studied by several workers and is well illustrated by a phase diagram in Ref. 41. Third comes the dimensionality: the lowering of the electronic dimensionality causes a variety of essential changes in the electronic properties. The carrier-to-lattice coupling is so strong in manganites that the charge-localization tendency becomes very pronounced. In general the ground state of mixed-valent manganite perovskites is, therefore, either F and metallic or AF and CO. In all CO systems, the magnetic susceptibility drops rapidly at the CO temperature (T CO ). CO drastically influences the magnetic correlations in manganites. Investigations on the CO state have established an intimate connection to lattice distortion. It seems to be the lattice distortion associated with OO which localizes the charge and thus initiates CO. 41 The effect of the CO state on cooperative magnetic states is to produce insulating behavior. A high magnetic field induces a melting-like phenomenon of the electron lattice of the CO phase, giving rise to a huge negative magnetoresistance. 42 For these reasons, it is interesting to study CO in YBaMn 2 O 5 . Charge localization, which is a prerequisite for CO, is mutually exclusive with an F state according to the double-exchange mechanism. The double-exchange mechanism requires hopping of charge carriers from one Mn to an adjacent Mn via an intervening O. The CO state is expected to become stable when the repulsive Coulomb interaction between carriers dominates over the kinetic energy of the carriers. Hence, CO arises because the carriers are localized at specific long-range-ordered sites below the CO temperature. CO is expected to be favored for equal proportions of Mn 2+ and Mn 3+ as in the present case, and in YBaMn 2 O 5 it is associated with the AF coupling between Mn in the ab plane. CO does not occur in Pr 0.5 Sr 0.5 MnO 3 10 where A-type AF is the ground state, whereas CO is observed in Nd 0.5 Sr 0.5 MnO 3 10 below 150 K where CE-type AF is the ground state. CO depends on the d-electron bandwidth and hence it is worth considering this feature in some detail. On reduction of the Mn-O-Mn angle, the hopping between the Mn 3d and O 2p orbitals decreases and hence the e g bandwidth decreases. Consequently the system stabilizes in a Ferri-CO-insulating state. Usually the CO-insulating state transforms to a metallic F state on the application of a magnetic field. This may be the reason for the metallic behavior of the F phase found in our calculation.
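The next paragraph describes the resulting order in YBaMn2O5: a rock-salt (checker-board) arrangement of Mn2+ and Mn3+ combined with G-type antiferromagnetism, in which every Mn is antialigned with all six neighbouring Mn. As a purely illustrative bookkeeping sketch (not an output of the electronic-structure calculations, and ignoring the real pyramidal double-layer geometry), the parity rule below generates both patterns on an idealized cubic Mn sublattice.

```python
import numpy as np

# Idealized L x L x L Mn sublattice: the site parity (-1)^(i+j+k) generates both the
# rock-salt (checker-board) charge order and the G-type spin order.
L = 4
i, j, k = np.indices((L, L, L))
parity = (-1) ** (i + j + k)

valence = np.where(parity > 0, 2, 3)   # Mn2+ on one sublattice, Mn3+ on the other
spin = parity                          # +1/-1: every Mn antialigned with its six neighbours

# Check the G-type property: all nearest-neighbour spin products equal -1 (periodic boundaries).
nn_products = [spin * np.roll(spin, 1, axis=a) for a in range(3)]
print(all((p == -1).all() for p in nn_products))   # True
print(valence[:, :, 0])                            # checker-board pattern within one plane
```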
CO in YBaMn 2 O 5 is characterized by the real-space ordering of Mn 2+ and Mn 3+ species. Our calculations predict that a long-range CO of Mn 2+ and Mn 3+ with a rocksalt-type arrangement occurs at low temperatures. This can be viewed as chains of Mn 2+ and Mn 3+ running parallel to b and correspondingly alternating chains running along a and c (viz. a checker-board arrangement as seen from Fig. 7). Furthermore, there exists orbital degrees of freedom for the e g electrons and OO can lower the electronic energy through the JT mechanism. Therefore, mixed-valent manganites can have OO in addition to CO. 43 OO gives rise to the anisotropy in the electron-transfer interaction. This favors or disfavors the double-exchange interaction and the (F or AF) superexchange interaction in an orbital-dependent manner and hence gives a complex spinorbital-coupled state. Therefore, it is also interesting to study OO in some detail. Fig. 6 shows the electron charge density of YBaMn 2 O 5 in (001) and (110) planes. The electroncharge density is plotted in the energy range where the 3d orbitals reside, the shape of the d x 2 −y 2 and d z 2 orbitals are well reproduced in Fig. 6a and b, respectively. In Fig. 6a, Mn(1), O(1) and Mn(2) are linked through covalent bonds which is easily seen as the finite electron density on the connecting lines between the atoms. This illuminates the path for the AF superexchange interaction between them. When the size of RE and AE becomes smaller the one-electron bandwidth (or e g electron-transfer interaction) decreases in value. 41 For Y 3+ with an ionic radius of 1.25Å, [smaller than other REs like La 3+ (1.36Å) 39 ] the Mn(1)-O(1)-Mn(2) angle is much less than 180 o so the e g electron bandwidth is small compared with the t 2g electron bandwidth. Fig. 6a shows that despite the finite electron density between Mn and O, the p orbitals of O are not directed towards the lobes of the Mn d x 2 −y 2 orbitals. Hence, the strength of the resulting p-d covalent bond is decreased. The orbital projected DOS in Fig. 5 shows that the t 2g bandwidth is larger than the e g bandwidth owing to this hybridization effect. The transfer integral between the two neighboring Mn atoms in the crystal lattice is determined by the overlap between the 3d orbitals with the 2p orbital of O atom. Fig. 6b shows the electron density along (110) of the unit cell. The d z 2 orbital is ordered along c and for both Mn(1) and Mn (2), this orbital hybridizes with the O(1) p z orbital resulting in a p-d π bond. However, as the d z 2 orbital forms a strong σ bond with the p z orbital of O(2), the strength of this π bond is weak. The overlap between the d x 2 −y 2 and p z orbitals is zero because of their different orientation in the ab plane. Therefore, the electron in the d x 2 −y 2 orbital can not hop along c. 37 In this manner the e g electrons get localized and cause CO and OO. Owing to the fact that the Mn(1)-O(1) bond length is 1.908Å compared with 2.086Å for Mn(2)-O(1), more electronic charge is present on Mn 2+ than on Mn 3+ . This is visible in the orbital decomposed DOS (Fig. 5), where the d xy orbital of Mn 2+ has more states (electrons) than that of Mn 3+ . In cubic perovskites, the electron transfer is almost prohibited along c because of the orbital ordering of d x 2 −y 2 , which is also the origin of the inter-plane AF coupling. 14 PND 25 indicates that Mn 3+ has the occupied d z 2 orbital extending along [001], whereas the unoccupied d x 2 −y 2 orbital extends along [110] and [110]. 
A corresponding OO could be expected for Mn 2+ with both d z 2 and d x 2 −y 2 orbitals occupied. However, our detailed electronic structure studies show that both d z 2 and d x 2 −y 2 orbitals are partially occupied for Mn 2+ as well as Mn 3+ as shown in Fig. 5. On the other hand, according to our chargedensity analysis ( Fig. 6a and b) the d z 2 orbital is ordered along [001] and d x 2 −y 2 along [110] for Mn(1) and Mn(2) (Fig. 7), which is consistent with experimental findings. The Mn(1)-O-Mn(2) bond angle is much smaller than 180 o which reduces the effective e g -e g hopping, the e g bands get localized and the bandwidth reduced. This is the main reason for OO in YBaMn 2 O 5 . Both d z 2 and d x 2 −y 2 orbitals are aligned in the same orientation within the layer as well as between the layers as shown in Fig. 7. So, this type of OO is named F type. D. Optical properties Further insight into the electronic structure can be obtained from the calculated interband optical functions. It has been earlier found that the calculated optical properties for SnI 2 , NaNO 2 and MnX (X = As, Sb, Bi) [45][46][47] are in excellent agreement with the experimental findings, and we have therefore used the same theory to predict the optical properties of YBaMn 2 O 5 . Since, this material possesses unique Ferri ordering and insulating behavior along with an uniaxial crystal structure it may find application in optical devices. Yet another reason for studying the optical properties is that, it has been experimentally established 44 that the optical anisotropy of Pr 0.6 Ca 0.4 MnO 3 is drastically reduced above T CO . It is therefore expected that the optical anisotropy will provide more insight about CO and OO in YBaMn 2 O 5 . For YBaMn 2 O 5 with its tetragonal crystal structure, the optical spectrum is conveniently resolved into two principal directions E a and E c viz. with the electric field vector polarized along a and c, respectively. In the top-most panel of Fig. 8, the dispersive part of the diagonal elements of the dielectric tensor are given. The anisotropy in the dielectric tensor is clearly seen in this illustration. In the second panel of Fig. 8, the polarized ǫ 2 spectra are shown. The spectrum corresponding to E a and E c differ from one another up to ca. 10 eV whereas less difference is noticable in the spectra above 10 eV. Since there is an one-to-one correspondance between the inter-band transitions and band structures (discussed in Sec. IV A), we investigate the origin of the peaks in the ǫ 2 spectrum with the help of our calculated band structure. As YBaMn 2 O 5 stabilizes in the Ferri state, VB has an unequal number of bands in the up-and down-spin channels (Fig. 2), viz. 36 bands in the former and 38 in the latter. The two extra bands of the down-spin channel closer to E F in VB play an important role in the transitions as discussed below. We name the top-most band of VB as no. 38 and bottom-most band of CB as no. 39. The lowest-energy peak A results from inter-band transitions (no. 35 to 41, mostly O(1) 2p to Mn(2) d z 2 and no. 35 to 39, mostly Mn(1) d z 2 to Mn(2) 4p) and peak B results from transitions (no. 38 to 48 and 36 to 49, mostly Mn 3d to Mn 4p). The peak C originates from many transitions, including O 2p to Mn 3d, Y 5s to Y 5p etc. Peaks D, E and F are contributed by several transitions including O 2p to Mn 3d, Y 5s to Y 5p. 
Further, a very small peak is present in the higher-energy region (∼ 17 eV) of ǫ 2 which is due to transitions from lower-lying occupied levels to higher-lying unoccupied levels. The accumulation of broad Y 4d and Ba 5d bands in the high-energy part of CB results in very little structure in the higher-energy part of the optical spectra. The optical gaps for E a and E c are approximately the same indicating that the effective inter-site Coulomb correlation is the same for the in-plane and the out-of-plane orientation for the Ferri phase. This can be traced back to the G-type Ferri coupling in this material. In order to understand the origin of the optical anisotropy in YBaMn 2 O 5 we have also made optical property calculations for the F phase, which show that (in contrast to the Ferri phase) the ǫ 2 spectrum for E a is shifted some 3.5 eV to higher values than E c. Further, the ǫ 2 components of E c for the F phase is much smaller than that for the Ferri phase indicating that the large optical anisotropy in YBaMn 2 O 5 is originating from the G-type Ferri ordering. To emphasize the above finding, we have also plotted the spin-projected ǫ 2 spectra along the a [ǫ a 2 (ω)] and c [ǫ c 2 (ω)] directions (third and fourth panels of Fig. 8, respectively). Although the optical gap is approximately same for E a and E c in the ǫ 2 spectrum, there is a finite difference in the optical gaps related to up-and down-spin electrons in the ǫ a 2 (ω) and ǫ c 2 (ω) spectra. The optical gap for the down-spin case is smaller than that for up-spin case owing to the presence of the two narrow bands very close to E F in the down-spin channel of VB. There is a large difference between the spectra for up-and down-spins up to ca. 7 eV. The ǫ a 2 (ω) spectrum resulting from the up-spin states has somewhat more dispersed peaks than that from down-spin states. The ǫ a 2 (ω) spectrum resulting from the down-spin states has four well-defined peaks; two prominent peaks in the region 1.75 to 2 eV, and two additional peaks at ca. 2.25 and 3 eV. The magnitude of the down-spin peaks are higher than those of the up-spin peaks in the ǫ a 2 (ω) spectrum. The ǫ c 2 (ω) spectrum originating from up-and down-spin states have appreciable differences up to ca. 6 eV. The down-spin part has two well-defined peaks at ca. 1.75 and 2.75 eV. The up-spin part has dispersed peaks of lower magnitude than the down-spin part, the magnitude of the up-spin peaks in ǫ c 2 (ω) being generally higher than in ǫ a 2 (ω). The optical anisotropy is noticable in the direction-as well as spin-resolved ǫ 2 spectra. Hence, it is verified that the optical anisotropy originates both from crystal field effects as well as from the Ferri ordering. As reflectivity, absorption coefficient and refractive index are often subjected to experimental studies, we have calculated these quantities and reproduced them in Fig. 9. We now advertice for experimental optical studies on YBaMn 2 O 5 . E. XANES studies X-ray absorption spectroscopy (XAS) has developed into a powerful tool for the elucidation of the geometric and electronic structure of amorphous and crystalline solids. 48 X-ray absorption occurs by the excitation of core electrons, which makes this technique element specific. Although the X-ray absorption near edge structure (XANES) only provides direct information about the unoccupied electronic states of a material, it gives indirect information about the valence of a given atom in its particular environment and about occupied electronic states. 
This is because the unoccupied states are affected by the occupied states through interaction with the neighbors. The oxygen atoms are in two different chemical environments in YBaMn2O5, as clearly seen in the PDOS in Fig. 4. The calculated K-edge spectra for O(1) and O(2) shown in Fig. 10 involve transitions from the 1s core state to the unoccupied p states. In this context the Mn K-edge mainly probes the unoccupied Mn 4p states. It is generally accepted that O K-edge spectra are very sensitive to the local structure of transition-metal oxides. YBaMn2O5 contains Mn in the valence states Mn3+ and Mn2+, which, as discussed in Sec. IV C, experience CO. A direct experimental technique to visualize CO is not available. In order to visualize the presence of the different oxidation states of Mn, we have theoretically calculated the XANES K-edge spectra for these atoms and presented them in Fig. 10. Both Mn atoms are seen to have four peaks within the energy range considered, reflecting that both are surrounded by five O within 2.08 Å. However, owing to the different valence states there are intensity differences as well as energy shifts (some 1 eV) between these peaks. For example, the lower-energy peak has a larger intensity in the Mn(2) K-edge spectrum than in that for Mn(1). On the contrary, the three higher-energy peaks in the Mn(2) K-edge spectrum are less intense than in the Mn(1) K-edge spectrum. When experimental XANES spectra become available for YBaMn2O5, the above features should be able to confirm the two different valence states of Mn.

F. Hyperfine parameters

The calculation of hyperfine parameters is useful to characterize different atomic sites in a given material. Many experimental techniques, such as Mössbauer spectroscopy, nuclear magnetic and nuclear quadrupole resonance, and perturbed-angular-correlation measurements, are used to measure hyperfine parameters. Hyperfine parameters describe the interaction of a nucleus with the electric and magnetic fields created by its chemical environment. The resulting splitting of nuclear energy levels is determined by the product of a nuclear and an extranuclear quantity. In the case of the quadrupole interaction, it is the nuclear quadrupole moment that interacts with the electric-field gradient (EFG) produced by the charges outside the nucleus. 53 The EFG is a ground-state property of a material which depends sensitively on the asymmetry of the electronic charges. The direct relation between the EFG and the asphericity of the electron density in the vicinity of the probe nucleus enables one to estimate the quadrupole splitting and the degree of covalency or ionicity of the chemical bonds, provided the nuclear quadrupole moment is known. Quantities describing hyperfine interactions (e.g., EFG and isomer shift) are nowadays widely studied both experimentally and theoretically. Blaha et al. 54 have shown that the linear augmented plane wave (LAPW) method is able to predict EFGs in solids with high precision. The charge distribution of complex materials such as YBa2Cu3O7, YBa2Cu3O6.5 and YBa2Cu3O6 has been studied theoretically by Schwarz et al. 55 by this approach. In this study, we have attempted to establish the different valence states of Mn in YBaMn2O5 with the help of the EFG and the hyperfine field calculated using the FPLAPW method as embodied in the WIEN97 code. 30 The total hyperfine field (HFF) can be decomposed into three terms: a dominant Fermi contact term, a dipolar term and an orbital contribution.
We limit our consideration to the contact term, which in the non-relativistic limit is derived from the spin densities at the nuclear site; apart from constant prefactors, the contact field is proportional to the net spin density at the nucleus, $H_{c} \propto \rho_{\uparrow}(0) - \rho_{\downarrow}(0)$. The EFG is defined as the second derivative of the electrostatic potential at the nucleus, written as a traceless tensor. This tensor can be obtained from an integral over the nonspherical charge density ρ(r). For instance, the principal component V_zz is given by

$$V_{zz} = \int \rho(\mathbf{r})\, \frac{2P_{2}(\cos\theta)}{r^{3}}\, d^{3}r,$$

where P2 is the second-order Legendre polynomial. A more detailed description of the calculation of the EFG can be found elsewhere. 56 The calculated EFG and HFF at the atomic sites in YBaMn2O5 are given in Table II, which confirms that there is a finite difference in the values of both EFG and HFF between the two Mn atoms. So we can conclude that their charge distributions are quite different. The higher values of EFG and HFF for Mn2+ than for Mn3+ are justified because more charge is found on the former. This can be seen from the orbital-projected DOS as well as from the magnetic moments possessed by the two ions. The HFF for Mn3+ in LaMnO3 is found to be −198 kG, 36 which is quite close to the −179 kG found for Mn3+ in our case. Consequently we substantiate that Mn(1) corresponds to Mn3+. The two oxygen ions also differ mutually in their values of EFG and HFF (Table II), suggesting that the strength of the covalent bonds they form with Mn(1) and Mn(2) is different.

V. SUMMARY

Like the hole-doped REMnO3-based CMR materials, YBaMn2O5 also carries mixed-valence states of manganese, ferrimagnetic ordering and charge ordering, and apparently undergoes a combined insulator-to-metal and ferrimagnetic-to-ferromagnetic transition. Hence YBaMn2O5 may be a potential CMR material which deserves more attention. We have made a detailed investigation of the electronic properties of YBaMn2O5 using the full-potential LMTO method as well as the full-potential LAPW method and conclude the following.
1. The G-type ferrimagnetic insulating state is found to be the ground state, in accordance with experimental findings.
2. The existence of the two different types of Mn atoms is visualized by differences in the site- and orbital-projected DOS curves. In order to further emphasize the different valence states of Mn, we have calculated K-edge XANES spectra. For Mn as well as O, the existence of two types of valence-induced atomic species is established with the help of the K-edge spectra.
3. The occurrence of checker-board-type charge ordering and F-type orbital ordering is seen from the charge-density plots. The small size of Y3+ makes the Mn-O-Mn bond angle deviate from 180°, which in turn imposes a reduction in the eg bandwidth. The charge- and orbital-ordering features are believed to result from this perturbation of the eg orbitals.
4. As YBaMn2O5 is a ferrimagnetic insulator, it is useful to probe its optical properties for potential applications. We have analyzed the interband contributions to the optical properties with the help of the calculated band-structure features. We found large anisotropies in the optical spectra, originating from the ferrimagnetic ordering and the crystal-field splitting. No experimental optical study of YBaMn2O5 is hitherto available.
5. Hyperfine parameters such as the hyperfine field and electric-field gradients have also been calculated, showing very large differences in the computed values for the crystallographically different manganese and oxygen atoms. This substantiates that Mn exists in two different valence states in YBaMn2O5.
2018-11-01T07:34:21.365Z
2001-05-05T00:00:00.000
{ "year": 2001, "sha1": "7d4c40e27e56c44817cc2e42172faf854b33b521", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0105117", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "7d4c40e27e56c44817cc2e42172faf854b33b521", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics", "Materials Science" ] }
49541485
pes2o/s2orc
v3-fos-license
Illustrated versus non-illustrated anatomical test items in anatomy course tests and German Medical Licensing examinations (M1) Illustrated Multiple-choice questions (iMCQs) form an integral part of written tests in anatomy. In iMCQs, the written question refers to various types of figures, e.g. X-ray images, micrographs of histological sections, or drawings of anatomical structures. Since the inclusion of images in MCQs might affect item performance, we compared characteristics of anatomical items tested with iMCQs and non-iMCQs in seven tests of anatomy courses and in two written parts of the first section of the German Medical Licensing Examination (M1). In summary, we compared 25 iMCQs and 163 non-iMCQs from anatomy courses, and 27 iMCQs and 130 non-iMCQs from the written part of the M1, using a nonparametric test for unpaired samples. As a result, there were no significant differences in difficulty and discrimination levels between iMCQs and non-iMCQs; the same applied to an analysis stratified for MCQ formats. We conclude that the illustrated item format by itself does not seem to affect item difficulty. The present results are consistent with previous retrospective studies which showed no significant differences in test or item characteristics between iMCQs and non-iMCQs.

Introduction

Today, there are diverse resources for anatomy assessment. Before the introduction of the Multiple-choice (MC) format in the seventies, state examinations were viva voce [6]. Unstructured oral examinations lack good reliability. Structured oral examinations (SOE) can show good reliability using an item blueprint and a scoring template [13]. There are various formats of test items in written assessment. Currently, the most common is the MC format with four or five answer options. In Single-best answer (SBA) questions, there is only one best (correct) answer. SBA is the most popular MC format. In True/false (T/F) items, all correct answers (more than one) must be marked. Simple T/F items might be acceptable. Multiple T/F items with combinations of answer options, used in medical exams in the past, are no longer recommended [4]. The Extended-matching question (EMQ) includes an option list and at least two item stems, and for each stem, the examinee chooses the single best answer from the list. Multiple-choice examinations show high test reliability [6], [13]. Open questions can be answered by a written essay or by keywords (Short-answer question, SAQ). Open questions are more time-consuming than MC formats [6]. The Modified essay question (MEQ) is a structured variant of the essay format. In spotter/tag tests, MCQs or SAQs refer to marked (tagged) structures in specimens or images [1], [13]. Multiple-choice questions (MCQs) are widely used in medical exams. In addition, many medical textbooks nowadays include some self-assessment MCQs at the end of a chapter. The National Board of Medical Examiners (NBME) and other authors have published guidelines for the creation of MCQs [4], [8]. Visual resources in exam questions should be accurate, complete, relevant and unambiguous [5]. Instructions on how to produce visual material for MCQs and common pitfalls in anatomy MCQs have been published [1], [14]. Illustrated MCQs (iMCQs) form an integral part of anatomy tests. Different MCQ formats, e.g. SBA questions or EMQs, can be combined with illustrations. Various illustrations can be included, from X-ray or histological images to photographs of gross preparation specimens or illustrations of functional systems.
An item analysis shows the difficulty and discrimination of individual MCQs. The difficulty index is the proportion of participants choosing the correct answer. Item discrimination is the correlation between the item score and the test score (item-total correlation). Good MCQs have a high correlation coefficient [7], [11]. Previous studies did not find significant differences in item or test characteristics between iMCQs and non-iMCQs [3], [9], [12], [15], except for a study on final-year students tested with MCQs presenting a clinical problem. In this study on problem-based radiology questions, illustrated items requiring image interpretation were more difficult than questions testing recall of knowledge [10]. However, the integration of illustrations in MCQs might affect item difficulty and overall test difficulty. Therefore, the aim of the present study was to assess characteristics of illustrated and non-illustrated anatomical items from seven anatomy course tests and two written parts of the first section of the German Medical Licensing Examination (M1) in autumn 2015 and 2016.

Multiple-choice questions

MCQs from seven consecutive anatomy course tests from winter 2014 to summer 2016 provided the basis for this study. First- and second-year medical and dentistry students participated in the tests. A test with 30 MCQs was written at the end of course one (musculoskeletal system), course two (internal organs), course three (head and neck and neuroanatomy) and the anatomy seminar for medical students. Between 364 and 592 students participated in the anatomy course tests. Medical students of the Goethe-University Frankfurt wrote M1 examinations with 80 anatomy questions each in autumn 2015 and 2016, with 393 and 330 participants, respectively. Anatomy course tests included between 3 and 7 illustrated anatomical items, and the written parts of the M1 included 12 and 15, respectively. Exam papers were evaluated with EvaExam software (Electric Paper, Lüneburg, Germany). MCQs classified as doublets and iMCQs with identical illustrations were excluded from the study. Microsoft Excel was used to calculate item difficulty and discrimination from the raw data. The difficulty index was determined as the mean item score. Item discrimination was calculated as the Pearson product-moment correlation coefficient between the individual item score and the sum score of the remaining items (corrected item discrimination); a minimal computational sketch of these item statistics is given below. Item analyses of M1 questions were produced by, and are under the copyright of, the Institute for Medical and Pharmaceutical Exam Questions (IMPP, Mainz, Germany).

Statistical analysis

Data were inspected and tested for normal distribution (Q-Q plot, Shapiro-Wilk test). The Kolmogorov-Smirnov test for unpaired samples was used to compare groups of MCQs. Statistical analysis was performed with GraphPad Prism version 7.00 for Windows (GraphPad Software, La Jolla, California, USA). Data were plotted with median and range. A comparison of iMCQs and non-iMCQs stratified for MCQ formats was performed with the stratified van Elteren U-test (BiAS, Version 11.02, epsilon Verlag, 2016).

Results

From anatomy course tests, 25 iMCQs and 163 non-iMCQs were included in this study. IMCQs consisted of 13 histological and 5 radiological images (conventional X-ray or CT), 4 anatomical illustrations, 2 surface-anatomy pictures and 1 image of a gross brain section (see Figure 1, translation of the original question). MCQs followed the A-type format (one best answer and four distractors).
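The item statistics described in the methods can be reproduced with a few lines of code. The following is a minimal Python sketch under assumed inputs — a simulated 0/1 response matrix rather than the study's EvaExam/Excel data — showing the difficulty index, the corrected item-total correlation, and a two-sample Kolmogorov-Smirnov comparison of two item groups:

```python
import numpy as np
from scipy import stats

# Simulated 0/1 response matrix: rows = examinees, columns = items.
# Purely illustrative -- not the study's EvaExam/Excel data.
rng = np.random.default_rng(0)
responses = (rng.random((400, 30)) < 0.7).astype(int)

# Difficulty index: proportion of participants answering each item correctly.
difficulty = responses.mean(axis=0)

# Corrected item discrimination: Pearson correlation of each item score
# with the sum score of the remaining items.
total = responses.sum(axis=1)
discrimination = np.array([
    stats.pearsonr(responses[:, j], total - responses[:, j])[0]
    for j in range(responses.shape[1])
])

# Nonparametric two-sample comparison of two item groups
# (e.g., illustrated vs non-illustrated), analogous to the paper's approach.
illustrated = np.zeros(30, dtype=bool)
illustrated[:5] = True  # pretend the first five items carry images
print(stats.ks_2samp(difficulty[illustrated], difficulty[~illustrated]))
```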
Item difficulty and discrimination did not differ significantly between iMCQs and non-iMCQs (p>0.05) (see Figure 2); the same applied to the stratified analysis.

Discussion

Visual resources are widely used in anatomy teaching and performance assessment. Each anatomy course test includes some iMCQs, and they are part of the written part of the first section of the Medical Licensing Examination (M1). Thus, in the present study, we were interested in the performance of this item format. Therefore, we compared iMCQs and non-iMCQs in anatomy course tests and the written part of the M1. We found that iMCQs and non-iMCQs did not differ significantly in difficulty and discrimination. The fact that iMCQs and non-iMCQs are based on alternative sources of information, i.e. images and text, does not seem to affect item characteristics. IMCQs have been assessed previously. Hunt compared two sets of problem-based MCQs in radiology. One set included an image, the other a description of the image, e.g. a radiologist's report. Final-year students wrote the sets in two parallel exams. As a result, the set of items with visual content was significantly more difficult. In Hunt's view the results "are consistent with the belief that questions calling for interpretation of data or problem-solving require a higher level of performance or additional skill to that required for questions which supply written descriptions of that data" ([9], p. 420). In a study on Part 1 FRACS (Fellowship of the Royal Australasian College of Surgeons) exam questions, the authors compared 77 triplets of MCQs in anatomy and pathology. The MCQs presented four answer options. The triplets consisted of a visual and a verbal question of the same content and an additional verbal one of similar content. There were no significant differences in item difficulty and discrimination. The authors argued that their study was limited by a small sample size and that a lower competence in written English of non-native-speaker candidates in the FRACS exams might have influenced the results [3]. Vorstenbosch et al. compared 39 EMQs with either an answer list or a labelled anatomical illustration in the item stem. Two test versions were constructed and half of the students wrote each test. Students volunteered for this informal exam, which was similar in circumstances to an official exam. Using a label, some questions were more and some less difficult compared with the non-labelled version. Contrary to our study, the authors used extended-matching items instead of MCQs and created closely matched items (labelled image vs answer list). Finally, they were able to compare the overall difficulty and reliability of separate test versions. Apart from variable individual effects, the authors did not find overall differences between test versions [15]. Holland et al. reviewed histology exams from three consecutive years with 95 iMCQs and 100 non-iMCQs, and found no significant differences in item difficulty or discrimination [9]. In the present study, we included 25 items from all anatomical subjects, including 13 histology questions. Similarly, in a retrospective analysis of text-only items and items with reference images in anatomy examinations, there were no significant differences in difficulty or discrimination between item formats. In this study, the illustration was an addition to the item and did not replace written content; thus images "were considered not to be critical to answering the item" [12, page 3]. Concerning study design, the studies by Hunt and Vorstenbosch were trial or informal examinations, respectively.
Students were allocated at random to test groups, and students were not informed about the nature of the examination. Though it was an informal test, test conditions were comparable to an official exam [10]. Each student answered items in both formats [10], [15]. The studies by Buzzard and Hunt included radiological items, which went beyond recall of knowledge and asked for thinking in a clinical context (see item examples) [2], [10]. Hunt categorized items according to the clinical setting, supplementary data, interpretation, diagnosis and treatment presented in the question stem and options. In all subgroups, items were more difficult in the illustrated format [10]. In the present study, most of the MCQs cover basic anatomical knowledge on a lower cognitive level. Hunt showed the increase and decrease in difficulty and discrimination of items created in pairs: 43 out of 70 item pairs increased in difficulty [10]. In the present study, we compared formats of independent items without a pairwise allocation. In addition, we stratified for MCQ formats (wording and structure of item stems and options) (see Figure 3). However, the integration of illustrations in MCQs had no significant effect on item difficulty and discrimination.

Conclusion

In conclusion, iMCQs can be used whenever appropriate. IMCQs can motivate students who are good at visual knowledge and thinking and can be written for lower and higher cognitive levels of exam questions. IMCQs are used to reflect teaching subjects and provide feedback about the effectiveness of teaching. Thereby, the introduction of additional visual teaching material can be evaluated by corresponding iMCQs. When using iMCQs, the images must be of sufficient quality and size and accurately labelled. According to constructive alignment, a test blueprint helps in choosing iMCQs for the exam. Different kinds of illustrations (histological images, X-rays) will reflect the diversity of visual input in medicine. Checking the quality of iMCQs will also improve students' learning from trial exam questions. Finally, the results of this study might reassure question writers to use iMCQs.
2018-07-01T00:19:35.034Z
2018-05-15T00:00:00.000
{ "year": 2018, "sha1": "f16c1fa50731ac322d86b8adfecc70a7909d2ec1", "oa_license": "CCBY", "oa_url": null, "oa_status": null, "pdf_src": "PubMedCentral", "pdf_hash": "f16c1fa50731ac322d86b8adfecc70a7909d2ec1", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Psychology", "Medicine" ] }
54457969
pes2o/s2orc
v3-fos-license
Focused ultrasound for the treatment of bone metastases: effectiveness and feasibility Background To evaluate the effectiveness and feasibility of high-intensity focused ultrasound (HIFU) for the treatment of bone metastases. Methods A single-center prospective study was conducted involving 17 consecutive patients with symptomatic bone metastases. Patients were treated by focused ultrasound (FUS) performed with magnetic resonance (MR) guidance. Surgical treatment or radiotherapy was not indicated for the patients who underwent FUS. Lesions were located in the appendicular and axial skeleton and consisted of secondary symptomatic lesions. The clinical course of pain was evaluated using the Visual Analog Scale (VAS) before treatment, at 1 week, and at 1 month after treatment, and the Oral Morphine Equivalent Daily Dose (OMEDD) was also recorded. We used the Wilcoxon signed-rank test to assess change in patient pain (R CRAN software V 3.1.1). Results We observed a significant decrease in the pain felt by patients between pre-procedure and 1 week post-procedure (p = 2.9 × 10−4), and between pre-procedure and 1 month post-procedure (p = 3 × 10−4). The proportion of responders according to the International Bone Metastases Consensus Working Party was: partial response 50% (8/16) and complete response 37.5% (6/16). Conclusions HIFU under MR guidance seems to be an effective and safe procedure in the treatment of symptomatic bone lesions in patients suffering from metastatic disease. A significant decrease in patient pain was observed. Trial registration NCT01091883. Registered 24 March 2010. Level of evidence: Level 3.

Background

For 50 years, high-intensity focused ultrasound (HIFU) has been a subject of interest for medical research [1]. HIFU triggers selective tissue necrosis in a very well-defined volume, at a variable distance from the transducer, through heating or cavitation [2]. Its potential as a non-invasive thermal ablation treatment, using real-time imaging (magnetic resonance or ultrasound) for target definition, treatment planning and closed-loop control of energy deposition, has been utilized in many settings, including the treatment of tumors of the liver, kidney, breast, uterus, pancreas and bones, and for the relief of chronic pain of malignant origin [3][4][5]. Thanks to magnetic resonance imaging (MRI) guidance, real-time thermal feedback from the heated zones makes it possible to ablate targeted tissue without damaging normal structures. The precision of the technique and the immediate feedback obtained make it an attractive and safe alternative to surgical or radiation therapy for both benign and malignant tumors [6]. Clinically, the sites accessible for HIFU treatment are limited by the need for a suitably wide and naturally available acoustic window [7]. Traditionally, HIFU treatment consists of multiple single-focal-point sonications [8,9]. In volumetric ablation, the focal spot is electronically steered along multiple concentric circles of increasing diameter and is thus more energy-efficient than point-by-point ablation [10]. In 2011, the Magnetic Resonance-guided Focused Ultrasound (MRgFUS) system received the CE (European Conformity) marking for the treatment of painful bone metastases [4]. Pain due to bone metastases is a common clinical problem in cancer patients [11]. The primary palliative treatment for patients with painful bone metastases is external beam radiation therapy, which achieves effective pain control in around 60-74% of patients [12][13][14].
More than 40% of patients are still not controlled after a second course of irradiation [15]. Magnetic resonance-guided high-intensity focused ultrasound (MR-HIFU) has recently emerged as an effective treatment option for painful bone metastases by means of periosteal nerve-ending ablation. However, there are few articles in the literature related to HIFU for this indication [4,[16][17][18][19]. The objective of our study was to describe our experience in the treatment of painful bone metastases using volumetric MR-HIFU ablation and to assess the technical feasibility and safety of the procedure [20].

Patient population and selection

We present a prospective observational study of 17 consecutive patients (seven males, ten females, mean age: 61 years) suffering from symptomatic bone metastases of the appendicular skeleton. From October 2012 to March 2018, 17 patients with metastatic disease were enrolled. Most patients were suffering from intense inflammatory pain, often associated with mechanical pain and disability for walking or standing, depending on the localization of the lesions. The lesions were located in the appendicular skeleton, involving the tibial diaphysis (two cases), femoral diaphysis (two cases), iliac bone (four cases), clavicle (one case), scapula (one case) and humerus (one case), and in the axial skeleton, involving the ribs (six cases). All patients had exhausted maximum radiotherapy and analgesic treatment options for their painful bone metastasis. For inclusion, the pain arising from the lesion had to be self-rated by the patient as ≥5 on an 11-point numeric visual analog scale (VAS) from 0 (no pain) to 10 (worst imaginable pain) [21]. Exclusion criteria were the presence of >3 painful bone metastases, metastases located in the spine, sternum or skull, contraindications to MR imaging or procedural sedation and analgesia (PSA), presence of a potentially unstable fracture at the site of the lesion, and lesion inaccessibility (≤1 cm distance between the lesion and major nerves, joints, blood vessels or organs) [16]. Pretreatment magnetic resonance imaging (GE Healthcare MRI, 1.5 Tesla, Milwaukee, WI) was available for all patients in order to confirm the location of the bone metastases. MR images were evaluated by a radiologist with 10 years' experience to determine the treatment accessibility of the target lesion. Final treatment eligibility was determined in a multidisciplinary setting. The MRI protocol included T1-weighted (T1W) turbo spin echo (TSE) and T2-weighted (T2W) sequences in two orientations and fat-suppressed T1W (SPIR) sequences in two orientations after intravenous administration of gadobutrol, a gadolinium-based contrast agent (Gadovist, Bayer Pharma AG, Berlin, Germany, 0.1 mmol/kg). Approval was obtained from the institutional review board of the Centre Antoine Lacassagne (Nice, France), and written informed consent for the treatment and for the use of their anonymized data for this study was obtained from each patient.

MR-HIFU system ablation

Treatments were performed by an interventional radiologist (with 5 years' experience) using the MR-HIFU system (ExAblate 2000® MRgFUS system, Insightec, Israel). All procedures were performed under general anesthesia. Patients were placed in a prone or supine position depending on the lesion location, in order to be as close as possible to the transducer for optimal treatment.
After identification of the target lesion, sonications were delivered to the patient, with the number of sonications depending on the lesion size, and were adjusted in real time using temperature control at the lesion site (Fig 1). The number of sonications delivered in our study ranged between 8 (smallest lesion) and 27 (biggest lesion). The duration of each sonication was 15 s. The average duration of the entire procedure was 2 h. An immediate post-operative MRI was performed at the end of each procedure to evaluate bone metastasis destruction, including a T1-weighted fat-suppressed sequence after gadolinium injection. The non-enhanced area after treatment corresponded to the necrotic zone.

Follow-up and response assessment

A prospective follow-up was done, consisting of post-operative evaluations at 1 week and at 1 month to assess the pain felt by the patients. Quantification of pain was made by each subject on an 11-point numeric visual analog scale (VAS) with values from 0 to 10 (where 10 indicates the strongest pain ever experienced and 0 indicates absence of pain) and was supervised by an independent evaluator. Pain evaluation was made specifically at the anatomical site treated by focused ultrasound (if there were other pain sites, they were not taken into account). A difference in VAS >2 points was considered a clinically significant result [21]. We also recorded the oral morphine equivalent daily dose (OMEDD) before treatment and 1 month after treatment. Partial response was defined as either a ≥2-point decrease in VAS at the treated site with no increase in OMEDD, or as an OMEDD reduction of 25% or more from baseline without an increase in pain. Complete response was defined as a pain score of 0 at the treated site with no simultaneous increase in OMEDD. Pain progression was defined as a ≥2-point increase at the treated site with stable OMEDD, or an increase of 25% or more in OMEDD compared with baseline with the pain score stable or 1 point above baseline [16]. A clinical examination was made before treatment and at 1 week and at 1 month after treatment to evaluate pain evolution.

Statistical analysis

The VAS score was measured at these three follow-up examinations. OMEDD was recorded at baseline and after 1 month. Pre- and post-operative VAS and OMEDD were compared using the non-parametric Wilcoxon signed-rank test for paired data; a sketch of this analysis is given below. P < 0.05 was considered statistically significant. Confidence intervals were computed by bootstrapping the data. Statistical analyses were performed using R CRAN Software (Version 3.1.1).

Results

Data are summarized in Table 1. Procedure Treatment was technically successful in 17 cases and clinically successful in 16 cases according to VAS (Fig 1). After the MR-HIFU procedure, all lesions were totally or partially destroyed (Figs 2, 3, 4). The feasibility of the procedure was 100% in our study. One case was recorded in which the patient experienced no relief of his pain following the procedure. We observed no immediate or delayed major complications, in particular no skin burns. One patient received no morphine treatment and was therefore not included in the OMEDD analysis. Follow-up Compared with the onset of treatment, all patients experienced a decrease in pain after 1 week and after 1 month. More than 40% of the cohort reported no pain at all after 1 month. The average pain score of the 17 patients was 7.53 (SD: 1.33) before treatment, 2.29 (SD: 1.86) 1 week after treatment, and 1.88 (SD: 1.99) 1 month after treatment.
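To make the pain analysis concrete, the following minimal sketch reproduces the type of computation described above, in Python rather than the authors' R workflow; the paired VAS values are invented placeholders, not the study data:

```python
import numpy as np
from scipy import stats

# Placeholder paired VAS scores (0-10) for 17 patients; illustrative only.
vas_pre = np.array([8, 7, 9, 6, 8, 7, 8, 9, 6, 7, 8, 9, 7, 8, 6, 9, 8])
vas_1mo = np.array([2, 0, 3, 1, 2, 0, 4, 2, 1, 0, 3, 2, 1, 2, 0, 5, 2])

# Paired non-parametric comparison (Wilcoxon signed-rank test), as in the paper.
print(stats.wilcoxon(vas_pre, vas_1mo))

# Bootstrap 95% confidence interval for the mean decrease in pain.
diffs = vas_pre - vas_1mo
rng = np.random.default_rng(42)
boot_means = np.array([rng.choice(diffs, size=diffs.size, replace=True).mean()
                       for _ in range(10_000)])
print(np.percentile(boot_means, [2.5, 97.5]))
```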
Our results show a significant decrease in the pain felt by patients between before the procedure and 1 week following the procedure (p = 2.94 × 10−4), and between before the procedure and 1 month following the procedure (p = 2.99 × 10−4). All patients, except patient 2, had decreasing or stable pain between 1 week and 1 month. Sixteen of the 17 patients were satisfied with their long-lasting result following the procedure and would recommend the intervention to relatives. Mean OMEDD before treatment was 270.6. Four patients died several weeks after treatment due to their primary diseases. Progression of pain was observed in 12.5% (2/16) of patients. One patient who experienced pain progression was documented to have not been properly treated for pain. The other patient experiencing pain progression had a VAS score of zero but had a significantly higher OMEDD after 1 month.

Discussion

To our knowledge, there are only a few articles in the literature demonstrating the benefit of HIFU with MR guidance for the palliative treatment of bone metastases [16,22,23]. Our study shows a significant decrease in patient pain after treatment by HIFU with MRI guidance (p < 0.05) at 1 week and at 1 month post procedure. Ten elderly patients were significantly relieved of their pain following the procedure. These patients had several co-morbidities and had exhausted maximum radiotherapeutic and analgesic treatment options for their painful bone metastases. All suffered intense inflammatory pain, often associated with mechanical pain, severely decreasing their quality of life. The procedure indication was evaluated very carefully for each patient, as the management of symptomatic bone metastases by HIFU depends largely on clinical symptoms and the degree of pain felt by patients. All the patients suffered from a symptomatic bone lesion. The association of bone marrow edema and enhancement of the lesion on the MRI performed before treatment indicated the concordance between clinical symptoms and imaging. The aim of treatment was to obtain pain palliation and not total lesion destruction. We believe that correct selection of patients is crucial. Indeed, we recorded one patient who experienced no relief of his pain and for whom the effectiveness of the denervation treatment was inconclusive, an observation which argues in favor of a multifaceted etiology of pain. The feasibility of the HIFU procedure was 100% in our study. All procedures were performed under general anesthesia in order to ensure greater patient comfort, since the interventions are lengthy and can be painful during sonications. This procedure can be performed under local anesthesia if necessary, in case of contraindication to general anesthesia or if the patient wishes, by applying good analgesic sedation before the procedure [22]. Hurwitz et al. recently reported the first completed phase III randomized trial investigating MRgFUS in patients with painful bone metastases. They showed a response rate of 64.3% in the MRgFUS arm compared with 20.0% in the placebo arm (P < .001) at 3 months [1]. Napoli et al. also showed a slightly higher response rate of 88.9% at 3 months post-treatment [24]. In the recent literature, an international consensus statement recognized MRgFUS as a safe and effective secondary treatment option in painful radiation-refractory bone metastases outside the spine [25].
MRgFUS may be used alone or in combination with other treatments such as cementoplasty or radiotherapy. These therapeutic techniques appear to be complementary for oncology patients suffering from metastatic disease and can be used in combination for optimal antalgic and therapeutic effectiveness. Further evidence from large randomized controlled studies is needed to establish MRgFUS as a possible palliative treatment of bone metastases alongside other available therapeutic options [24]. HIFU seems to be a safe and effective treatment procedure, as no immediate or delayed major complications were observed in our study. To improve pain control during the intervention, patients were treated by a direct approach with focused ablation of the periosteum, as this is the most highly innervated component of mature bone tissue. In the literature, several possible ablation approaches have been described for HIFU: the "near-field approach" in patients with (partially) intact cortical bone at the targeted lesion, in which treatment cells are initially positioned behind the cortical bone; and the "direct approach", in which treatment cells are positioned on the bone/soft-tissue interface. Although both ablation approaches can induce thermal ablation of the bone/soft-tissue interface, the direct approach requires lower sonication energies compared with the near-field approach, thus minimizing the risk of thermal damage beyond the targeted volume [16]. HIFU treatment also seems to induce a decrease in serum immunosuppressive cytokines such as vascular endothelial growth factor (VEGF) in patients with solid tumors [26]. For optimal effectiveness, the benefit of the HIFU procedure should be estimated in terms of patient pain. However, this can be difficult, as pain can be multifactorial in oncology patients. Bone metastases are common painful lesions in oncology patients. HIFU treatment enables us to relieve the pain felt by patients but does not promote bone consolidation, which is often needed in such fragile patients. In these circumstances, the benefit provided by a therapeutic association between different modalities such as cementoplasty, radiotherapy and immunotherapy becomes paramount [11,[27][28][29]. In MR-HIFU, image guidance is crucial for treatment planning and real-time temperature monitoring, as lethal cell damage occurs when temperatures >55°C are maintained for longer than 1 s [30]. The measured temperatures represent an approximation of the true temperature of the bone/soft-tissue interface, as only temperature differences occurring in aqueous soft tissue adjacent to the cortical bone can be measured. The method is also sensitive to magnetic field disturbances and artifacts, and partial volume effects may induce a degree of inaccuracy in the temperature estimates [31][32][33][34] (Fig 5). We observed one patient who presented post-procedure superficial skin irritation, which we treated with anti-inflammatories for 5 days. During the long-term follow-up, four patients in our series died as a result of advanced metastatic disease. Our study has several limitations. First, we did not compare our results with a control group treated conservatively or with radiation therapy alone. In fact, it was difficult not to treat demanding patients suffering intense pain and to whom we could propose a safe and efficient treatment option. Second, our patient sample was small and the follow-up period was relatively short; long-term clinical outcomes still need to be evaluated. MR-HIFU also has its limitations, as the procedure is time-consuming and required general anesthesia in our study.
The cost of the technique and its availability are also limiting factors. The indications and benefits of MR-guided HIFU in the treatment of bone metastases should be clearly defined for routine use by interventional radiologists, while new indications for the technique are currently under investigation [35,36]. In conclusion, MR-guided HIFU seems to be a safe and efficient therapeutic option for patients suffering from bone metastases. The technique can be used alone or in combination with other treatments such as cementoplasty or radiotherapy for palliative treatment in metastatic disease. Patients should be carefully screened for optimal therapeutic effectiveness of the procedure. Future research should include a large, well-designed cohort study with longer follow-up, in which patients with persistent metastatic bone pain are treated. The benefit-to-risk ratio seems very positive, with a significant decrease in patient pain and the advantages of a non-invasive procedure. This non-invasive interventional radiology technique appears to be a promising additional tool for the management of patients in oncology.
2018-12-17T19:28:17.455Z
2018-11-30T00:00:00.000
{ "year": 2018, "sha1": "64f432a313da22dd5796e24763c495dec7cc4527", "oa_license": "CCBY", "oa_url": "https://jtultrasound.biomedcentral.com/track/pdf/10.1186/s40349-018-0117-3", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "64f432a313da22dd5796e24763c495dec7cc4527", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
10084271
pes2o/s2orc
v3-fos-license
Impact of protein energy wasting status on survival among Afro-Caribbean hemodialysis patients: a 3-year prospective study Background We assessed the prognostic value of protein-energy wasting (PEW) on mortality in Afro-Caribbean MHD patients and analysed how diabetes, cardiovascular disease (CVD) and inflammation modified the predictive power of a severe wasting state. Method A 3-year prospective study was conducted in 216 patients from December 2011. We used four criteria from the nomenclature for PEW proposed by the International Society of Renal Nutrition and Metabolism in 2008: serum albumin ≤38 g/L, body mass index (BMI) ≤23 kg/m2, serum creatinine ≤818 µmol/L and protein intake assessed by nPCR ≤0.8 g/kg/day. PEW status was categorized according to the number of criteria. Cox regression analyses were used. Results Forty deaths (18.5 %) occurred, 97.5 % with a CV cause. Deaths were distributed as follows: 7.4 % in normal nutritional status, 13.2 % in slight wasting (1 PEW criterion), 28 % in moderate wasting (2 criteria) and 50 % in severe wasting (3-4 criteria). Among the PEW markers, low serum albumin (HR 3.18; P = 0.001) and low BMI (HR 1.97; P = 0.034) were the most significant predictors of death. Among the PEW status categories, moderate wasting (HR 3.43; P = 0.021) and severe wasting (HR 6.59; P = 0.001) were significant predictors of death. Diabetes, CVD and inflammation were all additive in predicting death in association with severe wasting, with the strongest HR (7.76; P < 0.001) for diabetic patients. Conclusions The nomenclature for PEW predicts mortality in our Afro-Caribbean MHD patients and helps to identify patients at risk of severe wasting in order to provide adequate nutritional support. Electronic supplementary material The online version of this article (doi:10.1186/s40064-015-1257-3) contains supplementary material, which is available to authorized users. In patients on maintenance hemodialysis (MHD), cardiovascular disease, diabetes mellitus and inflammation contribute to the high death rate (Goodkin et al. 2004). Uremic malnutrition, also called protein-energy wasting (PEW), corresponding to a decrease in energy and body protein, was consistently associated with mortality in different populations (Kovesdy and Kalantar-Zadeh 2009;Noori et al. 2011;Streja et al. 2011) and appears as the strongest risk factor for adverse outcome and death (Kovesdy and Kalantar-Zadeh 2009). In a nomenclature for PEW proposed by the International Society of Renal Nutrition and Metabolism (Fouque et al. 2008), several parameters among four established categories (biochemical criteria; body mass and composition; muscle mass; and dietary intakes) are indicative of PEW in individuals with kidney disease. Ethnic disparities have been reported for the PEW biochemical markers (Noori et al. 2011) but also for muscle mass (Gallagher et al. 1985;Kramer et al. 2008). Recently, a score derived from this nomenclature predicted survival in the European ARNOS prospective dialysis cohort, but the need to evaluate the predictive ability of this PEW classification for mortality in non-Western populations was highlighted (Moreau-Gaudry et al. 2014). In the present study we assessed the prognostic value of PEW on mortality in Afro-Caribbean MHD patients and analyzed how several factors of interest modified the predictive power of a severe wasting state.
Patient population

In the present study, we included patients who had undergone MHD treatment for more than 3 months and were evaluated in December 2011 in the AUDRA centre (one of the dialysis facilities on the island of Guadeloupe, France). They were prospectively followed up from December 31, 2011, to December 31, 2014. Standard dialysis treatment consisted of three weekly sessions using bicarbonate buffer and a synthetic high-flux membrane. Weekly dialysis time was 12 h in 83 % of patients. Dialysis dose delivery was estimated from the urea Kt/V (urea clearance over time).

Data collection

Demographic and clinical data such as age, sex, dialysis vintage (time on HD), anthropometric parameters, cardiovascular risk factors, history of cardiovascular events and nutritional supplementation were recorded. Body mass index (BMI) in kg/m2 was calculated as dry weight divided by height squared. Dialysis vintage was defined as the duration of time between the first day of HD treatment and December 31, 2011. Interdialytic weight gain (IDWG) was calculated as the difference between predialysis body weight and the preceding postdialysis body weight during the month preceding the start of the study (December 2011) and the month preceding the end of follow-up for each patient. The mean value of the 12 IDWG measurements during the month was taken into account. Dates and primary causes of death were recorded. The survival time was defined as the number of days between December 31, 2011 and the date of death or the date of censoring due to loss to follow-up (transfer to another dialysis centre or kidney transplantation) or the end of follow-up at December 31, 2014.

Laboratory measures

All laboratory values were measured by automated and standardized methods. Predialysis samples were collected for serum albumin, transthyretin, creatinine and highly sensitive C-reactive protein (hsCRP) measurements. Serum albumin, transthyretin and creatinine concentrations were determined. The normalized protein catabolic rate (nPCR) (Aparicio et al. 1999;Daugirdas 1989) was used to assess the dietary protein intake (see the note below).

Definition of clinical factors and events

PEW: One component in each of the four categories of the wasting syndrome (Fouque et al. 2008) was retained: serum albumin ≤38 g/L, BMI ≤23 kg/m2, SCr ≤25th percentile (818 µmol/L) and nPCR ≤0.8 g/kg/day. Slight wasting was defined when one criterion for PEW was present, moderate wasting when two criteria for PEW were present, and severe wasting in the presence of three or four criteria for PEW; a minimal sketch of this classification is also given below.
• Inflammation was defined as a serum concentration of CRP of >5 mg/L.
• Cardiovascular disease (CVD) was defined as pre-existing coronary artery disease or stroke. Pre-existing CV complications included coronary events and strokes occurring before December 2011.
• Weight loss was defined as a loss of 5 % over 3 months (Fouque et al. 2008).
Outcome data, considered as death of all causes, were obtained from medical records.

Statistical methods

Data are presented as percentages for categorical variables and as means ± standard deviations (SD) for continuous variables. Statistical methods included the χ2 test and ANCOVA adjusted for age and sex. Multivariate Cox proportional hazards models were performed to analyze survival and to study the associations of PEW markers and PEW status, as independent variables, with death of all causes.
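The paper cites Daugirdas (1989) for nPCR. A commonly used single-pool expression for thrice-weekly dialysis — stated here as the standard literature form, and an assumption about the authors' exact variant — is

$$\mathrm{nPCR} = \frac{C_{0}}{36.3 + 5.48\,\mathrm{spKt/V} + 53.5/\mathrm{spKt/V}} + 0.168,$$

with C0 the predialysis blood urea nitrogen in mg/dL. The PEW categorization itself is a simple criterion count; a minimal Python sketch with the paper's thresholds (the helper name and example values are invented for illustration):

```python
def pew_status(albumin_g_l, bmi_kg_m2, scr_umol_l, npcr_g_kg_day):
    """Count the four study criteria and map the count to a PEW category."""
    criteria_met = sum([
        albumin_g_l <= 38,      # biochemistry: serum albumin
        bmi_kg_m2 <= 23,        # body mass: BMI
        scr_umol_l <= 818,      # muscle-mass surrogate: serum creatinine
        npcr_g_kg_day <= 0.8,   # dietary intake: nPCR
    ])
    labels = {0: "normal nutritional status", 1: "slight wasting",
              2: "moderate wasting", 3: "severe wasting", 4: "severe wasting"}
    return labels[criteria_met]

print(pew_status(36, 21.5, 900, 0.9))  # -> 'moderate wasting'
```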
The PEW markers (serum albumin ≤38 g/L, BMI ≤23 kg/m2, SCr ≤25th percentile in µmol/L and nPCR ≤0.8 g/kg/day) and the PEW status (normal nutritional status, slight wasting, moderate wasting and severe wasting) were studied separately. The variables of adjustment were: age (<60 or ≥60 years), gender, dialysis vintage (<5 or ≥5 years), diabetes, pre-existing CVD and hsCRP (<5 or ≥5 mg/L). We also explored the potential additive association between severe wasting and three other risk factors of death in HD patients (pre-existing CVD, diabetes and inflammation).

Results

Overall, 216 patients were included in the analysis. The population was 56.5 % male. All the patients had diuresis lower than 500 mL/day (i.e. no residual renal function). The relevant demographic, clinical and laboratory data of the study patients are presented in Additional file 1: Table S1. Mean age at baseline was 60 years and median dialysis vintage was 6.4 years. Average Kt/V was 1.31. Inflammation (CRP >5 mg/L) was found in 42.6 % of patients. Diabetic patients were more likely to have previous CVD than non-diabetic patients (23.5 vs. 8.1 %; P = 0.002). Thirty patients were censored: 22 following kidney transplantation and 8 following transfer to another HD centre. Forty deaths (18.5 %) occurred during the 3-year follow-up, 97.5 % with a CV cause and 2.5 % with a non-CV cause. Comparing the baseline clinical and laboratory data of patients who died and those who survived, patients who died were more likely to be older (P < 0.001), to have diabetes (P < 0.001), a weight loss over 3 months (P < 0.001) and previous cardiovascular disease (P = 0.006), and to have lower serum albumin (P < 0.001), lower serum creatinine values (P = 0.013) and a higher frequency of inflammation (P = 0.014) (Additional file 1: Table S1). Additional file 1: Table S2 details the frequencies of the nutritional markers and the nutritional status at baseline. Significant differences were noted between patients who died and who survived for the biochemical criteria (P < 0.001), body mass (P = 0.025) and the surrogate of muscle mass (P = 0.015), but not for dietary intakes (P = 0.114). A normal nutritional status was more frequently noted in those who survived (P = 0.007), whereas moderate or severe wasting was more frequent in patients who died (P = 0.022 and P < 0.001, respectively). Nutritional supplements were more frequently prescribed in this last group (P < 0.001). At the end of follow-up, 84.3 % of HD patients were dialyzed with an arteriovenous fistula and 15.7 % with a central venous catheter (CVC). There was no relationship between the type of vascular access and the nutritional status (CVC 15.4 vs 15.9 %; P = 0.925 in subjects with normal nutritional status and in those having 1 or more nutritional markers, respectively). Additional file 1: Table S3 presents the IDWG according to the presence/absence of the nutritional markers. The IDWG at the start of the study and at the end of follow-up was significantly lower in the presence of nutritional markers than in their absence, except for serum albumin ≤38 g/L. Additional file 1: Table S4 presents the adjusted HRs (95 % CI) of death for the PEW markers and for the PEW status. Both models were adjusted for the following parameters: age ≥60 years, gender, dialysis vintage ≥5 years, diabetes, pre-existing CVD, CRP >5 mg/L; a sketch of how such an adjusted model can be fitted is given below.
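For illustration, an adjusted Cox model of this kind can be fitted in a few lines. The sketch below uses Python and the lifelines library rather than the authors' actual software, and the data frame is simulated with illustrative variable names — none of these values come from the cohort:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Simulated placeholder data; variable names mirror the paper's adjustment set.
rng = np.random.default_rng(1)
n = 216
df = pd.DataFrame({
    "time_days": rng.integers(30, 1096, n),   # follow-up time over 3 years
    "death": rng.integers(0, 2, n),           # 1 = died, 0 = censored
    "age_ge60": rng.integers(0, 2, n),
    "male": rng.integers(0, 2, n),
    "vintage_ge5y": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "preexisting_cvd": rng.integers(0, 2, n),
    "crp_gt5": rng.integers(0, 2, n),
    "albumin_le38": rng.integers(0, 2, n),    # PEW marker of interest
    "bmi_le23": rng.integers(0, 2, n),        # PEW marker of interest
})

# All non-duration/event columns enter the model as covariates,
# so the PEW markers are adjusted for the other risk factors.
cph = CoxPHFitter()
cph.fit(df, duration_col="time_days", event_col="death")
cph.print_summary()  # hazard ratios (exp(coef)) with 95% CIs
```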
Concerning the model including the four PEW markers (model 1): the multivariate Cox regression showed significant associations between age ≥60 years (P = 0.049), low BMI (P = 0.034), low serum albumin (P = 0.001) and mortality. The association was nearly significant for low serum creatinine (P = 0.051), but not significant for gender, diabetes, CRP >5 mg/L or low protein intake. Concerning the model including the PEW status (model 2): significant associations were noted between diabetes (P = 0.034), moderate PEW (P = 0.021), severe PEW (P = 0.001) and mortality. The association was nearly significant for age (P = 0.062) and CRP ≥5 mg/L (P = 0.081), but not significant for gender, pre-existing CVD or slight wasting. The mean survival time [95 % confidence interval (CI)] was 31.6 (30.2, 32.9) months in the overall study population. This survival time was, respectively, 23.5 (17.6, 29.5) months in patients who had severe wasting, 30.5 (27.7, 32.3) months in those who had moderate wasting, 32.5 (30.3, 34.6) months in those who had slight wasting, and 34.2 (32.6, 35.8) months in those who had a normal nutritional status. These times were significantly different between the four groups (P < 0.001, log-rank test). There was no statistically significant difference in survival between patients with slight wasting and those with a normal nutritional status (P = 0.18, log-rank test). The survival curves according to PEW status are shown in Fig. 1. Since cardiovascular disease, diabetes mellitus and inflammation contribute to the high death rate in HD patients, we evaluated the potential additive effects of these comorbid factors on severe wasting (Fig. 2). These factors were all additive in predicting death in our HD patients, but diabetes appeared as the strongest predictive factor of death in association with severe wasting. Compared with patients without severe wasting and without diabetes, patients who had both of these factors exhibited an HR of death of 7.76 (P < 0.001), whereas this HR was 2.25 (P = 0.184) for patients having severe wasting but no diabetes and 2.13 (P = 0.052) for those having diabetes but no severe wasting.

Discussion

In this first study examining mortality according to nutritional status in a cohort of 216 Afro-Caribbean MHD patients who were followed for up to 3 years, we found that lower BMI and lower serum albumin levels predicted worse survival. We also found a strong association between severe wasting status (three or more PEW criteria) and mortality. The nutritional and metabolic derangements observed in advanced chronic kidney disease cannot be attributed to any one single factor, but a common feature is the high rate of protein degradation (Marcelli et al. 2015). In the present study, we used four parameters among the established categories proposed by the International Society of Renal Nutrition and Metabolism to identify PEW (Fouque et al. 2008). Body mass index ≤23 kg/m2, serum albumin ≤38 g/L, SCr ≤25th percentile (in µmol/L) and nPCR ≤0.8 g/kg/day were the selected criteria. Except for daily protein intake, all these parameters were significantly lower, at baseline, in patients who died during the study period. The frequency of weight loss at baseline (5 % over 3 months) was higher in patients who died than in the others (22.5 vs 4 %, P < 0.001). According to the expert panel, a weight loss of 5 % over 3 months might also be considered an indicator of PEW (Fouque et al.
2008). In addition, our patients having a BMI ≤23 kg/m2 had an HR of death about two times higher than the others. By contrast with the general population, overweight is paradoxically associated with improved survival in MHD patients, at least in the short term (Park et al. 2013). While obesity provides some protection against malnutrition, a low BMI may in part reflect undernutrition. Among the four criteria selected for the diagnosis of PEW, a low serum albumin concentration was the strongest predictor of death in our study. Several studies have demonstrated the relationship between this criterion and outcome (Lowrie and Lew 1990;Owen et al. 1993). Serum albumin is an indicator of visceral protein stores and is affected by protein intake but also by several factors including inflammation and other comorbidities (Kaysen 1998). Some authors in a previous study reported that patients with a serum albumin level below 35 g/L had a relative mortality risk of 4 (Lowrie et al. 1995). This relative risk was 3.49 for serum albumin levels below 38 g/L in our study population. Serum creatinine is considered a surrogate of muscle mass, especially in HD patients without residual renal function (Kalantar-Zadeh et al. 2012), and all patients in our study had no residual renal function. In a previous study among HD patients, creatinine levels below 8.0 mg/dL (704 µmol/L) predicted higher mortality than levels above 10 mg/dL (880 µmol/L) (Kalantar-Zadeh et al. 2010). Since there is no recognized threshold for low creatinine levels, we chose in our study the 25th percentile of the whole distribution, corresponding to 818 µmol/L (9.3 mg/dL). Patients who died during the study period had lower baseline creatinine levels than the others. Those who had SCr below 818 µmol/L had a hazard ratio of death two times higher than those with SCr above this threshold. It has been reported that individuals of African ancestry have higher measures of muscle and lean mass, with higher levels of SCr, than whites (Noori et al. 2011), probably in relation to higher protein intake. The mean daily protein intake at baseline was higher than 1 g/kg/day for our patients, without significant difference between the two groups. A protein catabolic rate below 1 g/kg/day was previously associated with increased morbidity and mortality (Acchiardo et al. 1983), and a minimal protein intake of 1.1 g/kg/day is recommended (Fouque et al. 2007). However, the criterion for protein intake in the nomenclature for PEW corresponded to values below 0.8 g/kg/day, which was not associated with the risk of death in our study. The IDWG was found to be significantly lower in the presence of the nutritional markers than in their absence, except for serum albumin levels ≤38 g/L. IDWG is a surrogate of the patient's fluid and sodium intake but may also be an index of good appetite and nutritional status. A strong association between IDWG and nutritional markers has been reported, with a potential risk of malnutrition in HD patients if the weight gain is low (Lopez-Gomez et al. 2005;Sezer et al. 2002). Our results indeed show that subjects with insufficient protein intake (nPCR ≤0.8 g/kg/day) or with low BMI had lower IDWG than the others, which argues for probably better nutrition in HD patients with high IDWG. Patients on MHD suffer from multiple traditional and non-traditional risk factors that are associated with mortality, such as diabetes, cardiovascular disease and inflammation. The Cox proportional hazards modelling (Fig.
2) showed that the three comorbidities evaluated were all additive in predicting mortality in the presence of severe wasting. In fact, for the association of each risk factor with severe wasting, the HR of death was higher than the sum of the HRs of the single factors. Diabetes mellitus (DM) is the leading cause of end-stage renal disease in Guadeloupe (Foucan et al. 2000), as in many countries, and appears, in the present study, as the strongest predictive factor of death in association with severe wasting. Although the main cause of death was cardiovascular disease, the predictive value of diabetes was higher than that of pre-existing cardiovascular disease. A possible explanation is that the diabetic condition is associated with several abnormalities, such as inflammation, oxidative stress (Basta et al. 2004;Nath et al. 2013), prothrombogenic factors (Kirpichnikov and Sowers 2001) and accelerated atherosclerosis (Foley and Parfrey 1998), that lead to cardiovascular complications and poor prognosis in HD patients. In fact, as previously reported (Racki et al. 2007), pre-existing CVD was more prevalent in our MHD diabetic patients than in non-diabetics. Inflammation is known as an important contributor to PEW in ESRD patients, since pro-inflammatory cytokines stimulate protein catabolism (Bistrian et al. 1992), reduce albumin synthesis and contribute to anorexia (Plata-Salaman 1998). Protein-energy wasting is known to be frequent in diabetic dialysis patients (Malgorzewicz et al. 2004) and to interact with inflammation and CVD (de Mutsert et al. 2008). Finally, all these factors contribute to the high death rate in MHD patients. Our study has some limitations, including the fact that the findings are limited to a relatively small number of patients. In addition, data were obtained from a prevalent cohort and potential variations of the criteria during the study period were not taken into account. Another limitation was that we used a low BMI as one of the criteria for PEW, but lean body mass and fat mass were not measured, whereas it has previously been suggested that their influences on survival are different (Beddhu et al. 2003;Noori et al. 2010). One strength of our study was that the potential confounding effect of residual renal function was excluded, because all patients in the present study had no residual renal function, and in this situation higher creatinine levels relate to larger muscle mass and lower mortality (Park et al. 2013;Kalantar-Zadeh et al. 2010). There was also no bias in relation to the type of dialysis or dialysis membrane, since dialysis modalities were identical for all the subjects. In addition, the data were obtained in an ethnically homogeneous group of patients, and this is of importance since measures of serum creatinine and other nutritional markers may vary according to ethnic group.

Conclusion

In MHD patients, survival decreases with the increasing number of criteria for PEW, and an additional deleterious effect on survival related to severe wasting was observed in patients with diabetes, cardiovascular disease or inflammation. The nomenclature and diagnostic criteria for PEW proposed by the International Society of Renal Nutrition and Metabolism predict mortality in our Afro-Caribbean MHD patients and could help to identify patients at risk of severe wasting in order to provide them with adequate nutritional support. However, our results highlight the need to implement and evaluate strategies for the prevention and management of malnutrition in MHD patients.
Sexuality in autism: hypersexual and paraphilic behavior in women and men with high-functioning autism spectrum disorder. Like nonaffected adults, individuals with autism spectrum disorders (ASDs) show the entire range of sexual behaviors. However, due to the core symptoms of the disorder spectrum, including deficits in social skills, sensory hypo- and hypersensitivities, and repetitive behaviors, some ASD individuals might develop quantitatively above-average or nonnormative sexual behaviors and interests. After reviewing the relevant literature on sexuality in high-functioning ASD individuals, we present novel findings on the frequency of normal sexual behaviors and those about the assessment of hypersexual and paraphilic fantasies and behaviors in ASD individuals from our own study. Individuals with ASD seem to have more hypersexual and paraphilic fantasies and behaviors than general-population studies suggest. However, this inconsistency is mainly driven by the observations for male participants with ASD. This could be due to the fact that women with ASD are usually more socially adapted and show less ASD symptomatology. The peculiarities in sexual behaviors in ASD patients should be considered both for sexual education and in therapeutic approaches. Introduction Autism spectrum disorders (ASD) are neurodevelopmental disorders that comprise a heterogeneous group of conditions, which are characterized by impairments in social interaction and communication, as well as repetitive and stereotyped interests and behaviors. 1 Reported prevalence rates have risen markedly in recent decades (up to 1% lifetime prevalence), with more and more adults being diagnosed with ASD. 2 It is assumed that the male-to-female ratio is between 3 and 4 to 1, 3 and there exist particular gender differences in ASD. 4 Although nearly half of individuals with ASD are not intellectually impaired and have normal cognitive and language skills (such as individuals with high-functioning autism or Asperger syndrome), the social interaction and communication deficits and difficulties in seeing the perspective of others and intuitively understanding nonverbal social cues constitute hidden barriers to the development of romantic and sexual relationships. 5,6 Sexuality-related problems can arise, especially at the start of puberty, a time when the development of ASD individuals' social skills cannot keep up with increasing social demands, and the challenges of forming romantic and sexual relationships become particularly apparent. 7 Studies on sexuality in individuals with ASD About 10 years after the official entry of autism in the third edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-III) in 1980, the first systematic studies on the sexuality of patients with ASD were published. [8][9][10][11] The current state of research on sexual experiences, sexual behaviors, sexual attitudes, or sexual knowledge of ASD individuals is rather mixed, with some studies finding differences from healthy controls (HCs) while others do not. However, because of the heterogeneous nature of the disorder spectrum and the diverse scientific methodology of the studies, this is not surprising.
Previous studies have: (i) included female and/or male patients in residential settings with presumably more impairments and fewer opportunities for sexual experiences; (ii) focused on persons with intellectual impairments or other comorbid developmental disabilities, thereby leading to confounding effects; (iii) used online surveys in which only higher-functioning individuals took part; (iv) relied on reports from family members and care-givers or from the patients themselves; and (v) assessed individuals with ASD in different age ranges. These studies suggest that many individuals with ASD seek sexual and romantic relationships similar to the non-ASD population 12,13 and have the entire spectrum of sexual experiences and behaviors. [12][13][14][15][16][17][18] However, there are still many stereotypes and societal beliefs about individuals with ASD, referring to them as uninterested in social and romantic relationships and as being asexual. 10,19,20 Table I presents an overview of studies assessing different aspects of sexuality in young and older adults with high-functioning autism, on the basis of self-report questionnaires. 11,12,15,[21][22][23][24][25][26][27][28][29][30][31][32][33] We specifically focused the literature review on these studies because their methodology corresponds to the research approach used in the study presented here. Table I. Literature overview. Note: The following terms were used in the systematic literature search: "sexual," "sexuality," "sexual behavior," "sexual disorder," "sexual relationship," "Asperger," and "Autism" in different combinations. The databases PubMed, PsycINFO, and Web of Science were searched. Only studies assessing sexual behavior in individuals with high-functioning autism (HFA) and using self-report measures were included in the table. ASD, autism spectrum disorder; HC, healthy control; SD, standard deviation. The studies presented in Table I confirm that sexuality does matter in ASD individuals, and it becomes clear that the whole spectrum of sexual experiences and behaviors is represented in this group. [11][12][13]15,[20][21][22][23][24][25][26][27][28][29][30][31] Most of the hitherto existing research has focused on men, and few studies have addressed gender-specific issues concerning social, emotional, and cognitive domains, and even fewer studies exist examining sexuality independently in men and women with ASD. 12,13,20,32 The few clinical observations 32 and the small set of systematic studies indicate that women with ASD might present less pronounced social and communication deficits and have special interests that are more compatible with the interests of their peer groups. [33][34][35][36] Furthermore, women with ASD seem to apply coping strategies, such as imitating the social skills of their non-ASD peers, therefore being more socially unobtrusive. 34 Regarding sexuality-related issues, women with ASD seem to have poorer levels of overall sexual functioning, feel less well in sexual relationships than do men with ASD, and are also at greater risk of becoming a victim of sexual assault or abuse.
37 Males with ASD were found to engage more in solitary sexual activities, [11][12][13][14]18,37 as well as to have a greater desire for sexual and romantic relationships 20 ; however, there is some evidence that females with ASD, despite having lower sexual desire, more often engage in dyadic relationships. 13 Although individuals with ASD seek sexual experiences and relationships, development and maintenance of romantic and sexual relationships are greatly affected by the deficits in social and communication skills and the difficulties in understanding nonverbal or subtle interactional cues and with mentalization (meaning being able to understand one's own and others' mental states, eg, emotions, desires, cognitions) experienced by such individuals. 6 Furthermore, many individuals with ASD do not receive sexual education that takes their behavioral peculiarities into consideration, and they are less likely to get information on sexuality from social sources. 5,22,38 Another point to consider is the restricted and repetitive interests, which may be nonsexual in childhood but can transform into and result in sexualized and sexual behaviors in adulthood. Furthermore, the frequently reported sensory sensitivities can lead to an overreaction or underreaction to sensory stimuli in the context of sexual experience. 39 In hypersensitive individuals, soft physical touches can be experienced as unpleasant; on the other hand, hyposensitive individuals may have problems in getting aroused and in reaching orgasm through sexual behaviors. 20 Taken together, the core symptoms of ASD combined with limited sexual knowledge and a lesser facility for having romantic and sexual experiences could predispose some individuals with ASD to developing challenging or problematic sexual behaviors, 22,38 such as hypersexual and paraphilic behaviors, and even sexual offending. Different terms have been used to describe quantitatively above-average sexual behaviors including sexual addiction, sexual compulsivity, sexual preoccupation, and hypersexuality. In this article, we will use the terms hypersexual behavior or hypersexuality referring to quantitatively relatively frequent sexual fantasies, sexual desire, and behaviors. 40,41 However, one should note that the mere presence of quantitatively above-average sexual behaviors does not qualify for assignment of a psychiatric diagnosis (like hypersexual disorder or compulsive sexual behavior disorder). Kafka proposed that diagnostic criteria for a hypersexual disorder diagnosis be included in DSM-5. 40 These criteria define a hypersexual disorder as recurrent and intense sexual fantasies, urges, or sexual behaviors over a period of at least 6 months, causing clinically significant distress, and that are not due to other substances or medical conditions; also, the individual has to be at least 18 years of age. 40,42 Although Reid and colleagues have shown that hypersexual disorder may be validly and reliably assessed through use of these diagnostic criteria, the American Psychiatric Association nevertheless rejected such use because of the still insufficient state of research, calling for more studies about the cross-cultural assessment of the disorder, for representative epidemiological studies, and for studies on the etiology and associated biological features. 
43 For the proposed eleventh edition of the International Classification of Diseases (ICD-11), the following definition for diagnosis of compulsive sexual behavior disorder 41 is being considered: Compulsive sexual behavior disorder is characterized by persistent and repetitive sexual impulses or urges that are experienced as irresistible or uncontrollable, leading to repetitive sexual behaviors, along with additional indicators such as sexual activities becoming a central focus of the person's life to the point of neglecting health and personal care or other activities, unsuccessful efforts to control or reduce sexual behaviors, or continuing to engage in repetitive sexual behavior despite adverse consequences (eg, relationship disruption, occupational consequences, negative impact on health). The individual experiences increased tension or affective arousal immediately before the sexual activity, and relief or dissipation of tension afterwards. The pattern of sexual impulses and behavior causes marked distress or significant impairment in personal, family, social, educational, occupational, or other important areas of functioning. With regard to paraphilias, the DSM-5 now distinguishes between paraphilias and paraphilic disorders, thereby aiming at a destigmatization of nonnormative sexual interests and behaviors that do not cause distress or impairment to the individual or harm to others. 42 In the DSM-5, paraphilias are defined as "any intense and persistent sexual interest other than sexual interest in genital stimulation or preparatory fondling with phenotypically normal, physically mature, consenting human partners" (see Box 1 for a list of paraphilic disorders included in DSM-5). 44
Box 1. Paraphilic disorders included in DSM-5:
• Exhibitionistic disorder: sexual arousal through exposing one's genitals or sexual organs to a nonconsenting person.
• Fetishistic disorder*: sexual arousal through play with nonliving objects.
• Frotteuristic disorder: sexual arousal through rubbing one's sexual organs against a nonconsenting person.
• Sexual masochism disorder*: sexual arousal by being bound, beaten, or otherwise made to suffer physical pain or humiliation.
• Sexual sadism disorder: sexual arousal by inflicting psychological or physical suffering or pain on a sexual partner.
• Transvestic disorder*: sexual arousal through dressing and acting in a style or manner traditionally associated with the opposite sex.
• Voyeuristic disorder: sexual arousal from watching others when they are naked or engaged in sexual activity.
• Pedophilic disorder: primary or exclusive sexual attraction to prepubescent children.
Although the proposed criteria for paraphilic disorders in the ICD-11 resemble those of the DSM-5, one major difference between these two diagnostic manuals is the removal of paraphilic disorders diagnosed primarily on the basis of consenting behaviors that are not in and of themselves associated with distress or functional impairment. This led to the ICD-11 exclusion of fetishistic, sexual masochism, and transvestic disorder. 41 56 However, to our knowledge, all previous studies on hypersexual and paraphilic behaviors have been conducted in males and in most cases with cognitively impaired ASD individuals. After having reviewed the literature, we aimed to investigate hypersexual behaviors as well as paraphilic fantasies and behaviors in a large sample of male and female ASD patients compared with HCs matched according to gender, age, and educational level.
Participants To get direct information from individuals with ASD and to study a preferably homogeneous sample, we only included adult individuals with ASD without intellectual impairments. The rationale for including only individuals with high-functioning autism or Asperger syndrome was to reduce the potentially confounding effect of intellectual disability and thus be able to directly study the impact of ASD on sexuality. On the basis of self-report, all patients were diagnosed by an experienced psychiatrist or psychologist (n=90, Asperger syndrome; n=6, atypical autism); the mean age at which patients received their ASD diagnosis was 35.7 years (standard deviation [SD]=9.1 years; range=17 to 55 years). The ASD patient group (mean score [M]=26.7; SD=4.9) had significantly higher scores than HCs (M=6.4; SD=3.3) on the German version of the Autism Spectrum Quotient Short Form (AQ-SF; P<0.001). 57 All ASD patients and none of the HCs scored above the proposed cut-off value of 17 points. 57 Participants in both groups were matched for gender, age, and years of education (Table II). Procedure The ethical review board of the Hamburg Medical Council approved the study protocol. For recruitment of individuals diagnosed with ASD, self-help groups throughout Germany were contacted and asked to distribute the study brochure among their participants. Further participants were recruited through the autism outpatient center at the University Medical Center Hamburg-Eppendorf, Germany. Autism Spectrum Quotient Short Form, German version The German version of the Autism Spectrum Quotient Short Form (AQ-SF) questionnaire 57 was used for the assessment of autistic symptoms in all participants. A threshold score of 17 was identified to be a good cutoff value for screening purposes and yielded a sensitivity of 88.9% and a specificity of 91.6%, with an area under the receiver operating characteristics curve of 0.92 in the German validation sample. 58 Hypersexual Behavior Inventory (HBI-19) The Hypersexual Behavior Inventory (HBI-19) 58,59 consists of 19 items and assesses hypersexual behaviors. All items have to be answered on a 5-point Likert scale and are phrased gender neutrally. Participants who have a score above 49 are usually classified as hypersexual. The German version of the questionnaire yielded an excellent internal consistency of α=0.90 for the total score. 60 Questionnaire about Sexual Experiences and Behaviors (QSEB) The Questionnaire about Sexual Experiences and Behaviors (QSEB) 61 consists of 120 items and assesses information concerning family background, sexual socialization, sexual behaviors, and different sexual practices. Furthermore, the questionnaire assesses information about sexual fantasies and behaviors (including paraphilic sexual fantasies and behaviors). Most items refer to an observational period of 12 months; in clinically relevant items, the questionnaire asks participants to specify the duration for which the clinical symptom has been present. For the present study, only the items concerning the frequency of masturbation and partnered sexual activities, as well as paraphilic fantasies and behaviors, were analyzed. Statistical analyses Group differences were analyzed using χ2 tests for categorical variables and t-tests for independent samples for continuous variables.
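As a rough illustration of these group comparisons, and of the false-discovery-rate control described in the next paragraph, the following Python sketch uses SciPy and statsmodels; the counts and simulated scores are hypothetical placeholders, not the study's data.

import numpy as np
from scipy.stats import chi2_contingency, ttest_ind
from statsmodels.stats.multitest import multipletests

# Hypothetical data: relationship status counts per group and HBI sum scores per group.
contingency = np.array([[27, 69],   # ASD: in relationship / not
                        [78, 18]])  # HC:  in relationship / not
chi2, p_cat, dof, _ = chi2_contingency(contingency)

asd_hbi = np.random.default_rng(0).normal(35.1, 13.7, 96)  # simulated continuous scores
hc_hbi = np.random.default_rng(1).normal(29.1, 8.7, 96)
t_stat, p_cont = ttest_ind(asd_hbi, hc_hbi)

# Benjamini-Hochberg correction across all tests performed on the same data set.
reject, p_adjusted, _, _ = multipletests([p_cat, p_cont], alpha=0.05, method="fdr_bh")
print(p_adjusted, reject)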
Because multiple statistical tests were performed on the same data set, we controlled the level of significance for the accumulation of type-I error through use of the false discovery rate (FDR), based on the approach developed by Benjamini and Hochberg. 62 Controlling for multiple testing leads to a reduction in the P-value threshold. In the present study, the corrected P-value threshold was 0.0158, meaning that only P-values below this cutoff should be considered as significant. Thereby, the FDR is less conservative than the traditionally used Bonferroni correction; however, just recently, it was suggested that the FDR should receive preference over the Bonferroni method, especially in health and medical studies. 63 Relationship status Of the individuals with ASD, significantly more women (n=18; 46.2%) than men (n=9; 16.1%) were currently in a relationship (P<0.01). No significant difference was found in the number of women (n=11; 27.5%) and men (n=8; 14.3%) with ASD who reported having their own children. Comparing the ASD individuals with the HCs, we observed that significantly more HC women (n=31; 79.5%; P<0.01) and more HC men (n=47; 82.4%; P<0.01) than individuals with ASD were currently in a relationship. No differences were observed in the number of participants having their own children (HCs: n=7; 7.3%). Females As shown in Table III, no differences were found between the female participants in the frequency of masturbation (P>0.05). However, female HCs indicated more frequent sexual intercourse than the female ASD patients (P<0.05). The same pattern was found with regard to the question "how often do you desire to have sexual intercourse," indicating that HC women had a greater desire for sexual intercourse than their ASD counterparts (P<0.05). Males With regard to the masturbation frequency in men, male ASD participants reported more frequent masturbation than male HCs (P<0.01). In the comparison of the frequency of sexual intercourse, an opposite pattern was found, with HCs reporting a higher frequency of sexual intercourse than ASD individuals. ASD men reported a greater sexual desire for sexual intercourse than their HC counterparts (P<0.05, Table III). Hypersexual behaviors On the HBI, ASD patients (HBI sum score=35.1; SD=13.7) had a significantly higher sum score than the HCs (HBI sum score=29.1; SD=8.7; P<0.001), and significantly more ASD individuals had scores above the proposed cutoff value of 49 points and could thus be classified as hypersexual (P<0.01). As shown in Table IV, men with an ASD diagnosis reported more hypersexual behaviors, whereas there were no such differences between female patients with ASD and female HCs. Furthermore, whereas 17 male individuals with ASD scored above the cutoff value of 49 points and could thus be described as hypersexual, only two male HCs scored above the proposed cutoff (P<0.001). No difference was found between female ASD patients and HCs in the rate of hypersexuality. Paraphilic fantasies and behaviors Altogether, paraphilic sexual fantasies and behaviors were reported more frequently in male patients with ASD than in male HCs. After correcting for multiple testing, significant differences were still present in the number of individuals reporting masochistic fantasies, sadistic fantasies, voyeuristic fantasies and behaviors, frotteuristic fantasies and behaviors, and pedophilic fantasies with female children (see Table IV).
Female patients with ASD showed no differences in the frequency of paraphilic fantasies or behaviors in comparison with their HC counterparts, except in the frequency of masochistic behaviors, where more female HCs indicated masochistic behaviors than the female ASD patients. Discussion To our knowledge, this is the first study to explore gender-specific aspects of hypersexual and paraphilic fantasies and behaviors in a cohort of high-functioning individuals with ASD in comparison with a matched control group. Our main findings are that individuals with ASD show more hypersexual and paraphilic fantasies and behaviors than HCs. Previous research suggested that in individuals with ASD, although mainly regarded as being heterosexual, 18 there were higher rates (up to 15% to 35%) of homosexual or bisexual orientation than in the non-ASD population. 14,64 In the present study also, fewer individuals with ASD reported being heterosexual than HCs; however, it has to be noted that all HCs were heterosexual and are thus not comparable to the general population. In the Global Online Sexuality Survey, a total of 10% of participants indicated being homosexual. 65 Different assumptions have been made about the broader range of sexual orientation in the ASD population. Maybe gender is not that relevant in choosing a partner, due to limited access to romantic or sexual relationships and limited experience and sociosexual exchange with their peers. In combination with less sexual knowledge, this could lead to a restricted understanding of sexual orientation or preference. 33,35,37 Furthermore, there is evidence that ASD individuals are possibly more tolerant toward same-sex relationships, 15 and it could be possible that ASD individuals choose their sexual preferences more independently of what is socially accepted or demanded, maybe partly due to a lower sensitivity to social norms or gender roles. 15 Significantly more HCs than individuals with ASD reported being in a relationship, with marked gender-specific differences. More women than men with ASD were in a relationship. The results of other studies examining gender differences in relationship status are inconclusive, but there is some evidence that although men desire dyadic relationships more than women, ASD women are more often in a romantic and sexual relationship. 11,31 This could be due to the ASD women's ability to call on more advanced coping strategies (eg, imitating the social skills of their non-ASD peers), leading to less impairment in social functioning. [33][34][35][36] Regarding the frequency of sexual behavior, women with ASD reported more solitary than person-oriented sexual behavior and less desire to have sexual intercourse with a partner than their non-ASD female counterparts. A similar pattern was found in ASD males, which is in line with other studies. 12,23,24,33 However, disregarding social norms together with the frequently found restricted social skills and the sensory hyposensitivities or hypersensitivities could also increase the risk for engaging in nonnormative or quantitatively above-average sexual behaviors. 22,38 Underscoring this assumption, we found that hypersexual behaviors were more frequently reported for ASD individuals than HCs; however, these differences were mainly driven by the male ASD patients, and no differences between the female groups were observed.
On the basis of precise operationalization of hypersexual behaviors, previous studies have found prevalence estimates ranging from 3% to 12% for healthy male subjects. [66][67][68] In an online survey of almost 9000 German men, Klein and colleagues found a prevalence of hypersexual behaviors (defined as more than seven orgasms per week over a period of 1 month) of 12%. 69 Clearly, this indicates that more male ASD subjects in our study showed hypersexual behaviors than these populationbased estimates. So far, only Fernandes and colleagues have assessed hypersexual behaviors in ASD individuals and found lower rates than we did. 70 Of the 55 high-functioning male ASD individuals assessed, 7% reported on hypersexual behaviors, defined as more than seven sexual activities per week, and 4% were engaged in sexual activities for more than 1 hour a day, which is clearly below the numbers found in the present study. However, Fernandes et al did not mention how they defined sexual activities, and it is possible that the participants in their study only rated dyadic sexual activities, explaining the lower number of hypersexual behaviors. 70 The possible causes of the higher rates of hypersexuality in ASD men remain unclear, but it can be hypothesized that they are a part of the repetitive behaviors or influenced by sensory peculiarities. Because we did not differentiate between person-oriented and self-oriented sexual behavior, the higher rate of hypersexual behaviors in the ASD men could also be an expression of excessive masturbation, which has been found in other studies and case reports. It was suggested that excessive masturbatory behavior could reflect the desire to be sexually active although not being able to achieve this because of problems engaging in a dyadic sexual relationship due to limited social skills. 14,[46][47][48]52 With regard to women, much less research has been conducted about the frequency of hypersexual behaviors, and due to small sample sizes, prevalence estimates range from 4% to 40% in the general population. 60 In the German validation study of the HBI, 4.5% of the almost 1000 women included scored above the proposed hypersexuality cutoff. 59 As part of the DSM-5 field trials for hypersexual disorder, it was found that 5.3% of all patients seeking help at a specialized outpatient care center were women, 43 indicating that the rate of hypersexual behaviors might be much lower in women than in men. As female ASD patients seem to be better socially adapted and usually show less-pronounced ASD symptomatology (eg, less repetitive behaviors), it is not surprising that hypersexual behaviors in the present study were also found less frequently in female than in male ASD individuals. So far, there are almost no existent systematic studies about paraphilias in the ASD population 64,70 ; most of the information comes from case studies. Moreover, almost all case studies addressed paraphilic behaviors in male ASD individuals with some kind of cognitive impairment; thus, comparison with findings from the present study is clearly limited. In the study of Fernandes and colleagues (to our knowledge the only previous study that addressed paraphilias in high-functioning ASD men), the paraphilias found most frequently were voyeurism and fetishism. 70 Voyeuristic fantasies and behaviors were also among the most frequently found paraphilias for ASD men and women in the present study. Furthermore, frequently reported paraphilias were masochistic and sadistic fantasies and behaviors. 
Again, this could be an expression of the pronounced hyposensitivity in the ASD population, indicating that such individuals need above-average stimulation to become sexually aroused. Furthermore, Fernandes et al found that the occurrence of a paraphilia was associated with more ASD symptoms, lower levels of intellectual ability, and lower levels of adaptive functioning, pointing out that lower cognitive abilities seem to be an important factor in the etiology of paraphilic fantasies and behaviors in ASD. 70 It can be hypothesized that awareness of social norms and behavioral self-control is even lower in ASD individuals with cognitive impairments, explaining the higher rate of paraphilic behaviors. Although many ASD individuals in the present study had paraphilic fantasies, considerably fewer individuals actually showed overt paraphilic behaviors, supporting the suggestion that high-functioning ASD individuals could have higher self-control abilities than ASD patients with cognitive impairments. Information on paraphilias in the general population is also scarce, with most of the studies involving men, mainly recruited in clinical or forensic settings. 71 In the general population, the prevalence rate of any paraphilia is assumed to be between 0.4% and 7.7%. [72][73][74][75] Also, using the QSEB, Ahlers et al found a rate of 59% for any paraphilic fantasies and a rate of 44% for any paraphilic behavior in their general-population sample of 367 German men, with the most common paraphilic fantasies being voyeuristic (35%), fetishistic (30%), and sadistic (22%) fantasies. 61 In the present study, especially for male ASD individuals, the rates of paraphilic fantasies and behaviors were higher than the prevalence estimates found in most of the general-population studies. Again, we found pronounced gender differences in the frequency of paraphilic fantasies and behaviors in our ASD population. A possible explanation for these differences could be that a stronger sex drive in ASD men could mediate the existence of paraphilias via a heightened energy in acting out their sexual interests or that those with a high sex drive more easily habituate to certain activities, thereby leading them to strive for novel activities. 71,76,77 Furthermore, hypersexuality could also lead to lower baseline sexual disgust or aversion toward paraphilic fantasies or behaviors clarifying the link between the higher rate of hypersexual, as well as paraphilic, behaviors. 77 The results of our study are limited because they are solely based on self-report, and one cannot be sure that all participants were diagnosed by a trained psychologist or psychiatrist. However, all ASD participants scored above the cutoff value of the German version of the AQ, ensuring that they showed pronounced ASD symptomatology. Furthermore, all participants were recruited through ASD self-help groups or ASD outpatient care centers, indicating that their contact with the medical system was due their symptomatology. Our study results are also limited by the potential that individuals with a higher interest in sexuality-related issues, and perhaps also having more sexual problems, were more likely to volunteer to participate, thus affecting the study population. This could have led to an overestimation of the actual rate of hypersexual and paraphilic fantasies and behaviors in the ASD group. Nevertheless, if true, this should also have occurred in the HC group. 
The present study is the first to examine hypersexual and paraphilic fantasies and behaviors in a large sample of high-functioning male and female ASD individuals in comparison with a matched control group. It shows that although ASD individuals have a high interest in sexual behaviors, many of them also report some sexual peculiarities because of their specific impairments in social and romantic functioning.
Addressing Combative Behaviour in Spanish Bulls by Measuring Hormonal Indicators Simple Summary Aggressiveness in fighting bulls is a natural characteristic of this breed, but little is known regarding its physiological mechanisms. Hormones such as serotonin, dopamine and testosterone are known to be involved in the development of aggressive behaviour. This study determines that serotonin and dopamine levels are correlated with combative behaviour, and their determination in calves is a useful indicator for the selection of combative bulls. Abstract The fighting bull is characterised by its natural aggressiveness, but the physiological mechanisms that underlie its aggressive behaviour are poorly studied. This study determines the hormonal component of aggressiveness in fighting bulls by analysing their behaviour during a fight and correlating it to their serotonin, dopamine and testosterone levels. We also determine whether aggressive behaviour can be estimated in calves. Using 195 animals, samples were obtained when the animals were calves and after 5 years. Aggressiveness scores were obtained by an observational method during bullfights, and serotonin, dopamine and testosterone levels were determined in all animals using validated enzyme immunoassay kits. The results revealed a strong correlation of serotonin and dopamine levels with aggressiveness scores in bulls during fights, but no correlation was found with respect to testosterone. These correlations allowed us to establish cut-off points and linear regression curves to obtain expected aggressiveness scores for calves at shoeing. There were no significant differences between the expected scores obtained in calves and the observed scores in bulls. Therefore, this study demonstrates that hormone determination in calves may be a great indicator of combativeness in bulls and can reliably be used in the selection of fighting bulls. Introduction Aggressiveness is defined as a tendency to act or respond violently. It is considered a primitive and conserved animal behaviour. The domestication of different animal species, including cattle, has led to the selection of more docile animals, discarding animals with aggressive traits [1]. However, in some species, aggressiveness is a trait used for the selection of certain breeds, such as in the Lidia cattle breed. Popularly known as the fighting bull, this cattle breed is characterised by a strong physical appearance, extraordinary strength and natural aggressiveness [2]. Lidia cattle are mainly used for breeding under extensive conditions in central and southern Spain, France, Portugal and Latin American countries. This breed, also known as "toro bravo", refers to the male specimens of a heterogeneous bovine population developed, selected and bred for use in different bullfighting spectacles. They come from the indigenous breeds of the Iberian Peninsula. They are characterised by atavistic instincts of defence and temperament, expressed as so-called "bravery", as well as physical attributes such as large forward-facing horns and a powerful locomotor apparatus [2,3].
Its breeding system is extensive, and its importance in traditional shows has promoted wide diversity in the breed, given the requirement for bulls with different behaviours and characteristics that have been traditionally selected for aggressiveness, ferocity and mobility. The isolation of individuals from the herd, their confinement or their harassment triggers the aggressive responses that characterise the breed [2]. This has led to the population fragmenting into subpopulations, traditionally called 'encastes', with different phenotypic characteristics and different gene flow levels among them [3]. Several studies have determined that aggressiveness (the animal's ability to confront or to try to escape) and ferocity (the amount of force it uses to attack with its whole body and its resistance to pain) have strong genetic [4] and environmental bases [5]. However, other studies have found that even hard-tempered cattle (including fighting bulls) and other species eventually habituate to new environmental conditions and reduce their behavioural reactivity [6]. The mechanisms that underlie aggressive behaviour have been studied in numerous species, and a number of neurophysiological hormones and neurotransmitters are involved in the development of aggressive behaviour [7]. A well-established molecule implicated in the neurobiological mechanism of aggression is serotonin. Serotonin (5-hydroxytryptamine, or 5-HT) is a monoamine neurotransmitter and a hormone synthesised mainly in serotonergic neurons in the central nervous system, in the pineal gland and in the enterochromaffin cells (Kulchitsky cells) of the gastrointestinal tract. The main source of serotonin in the blood circulation is the platelets [8]. Serotonin plays an important role as a neurotransmitter in aggression inhibition and endocrine regulation, along with appetite and body temperature control, mood, sleep, cardiovascular functions, muscle contraction, learning and memory [9]. Low serotonergic activity has been associated with a history of aggressive behaviour [7,[10][11][12][13], both in primates and in humans with impulsive-type aggression, and not with controlled or predatory aggression [11,[14][15][16]. The relationship between low serotonergic activity and impulsive aggression has also been tested in different animal models such as mice, rats, and dogs. Several studies indicate that, in general, serotonin has an inhibitory effect on the brain [11,17] and is deeply involved in the regulation of emotion and behaviour, including the inhibition of aggression [18]. Therefore, it can be argued that serotonergic circuits are determinants in the control of aggression in humans and other animals [19,20]. Dopamine, in contrast, is a hormone and neurotransmitter that has several functions in the central nervous system, including behaviour and cognition control, motor activity, motivation and reward, milk production regulation, sleep, mood, attention and learning [21]. These monoamines are recognised for their capacity to modulate aggressive behaviour. Whilst the serotonergic neurons that exert this influence have been identified, less is understood about the specific role of dopaminergic neurons in controlling aggression [22].
Pharmacologically induced dopamine increases are associated with increased aggressive behaviour under some conditions. Low to moderate doses of amphetamine or apomorphine can heighten the aggression levels of isolated mice or rats, whereas higher doses of amphetamine increase the defensive responses of rats [23]. The dysregulation of dopamine mechanisms in response to environmental stimuli leads to the perception of these stimuli as threatening. This decrease could be involved in the occurrence of affective or defensive aggression responses [24]. However, increased dopaminergic activity of limbic regions facilitates impulsive-type aggression in rats [25]. In addition to these neurotransmitters, steroid hormones such as testosterone are also involved in aggressive behaviour. Testosterone is a steroid hormone that is produced in the adrenal cortex and in the male and female gonads. In vertebrates, testosterone is involved in the modulation of sexual and aggressive behaviour. The relationship between elevated testosterone activity and aggressive-competitive-dominance behaviour is well established in all mammals. This type of response is termed hormonal or testosterone-dependent aggression. However, testosterone does not directly change behaviour, but increases the probability that a behavioural response will occur in the presence of specific stimuli [26]. Specifically, some authors indicate that aggressive and dominance behaviour in non-human primates and humans is less dependent on androgen mechanisms and that neurochemical control must be the main mechanism of aggressive behaviour [27]. In the brain, androgen and oestrogen receptors are mainly located in the limbic system [26], although they can also be found diffusely distributed throughout the brain. Testosterone implantation in the hypothalamus and caudate nucleus of castrated male rats can restore aggressive behaviour eliminated by castration [28]. Intraspecific social aggression (territorial, inter-male, rank-related) is a form of hormonal aggression in non-human mammals [29]. This form of competitive hostility is also observed in females, especially during lactation and offspring care (maternal aggression) [30]. Also, Peterson and Harmon-Jones (2012) associated the subjective experience of anger in humans with an increase in testosterone but not cortisol concentrations [31]. These factors are involved in the aggressive behaviour of several animal species. By its nature, the fighting bull is defined as an aggressive animal [2]. During the fight, the animal is exposed to numerous stimuli that promote changes in the animal's behaviour. However, the mechanisms underlying these behavioural changes in bulls during fights have been little studied. The aim of this study is therefore to determine the changes in the neuroendocrine variables serotonin, dopamine and testosterone during fights and their relation with aggressive behaviour. In addition, this study investigates whether these neuroendocrine variables are constant throughout the life of these animals and related to the development of aggressive behaviour, with the aim of using them as markers for the selection of fighting bulls. Animals A total of 195 male Lidia bulls (Bos taurus L.)
of different ages were used for this study. The animals in this study belonged to the following herds, all of which are registered in the Union de Criadores del Toro de Lidia (UCTL) of Spain: Jandilla, Fuente Ymbro, Juan Pedro Domecq, Núñez del Cuvillo, Cebada Gago, Victorino Martín, Adolfo Martin, Fermín Bohórquez, Urcola, Barcial and Francisco Galache. The animals studied belonged to six different "encastes" (all of them belonging to the Lidia breed): Domecq (DQ), n = 37; Núñez (UN), n = 33; Albaserrada (AL), n = 32; Murube (MU), n = 34; Urcola (UR), n = 31 and Vega-Villar (VV), n = 28. An encaste is defined as follows: a strain, variety or closed population of animals of a breed which has been created based on reproductive genetic isolation for a minimum of five generations (RD 2129/2008, of 26-XII). Three groups of animals were established in this study:
• Lidia calves: 180 calves between 6 and 8 months of age that were sampled at the time of shoeing.
• Control: 15 bulls destined for bullfights that were not used in bullfights and were slaughtered in the slaughterhouse of the arena.
• Lidia bulls: of the 180 calves sampled at the time of shoeing, only 135 reached the age of 4 to 5 years, since 45 animals did not reach that age for reasons unrelated to this study.
All procedures involving animals were reviewed and approved by the Animal Research and Ethics Committee of the Complutense University of Madrid (Reference number: UCM 0016/005). Bullfighting Bullfighting consists of a fight between a human (bullfighter) and an animal (bull) that ends with the death of the animal. It is governed by rules established by the Regulations of Bullfighting Shows [32]. Usually, the bull arrives at the arena and, hours before the start of the bullfight, it is isolated. When the bullfight begins, the bull is released into the arena, where it is received with a cape by the bullfighter. Afterwards, the bullfight is divided into three thirds:
• Third of sticks: the animal receives one or two pricks in the front area of the back.
• Third of skewers: three pairs of skewers are placed in the front area of the animal's back.
• Third of crutch: the bullfighter passes the crutch to the bull and the fight ends with the death of the animal.
Blood Sample Collection, Processing and Hormonal Analysis All animals were sampled at two different times: at the farrier stage (calves) and after the bullfight (bulls). Control bulls were sampled at 5 years of age. In the calf group, the animals were separated from their mothers 15 h prior to sample collection and kept in pens together until the time of marking. At that time, they were isolated and placed in a dressing box where they remained for 2 to 3 min. During this time, the animal was shod, and blood was obtained from the caudal vein.
Blood samples from controls and ordinary fighting bulls were obtained from the jugular vein immediately after the bullfight in the slaughterhouse of the arena. All blood samples were kept refrigerated at 4 °C from collection to processing. To obtain the serum samples, blood samples were centrifuged at 1200× g and 4 °C for 20 min, and the serum was separated and stored frozen at −20 °C before being assayed. The serotonin, dopamine and testosterone concentrations were determined in serum samples using commercial kits (Demeditec Diagnostics GmbH, Kiel, Germany) following the manufacturer's instructions. These are competition-type ELISA kits, in which the hormone in the sample competes with hormone fixed to the solid phase for antibody binding. All commercial kits used were validated for bovine species, and the hormone concentrations are expressed in ng/mL. The detection limits were serotonin, 6.2 ng/mL; dopamine, 49 pg/mL; and testosterone, 83 pg/mL. The intra- and inter-assay coefficients of variation were less than 20% for all hormones analysed. The recovery rates were as follows: serotonin, 96%; dopamine, 89%; and testosterone, 93%. Direct Observational Method To evaluate the aggressive behaviour of bulls during a fight, a direct observational method was used, describing, through the observer's perception, the bull's stimuli and behavioural actions related to aggressiveness. These observations were recorded in a simple template expressly designed for this purpose that is detailed in the Supplementary Materials (Table S1). The template was divided into three main parts. The observers scored these sections in a range from 1 to 5, with 1 being the partial score of an animal that did not present any of the aggressive actions and 5 the partial score of an animal that developed the four aggressive actions specified in each part. Thus, for each aggressive action developed by a bull, one point was added to each partial score. The average of the three partial scores, corresponding to the three parts of the bullfight, was calculated, which resulted in a preliminary score. This allowed us to determine an aggressiveness score for each animal's behaviour in a more objective way, since this template only recorded the aggressive actions of the animal: 1. Very aggressive. Statistical Analysis GraphPad Prism version 6 (GraphPad Software, San Diego, CA, USA) was used for statistical analysis of the results. To study whether there were differences among the different groups (bulls, calves and controls) with respect to the variables studied (serum concentrations of serotonin, dopamine and testosterone), an analysis of variance (ANOVA) was used. Distributions of values were considered to differ significantly between two groups at a p-value < 0.05. This test was also used to compare the variables cited in the different encastes. The parameters studied were correlated with the grade of aggressive behaviour assigned to the animal after its fight. Pearson's correlation test, a parametric test, was used in the case of the bull group. In the case of the calf group, Spearman's correlation test, a non-parametric test, was employed due to the smaller amount of data in this group.
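As a rough sketch of the correlation step just described (Pearson's test for the bull group, Spearman's test for the calf group), the following Python lines use SciPy; the hormone concentrations and scores are hypothetical placeholder values, not the study's data.

import numpy as np
from scipy.stats import pearsonr, spearmanr

# Hypothetical example values (serotonin in ng/mL, aggressiveness scores on the 1-5 scale).
bull_serotonin = np.array([420.0, 510.0, 650.0, 700.0, 760.0, 820.0])
bull_scores = np.array([4.5, 4.0, 3.5, 3.0, 2.5, 2.0])
r_bulls, p_bulls = pearsonr(bull_serotonin, bull_scores)   # parametric, bull group

calf_serotonin = np.array([450.0, 530.0, 600.0, 720.0, 800.0])
calf_scores = np.array([4.0, 4.0, 3.0, 2.5, 2.0])
rho_calves, p_calves = spearmanr(calf_serotonin, calf_scores)  # non-parametric, calf group

print(f"Bulls (Pearson): r = {r_bulls:.2f}, p = {p_bulls:.3f}")
print(f"Calves (Spearman): rho = {rho_calves:.2f}, p = {p_calves:.3f}")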
Receiver operating characteristic (ROC) curve analysis was performed to determine if serotonin, dopamine and testosterone were parameters that were indicators of the behaviour developed during the bullfight. This type of analysis also allowed us to choose cut-off points that would facilitate decision making when classifying the behaviour of an animal based on the concentration of a given physiological variable. For the comparison of the populations of expected and observed aggressive behaviour scores, a chi-square (χ2) statistical test was performed. Populations of values were determined to differ between the two groups when p < 0.05. Distribution of Aggressive Behaviour Scores in Bull Populations After the application of the direct observational method, an aggressive behaviour score was obtained for each animal studied during the fights (n = 135). The percentages corresponding to each grade are detailed in Table 1; the most frequent grades were 3 and 4, and the least frequent ones were 2.5, 4.5 and 5. The distribution of aggressive behaviour scores by breed was also analysed, revealing significant differences in aggressiveness scores among the different encastes (Figure 1). The cattle breeds with the lower average aggressive behaviour scores were Domecq, Murube and Núñez, whereas those with the highest aggressive behaviour scores were Vega-Villar, Urcola and Albaserrada. Serum Serotonin Concentrations No significant differences were found in serum serotonin concentration among the groups, although the controls had slightly lower serum serotonin concentrations (541.1 ± 4.9 ng/mL), similar to those observed in the calf group (546.7 ± 85.6 ng/mL) (Figure 2A). This indicates that serotonin concentrations are constant throughout the animal's life, since there were no differences between the mean serotonin values at 6-8 months of age (calf group) and at 4-5 years of age (control group).
Regarding dopamine levels, no significant differences were found among the different groups studied (Figure 3A). However, the control group showed serum dopamine concentrations (10.78 ± 0.46 ng/mL) similar to those observed in the calf group (11.05 ± 0.50 ng/mL). This indicates that the dopamine concentration is constant throughout the animal's life, since there were no differences between the mean dopamine values at 6-8 months of age and at 4-5 years of age. Also, the bull group showed higher serum dopamine concentrations (12.07 ± 0.50 ng/mL) compared to the control group, although this difference was not significant. Comparison among the encastes revealed that the lowest dopamine values were found in the animals belonging to the Urcola, Vega-Villar and Albaserrada encastes. The encastes with the highest serum dopamine concentrations were Murube, Núñez and Domecq. We observed statistically significant differences in the dopamine concentrations among breeds (Figure 3B). Serum Testosterone Concentrations We observed significant differences among the three groups studied (Figure 4A). A significant increase in testosterone concentrations was found in bulls (22.70 ± 0.52 ng/mL) and calves (9.74 ± 0.28 ng/mL) compared to the control group (4.22 ± 0.73 ng/mL). Bulls showed significantly higher testosterone levels than calves. However, the testosterone levels of the different encastes studied did not differ significantly (Figure 4B).
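For illustration, the one-way ANOVA used to compare hormone concentrations among the three groups (calves, controls and bulls) can be sketched as follows; the simulated values are only placeholders drawn around the reported group means and are not the study's measurements.

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)
# Hypothetical testosterone values (ng/mL) simulated around the reported group means.
calves = rng.normal(9.74, 2.0, 180)
controls = rng.normal(4.22, 1.5, 15)
bulls = rng.normal(22.70, 4.0, 135)

f_stat, p_value = f_oneway(calves, controls, bulls)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 indicates differences among groups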
Correlation between Hormonal Concentrations Measured in Fighting Bulls and Aggressiveness Scores during Their Fights A negative correlation was obtained between serum serotonin concentration and aggressiveness score, with a Pearson's coefficient of −0.78 (p < 0.001; Figure 5A). Also, a stronger negative correlation was found in the case of dopamine concentrations, with a Pearson's coefficient of −0.91 (p < 0.001; Figure 5B). These results suggest that the higher the serotonin and dopamine concentrations, the lower the aggressiveness score during the fight. Nevertheless, a positive correlation was obtained between serum testosterone concentrations and the aggressiveness score, with a Pearson's coefficient of 0.3675 (p < 0.01; Figure 5C). This indicates that there is a tendency that the higher the testosterone level, the higher the aggressiveness; however, the value dispersion was high. Based on these correlations, a linear regression equation was obtained, which allowed us to estimate the expected aggressive behaviour score for an animal by interpolating its serum hormone concentration. For the different hormones, the linear regression equations were as follows: Serotonin: y = −0.002262x + 3.851. Dopamine: y = −0.003145x + 3.65. Testosterone: y = 0.02814x + 1.619.
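As a small worked example, the reported regression equations can be applied directly to a measured serum concentration to obtain an expected aggressiveness score; the function name and the example concentrations below are illustrative assumptions.

def expected_score(concentration_ng_ml: float, hormone: str) -> float:
    # Linear regression equations reported above (score = slope * concentration + intercept).
    coefficients = {
        "serotonin": (-0.002262, 3.851),
        "dopamine": (-0.003145, 3.65),
        "testosterone": (0.02814, 1.619),
    }
    slope, intercept = coefficients[hormone]
    return slope * concentration_ng_ml + intercept

# Hypothetical example: a calf sampled at shoeing.
print(expected_score(550.0, "serotonin"))   # about 2.6
print(expected_score(10.5, "dopamine"))     # about 3.6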
Determination of the Hormonal Threshold Value in Fighting Bulls to Differentiate Subgroups of Animals with Different Behaviours Given the correlations observed between hormonal concentrations and the grade of aggressive behaviour assigned to each bull, we wanted to determine whether serotonin, dopamine and testosterone concentrations are valid parameters that allow us to separate the population studied into subpopulations of animals showing different degrees of aggressiveness during fights. Two possible classifications were considered (Figure 6). The first possible classification would be to separate the bulls into two groups (aggressive and non-aggressive), according to the score regarding their aggressive behaviour during their fight:
− Aggressive: Animals that scored between 3 (inclusive) and 5. These are animals that clearly manifested aggressive actions during all parts of the fight.
− Non-aggressive: Animals that scored between 1 and 3. These are the animals excluded from the previous group.
Another possible classification would be to separate the bulls by the lack of manifestation of aggressive behaviour during fights. In this case, the animals would be separated into combative and non-combative groups:
− Combative: Animals with scores between 2.5 (inclusive) and 5. These animals showed behaviour of combativity and zeal, clearly showing aggressive behaviour or not.
− Non-combative: Animals with scores between 1 and 2.5. These are the animals excluded from the previous group.
When the animals were sorted into the two different groups, they were further divided considering their serotonin and dopamine concentrations (Figure 7A,B). Non-combative and non-aggressive bulls presented significantly higher levels of dopamine and serotonin compared to combative and aggressive bulls. Due to the significant differences found in the two possible classifications for both hormones, we performed ROC curve analysis for each hormone to determine which classification was the most appropriate for each hormone.
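A minimal sketch of such a ROC analysis for one hormone is shown below using scikit-learn; the simulated data and the use of Youden's J statistic to pick a candidate cut-off are assumptions for illustration, not details reported by the study.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(7)
# Hypothetical labels (1 = non-combative, 0 = combative) and serotonin levels (ng/mL);
# non-combative animals are simulated with higher serotonin, as reported in the study.
labels = np.array([0] * 80 + [1] * 40)
serotonin = np.concatenate([rng.normal(600, 80, 80), rng.normal(780, 60, 40)])

auc = roc_auc_score(labels, serotonin)
fpr, tpr, thresholds = roc_curve(labels, serotonin)
best = np.argmax(tpr - fpr)  # Youden's J statistic (assumed criterion)
print(f"AUC = {auc:.3f}, candidate cut-off = {thresholds[best]:.1f} ng/mL")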
After performing the ROC curve analysis for serotonin concentrations, the animals that were classified as combative and non-combative showed an area under the curve of 0.9478 (p < 0.005). When the animals were classified into aggressive and non-aggressive groups, the area under the curve was 0.8878 (p < 0.005; Figure 8A). In the case of the ROC curve analysis for dopamine concentrations, the area under the curve was 0.9216 (p < 0.001) when the animals were classified as non-combative and combative, and 0.9034 (p < 0.001) when the animals were classified as aggressive and non-aggressive (Figure 8B).
According to these results, it was decided to use the classification of combative/non-combative for both hormones as the most appropriate one, since it presented higher areas under the ROC curve. This classification led us to determine the cut-off point in serotonin and dopamine concentrations that would allow us to classify the animals according to their behaviour.
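A minimal sketch of this kind of ROC comparison is given below (Python with scikit-learn, hypothetical data). Because lower hormone concentrations correspond to the combative/aggressive class, the negated concentration is used as the score so that higher values indicate the positive class.

import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: serum serotonin (ng/mL) and aggressiveness scores (1-5).
serotonin = np.array([420.0, 530.0, 610.0, 700.0, 760.0, 820.0, 500.0, 900.0])
score = np.array([4.5, 4.0, 3.5, 3.0, 2.0, 1.5, 2.7, 1.0])

# Labels for the two candidate classifications described in the text.
combative = (score >= 2.5).astype(int)
aggressive = (score >= 3.0).astype(int)

# Lower serotonin is associated with the positive (combative/aggressive) class,
# so the concentration is negated before computing the area under the ROC curve.
auc_combative = roc_auc_score(combative, -serotonin)
auc_aggressive = roc_auc_score(aggressive, -serotonin)
print(f"AUC (combative vs non-combative):   {auc_combative:.3f}")
print(f"AUC (aggressive vs non-aggressive): {auc_aggressive:.3f}")
# The classification with the larger AUC would be retained, as in the study.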
The cut-off points chosen for serotonin and dopamine concentrations and their specificity and sensitivity values are summarised in Table 2. For serotonin, a cut-off point of 708.5 ng/mL was chosen, with a specificity of 90.63% and a sensitivity of 80.49%. In the case of dopamine, a cut-off point of 12.24 ng/mL was selected, with a specificity of 91.23% and a sensitivity of 82.50%. These results indicate that those animals with serum serotonin and dopamine concentrations below 708.5 and 12.24 ng/mL, respectively, can be classified as combative. Those animals with serum serotonin and dopamine concentrations greater than 708.5 and 12.24 ng/mL, respectively, were classified as non-combative animals.
The resulting specificity percentages (90.63% for serotonin and 91.23% for dopamine) indicated that those percentages of animals that were classified as non-combative were correctly classified, and the remaining ones (9.37% for serotonin and 8.67% for dopamine) were combative animals wrongly classified as non-combative ones. Similar to the specificity percentages, the sensitivity values indicate that 80.49% and 82.50% of animals, for serotonin and dopamine, respectively, were detected with the chosen cut-off point, whereas the remaining ones (19.51% for serotonin and 17.5% for dopamine) were non-combative animals that were not identified correctly.
Regarding the testosterone concentrations, classification with respect to the aggressiveness scores of the bulls during the fights was carried out as in the case of the serotonin and dopamine concentrations (Figure 6). When the animals were partitioned into the two different classes, the distribution of animals considering testosterone concentrations was as follows. In both classes, the two populations overlap (Figure 9A), resulting in a non-significant t-test analysis in both classifications (non-combative vs. combative, p = 0.1876; aggressive vs. non-aggressive, p = 0.0853). Due to this similarity between the different populations, it would be difficult to establish a threshold in testosterone concentration at which non-combative or aggressive behaviour can be discriminated. After performing the analysis of ROC curves for testosterone, an area under the curve of 0.5794 (p = 0.29) was obtained for the animals classified as non-combative and combative, and an area under the curve of 0.6698 (p = 0.04) was calculated for the animals classified as aggressive and non-aggressive (Figure 10B). Because the area under the curve obtained in both cases was significantly distant from 1, it was not considered appropriate to use testosterone as an indicator of aggressive behaviour; therefore, a threshold value was not determined from the ROC curve analysis.
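Returning to the serotonin and dopamine cut-offs summarised in Table 2, the classification rule and the computation of sensitivity and specificity for a given cut-off could look like the sketch below (hypothetical data; only the 708.5 ng/mL and 12.24 ng/mL thresholds are taken from the text).

import numpy as np

CUTOFFS = {"serotonin": 708.5, "dopamine": 12.24}  # ng/mL, from Table 2

def sensitivity_specificity(concentration, is_combative, cutoff):
    """Sensitivity/specificity of the rule 'combative if concentration < cutoff'."""
    predicted_combative = concentration < cutoff
    tp = np.sum(predicted_combative & is_combative)
    fn = np.sum(~predicted_combative & is_combative)
    tn = np.sum(~predicted_combative & ~is_combative)
    fp = np.sum(predicted_combative & ~is_combative)
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical example for serotonin.
serotonin = np.array([420.0, 530.0, 610.0, 700.0, 760.0, 820.0, 500.0, 900.0])
is_combative = np.array([True, True, True, True, False, False, True, False])
sens, spec = sensitivity_specificity(serotonin, is_combative, CUTOFFS["serotonin"])
print(f"sensitivity = {sens:.2%}, specificity = {spec:.2%}")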
Relationship between Serotonin, Dopamine and Testosterone Concentrations in Calves during Shoeing and Aggressive Behaviour during Their Subsequent Fights
The calves studied at the time of shoeing were followed up to evaluate their aggressive behaviour in the arena during their fights. Thus, we verified whether the concentrations of serotonin, dopamine and testosterone measured in the blood during the shoeing of these animals remained constant until the moment of their fights and whether they could be related to the manifestation of aggressive behaviour or not during their subsequent fights.
To estimate the expected aggressiveness score in calves, the linear regression equations and cut-off points previously calculated for each hormone in bulls were considered. Assuming these two results, the expected score was obtained by interpolating the calf serum hormone concentration in the linear regression equation.
Once the expected aggressive behaviour scores of calves were calculated, they were correlated with the hormonal values analysed in the serum at the time of shoeing (Figure 10). A clear negative correlation was obtained between serotonin and dopamine serum concentrations and the expected aggressiveness scores, with Spearman coefficients of −0.6653 and −0.8915, respectively. This indicates that the higher the serum serotonin and dopamine concentrations, the less aggressive the expected behaviour (p < 0.05). Contrarily, a weak positive correlation was obtained between serum testosterone concentrations and the aggressiveness scores, with a Spearman coefficient of 0.3675, revealing a tendency towards more aggressive behaviour during the bullfight when higher serum testosterone concentrations were found (p = 0.2608).
Comparing the expected and observed aggressiveness scores with the results of the χ2 statistical analyses for serotonin and dopamine (Figure 11), values of p = 0.8035 and p = 0.8352, respectively, were found, indicating no significant differences between the expected and the observed scores. Thus, the value expected at the time of shoeing could be similar to that observed during the fight. Regarding the comparison of expected and observed scores for testosterone, an analysis could not be performed due to the lack of variability in the expected population.
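The way an expected score can be derived for a calf from the bulls' regression equations, and then compared with the later observed score, is illustrated schematically below (Python with SciPy; only the regression coefficients come from the text, the calf data are hypothetical, and the χ2 comparison is shown here in a simplified, count-based form that may differ from the exact construction used in the study).

import numpy as np
from scipy.stats import chisquare

# Regression equations reported for bulls (score as a function of ng/mL).
EQUATIONS = {
    "serotonin": lambda x: -0.002262 * x + 3.851,
    "dopamine": lambda x: -0.003145 * x + 3.65,
}

# Hypothetical calf serotonin concentrations measured at shoeing (ng/mL)
# and the scores later observed during their fights.
calf_serotonin = np.array([450.0, 600.0, 750.0, 820.0, 500.0])
observed_scores = np.array([3.0, 2.5, 2.0, 1.5, 3.5])

expected_scores = EQUATIONS["serotonin"](calf_serotonin)

# Simplified chi-square comparison of expected vs. observed score categories
# (here: counts of combative animals, score >= 2.5, in each series).
observed_counts = [np.sum(observed_scores >= 2.5), np.sum(observed_scores < 2.5)]
expected_counts = [np.sum(expected_scores >= 2.5), np.sum(expected_scores < 2.5)]
stat, p = chisquare(f_obs=observed_counts, f_exp=expected_counts)
print("expected scores:", np.round(expected_scores, 2))
print(f"chi-square = {stat:.3f}, p = {p:.3f}")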
Discussion
Although there are numerous scales that record the intensity of violent responses in humans [16,33], most of them refer to general aggression. Studies measuring different types of aggression in laboratory animals are also common [34]. However, behavioural assessment is complicated by not reflecting a unitary and clear motivational induction, since aggressive acts can express highly different feelings (anger, attack, defence, predation), triggered by different incitements (impulsivity or premeditation) and by complex factors (genetics or environments unfamiliar to each subject) [9,35,36].
In addition to the described qualitative components, aggression has a complex physiological component in which different mechanisms and neurotransmitters play roles in promoting aggressive behaviours that vary among species [37]. Animal species such as the fighting bull have been described to be aggressive animals by nature [38], although this peculiarity of the fighting bull has been poorly studied. Therefore, this study aims to determine the physiological component that underlies the aggressiveness of fighting bulls in an observational manner, analysing their behaviour in fights and determining its correlation with the hormone levels related with aggressiveness in calves during their fights.
Bullfighting is, moreover, an activity with unique characteristics that requires different characterisation due to the aggressiveness of the fighting bull. In this regard, Gaudioso et al. (1993) designed a table to record the bravery of bulls during fights [39]. Since the design of this behaviour table, few studies have developed other models to measure bull behaviour during fights.
The bravery of a bull is a complex term to define objectively, and often, breeders do not agree on the characterisation of this bravery. Therefore, Gaudioso's table is complex, with numerous parameters to be considered. The study carried out in fighting bulls by Calvo (2010) reflects this difficulty in defining the concept of bravery in an objective manner and shows the numerous definitions that different studies have presented for this concept [40]. One of the most widely used definitions of bravery for fighting bulls is the one that considers bravery as the bull's capacity to fight until death [41].
Therefore, in this study, we designed an alternative observational table that allowed us to focus our attention in a simple way on the aggressive behaviour of bulls.In the observed distribution of marks obtained during fights by the bulls analysed, the majority of the bulls presented marks between 3.5 and 4, with the most frequent mark being 4 and with a mean of 2.55.The clearly non-aggressive animals (grade = 1) represented 5.96% of the total animals analysed, whereas the lowest percentage represented corresponded to the clearly aggressive bulls (grades between 4.5 and 5). Since the observational measurements provided qualitative data of bull aggressiveness, we proceeded to assess whether these data correlated with the serum concentrations of hormones involved in aggressive behaviour, namely serotonin, dopamine and testosterone. The involvement of the neurotransmitter serotonin in aggression inhibition, body temperature regulation, mood, sleep, sexuality and appetite has been demonstrated [42].Serotonergic hypoactivity is related to antecedents of aggressive behaviour in humans and non-human primates [43][44][45], as well as in other animals [10,20].Serotonin is considered to inhibit most forms of aggression, and predominantly that of impulsive (explosive and uncontrollable) characteristics rather than premeditated ones.Thus, serotonin deficiency, rather than increasing aggression itself, would hinder the control of impulsivity [9]. In this study, the serum serotonin concentrations in the different groups studied (calves, bulls and controls) did not vary significantly among them, although we did observe high intragroup variability. The dopaminergic system is also involved in behavioural activation, motivated behaviour and reward processing [46][47][48].It also plays an active role in modulating aggressive behaviours.In animal studies, hyperactivity in the dopamine system is associated with increased impulsive aggression [49,50].In humans, the dopaminergic system has been linked to the recognition and experience of aggression.There is evidence that impulsive behaviour can be enhanced by an increase in dopaminergic function [51,52]. Based on the results obtained for serum dopamine concentrations, the serum dopamine levels in bulls during a fight were higher, but not significantly higher, than those of the control group, indicating that during a fight, the dopaminergic system is activated.Similarly, different studies on aggressive behaviours in rodents showed that elevated dopamine levels were observed before, during and after aggressive fights [53,54]. The serotonergic system has strong anatomical and functional interactions with the dopaminergic system [55].More specifically, a reciprocal interaction between these two systems has been proposed [17].Lunge-and withdrawal-related behaviours are thought to be determined by the balance between dopamine and serotonin activity [11], with dopamine increasing combative behaviours [56] and serotonin increasing more aggressive behaviours; the withdrawal of both leads to uncontrollable aggressive behaviours [17]. 
According to a previous study, there is an inverse association between dopamine and serotonin levels during aggression [57].In this study, during fights, serotonin levels slightly decreased, while dopamine levels increased.This has also been observed in another study, which demonstrated that the serotonin levels in rats decreased by 80% from the basal level during and after fights, while the dopamine levels increased by 120% after fights [25]. Since the selection of fighting bulls in each herd is carried out according to the different criteria of each breeder [58], we wanted to study the tendency to show aggressive behaviours not only at a general population level but also separately considering the different breeds to which the herds studied belong. There are encastes that traditionally show more manageable or more aggressive behaviours, depending on the cattle selection criteria (Rodríguez Montesinos, 2002).Although all breeders share the search for brave bulls, some put more effort into obtaining a more "docile" bull [2,41], whereas others emphasise obtaining a more "aggressive" bull [59]. When we analysed the serum serotonin and dopamine concentrations of the bulls according to their breed, significant differences in the concentrations of both hormones were found among the encastes.Those with the highest serum serotonin concentrations and the lowest dopamine concentrations were those that have traditionally been considered "more docile" encastes, such as Murube, Domecq and Núñez. Overall, the differences in serotonin and dopamine concentrations found during the fights and among the encastes, and the stability of these hormonal levels throughout the lives of these animals, suggest that the serum serotonin and dopamine concentrations are good indicators of the level of aggressiveness manifested by bulls during fights.These results can be useful for breeders when choosing which encaste to breed. To demonstrate the relationship between both hormones and aggressiveness, a correlation between serotonin and dopamine levels was performed, along with scoring the aggressiveness of the animals.We observed a strong negative correlation of both serotonin and dopamine concentrations with the obtained observational scores.Other studies reported similar results in other animal species [60,61]. Once this was established, the distribution of the aggressiveness scores was studied.The grouping of aggressiveness scores in the middle zone of the scale made it difficult to choose a cut-off point that would simplify the interpretation of the behavioural results and allow us to separate the animals into two groups: aggressive and non-aggressive. Within the category of "aggressive" were included those animals that, during the course of a fight, clearly showed aggressive behaviour.This aggressiveness is easily perceived by the parties involved in the bullfight (the bullfighter, public and stockbreeder).The remaining bulls were included in the category of "non-aggressive" bulls. However, with this first classification, the "non-aggressive" group contained animals that, although they did not show such manifested aggressive behaviour, showed certain combativeness and aggression.This type of aggressiveness is not particularly evident to the less trained observer, although it is a desirable selection trait for the breeder.Thus, bulls were divided into "combative" and "non-combative" groups. 
This classification allows us to distinguish between "non-aggressive and non-combative" bulls, "combative and aggressive" bulls and "combative and non-aggressive" bulls.Noncombative bulls are those that do not show any combative or aggressive characteristics during a fight.In contrast, aggressive and combative bulls are those that are difficult to handle and complicated in a fight.Finally, there are combative and non-aggressive bulls, which are those most desired by bullfighters, breeders and the public because they show courage and bravery but are more manageable, allowing for a more aesthetic bullfight. Once this classification of the bulls based on their aggressive behaviour during the fights was established, the determination of serotonin and dopamine levels as good parameters for discriminating combative and non-combative bulls, as well as aggressive and non-aggressive bulls, was performed.Both hormones allow the classification of bulls into aggressive or non-aggressive groups and into combative or non-combative groups.However, considering the differences between both classifications, for both hormones, serotonin and dopamine concentrations are more suitable for classifying bulls into combative and non-combative groups. The interactions between the dopamine and serotonin systems provide a framework for understanding the mechanisms underlying aggression.Considering the functional regulation of serotonin over the dopamine system, impaired serotonergic function may result in hyperactivity of the dopamine system and the emergence of impulsive aggressive behaviours [11].In addition, this interaction between dopamine and serotonin also has a genetic component, where dopamine activates impulsivity in rats that was increased by serotonin depletion or deletion of the serotonin-1B receptor gene [52,62].Therefore, these interactions between dopamine and serotonin systems underlie aggressive behaviour that must be verified behaviourally.There are some studies in humans that correlate low levels of 5-HIAA (serotonin metabolite) and high levels of homovanillic acid (dopamine metabolite) with high scores in interpersonal and behavioural elements of psychopathy [63]. According to these results, that both dopamine and serotonin concentrations highly correlate with aggressiveness scores, both can be considered good parameters to classify bulls into combative or non-combative groups.However, it is difficult to classify an animal without knowing the threshold between being non-combative and combative.Therefore, cut-off values for serotonin and dopamine were calculated to enable a good classification.These values make it easy to classify bulls depending on serotonin and dopamine concentrations. Predicting the behaviour of an animal based on a hormonal study and classifying it as combative or non-combative is highly complex.Therefore, observing the behaviour of the animal is necessary to establish this classification.In fact, it is easy to have animals previously classified as combative that, due to multiple factors, especially those related to the fight, are modified in their behaviour and therefore present a lower score of aggressiveness than what would correspond to them.However, this model is good when classifying non-combative bulls since it is more difficult to find an individual that, previously classified as non-aggressive due to its serotonin and dopamine concentrations, shows extremely aggressive behaviour during a fight. 
Another hormone that is related to aggressive behaviour is testosterone.This androgen modulates sexual and aggressive behaviour, increasing the likelihood of aggressive behavioural responses in the presence of specific stimuli [64,65]. In the present study, testosterone levels significantly increased during fights in bulls compared to the control group, suggesting that testosterone is involved in the development of aggressive behaviour.In relation to this, studies with human participants also demonstrated a positive correlation between testosterone levels and the development of aggressive behaviours [66].Although our results also found a positive correlation between testosterone levels and the aggressiveness score, this correlation was not statistically significant; therefore, our findings do not corroborate that testosterone levels are correlated with aggressiveness during fights in fighting bulls, in contrast to previous findings [64,65].The increase found in testosterone levels during fights could be due to the physical exercise of the bulls and the intensity of the exercise, according to other reports [67]. When evaluating whether testosterone is a good parameter to predict aggressive behaviour, the testosterone concentrations were not useful in discriminating among bulls according to their behaviour.The area under the ROC curve was significantly far from 1, both in the case of bulls divided into aggressive and non-aggressive groups and those classified into combative and non-combative groups.Therefore, it was not possible to find a cut-off point for testosterone concentration that would allow us to classify bulls with good sensitivity and specificity values. So far, no studies have investigated the physiological parameters involved in the aggressive behaviour of fighting bulls.Taking the results of this study together, serotonin and dopamine concentrations may be good parameters to detect animals with predictable non-combative behaviour, which would help breeders in the selection of fighting bulls.However, testosterone level does not seem to be an adequate parameter to detect more or less aggressive behaviour in bulls during their fights.Nevertheless, serum testosterone concentration can be used as an indicator of adaptation to physical exercise [67]. One goal of this study was the follow-up of the animals from the time of the shoeing of calves, when the animal is not yet sexually mature, until the time they reached sexual maturity and were bulls destined for the fight.This was useful to demonstrate that serotonin and dopamine concentrations remain stable throughout the animal's life.In the present study, both serotonin and dopamine concentrations varied slightly between calves and control bulls, whereas other authors have reported a moderate decrease with age [68,69]. However, the serum testosterone concentration in the sexually mature control bull group was significantly lower than that in the calf group.As the testosterone concentration varies throughout an animal's life, specifically at the stage of sexual maturation [70], the high levels obtained in the calf group may be due to the acute stress to which calves are subjected before and during shoeing [67]. 
Considering that the serotonin and dopamine concentrations did not present significant variations throughout the bulls' lives and that both hormones were good aggressiveness parameters, the measurement of serotonin and dopamine levels in calves may allow us to calculate the expected aggressiveness score. Thus, using linear regression equations for serotonin and dopamine and interpolating the serum serotonin and dopamine concentrations measured in calves at the time of shoeing resulted in an expected aggressive behaviour score for each calf. When comparing the expected score with that recorded in the bullfighting of each animal, 4 or 5 years later, we found that there were no statistically significant differences between the expected and observed scores, confirming the validity of serum serotonin and dopamine concentrations as indicators of the tendency to manifest aggressive behaviour during a fight.
It is necessary to emphasise the importance that these results can have as an accessory tool to provide guidance in the selection of fighting bulls, for the selection of both the cattle used in fights and those used for reproduction. This approach is useful in the detection of those animals that, by presenting serum serotonin and dopamine concentrations above the established thresholds, have a tendency to manifest non-aggressive and non-combative behaviours.
Conclusions
Serum serotonin and dopamine concentrations are good indicators of a bull's tendency to manifest aggressive behaviour during fights. Both hormones are markers that can be used in the selection of fighting bulls and are especially useful in the identification of those animals with a predisposition to be non-combative. However, serum testosterone concentration is not a good indicator of aggressiveness manifested by a bull during a fight.
The serum concentrations of serotonin and dopamine measured in calves during their shoeing allowed us to assign an expected behavioural score to each animal, which was confirmed 5 years later during the fights.
Figure 1. Distribution of aggressive behaviour scores in the different encastes studied, with different superscripts denoting significant differences (p < 0.05).
Figure 2. (A) Serum serotonin concentrations of the different groups studied. (B) Serum serotonin concentrations of the different encastes studied, with different superscripts denoting significant differences (p < 0.05). The serum serotonin concentrations of animals belonging to the different encastes differed significantly (Figure 2B). The lowest serotonin values were found in the animals belonging to the Urcola, Contreras and Albaserrada encastes. The encastes with the highest serum serotonin concentrations were Murube, Núñez and Domecq.
Figure 3. (A) Serum dopamine concentrations in the different groups studied. (B) Concentrations of dopamine in the serum of the different encastes studied, with different superscripts denoting significant differences (p < 0.05).
Figure 4. (A) Serum testosterone concentrations in the different groups studied. (B) Serum testosterone concentrations of the different encastes studied, with different superscripts denoting significant differences (p < 0.05).
Figure 5. Correlations between serum serotonin (A), dopamine (B) and testosterone (C) concentrations analysed in bulls and the corresponding aggressiveness scores obtained during the fights.
Figure 6. Classification of bulls according to their aggressive behaviour scores during fights.
Figure 7. Serum serotonin (A) and dopamine (B) concentrations in the bulls divided according to their aggressive behaviour scores.
Figure 9. (A) Serum testosterone concentrations in the bulls classified according to their aggressive behaviour scores. (B) ROC curves for testosterone-aggressive behaviour.
Figure 10. Correlation between serum serotonin (A), dopamine (B) and testosterone (C) concentrations analysed in calves and the corresponding previously calculated expected aggressiveness scores.
Figure 11. Expected and observed aggressiveness scores in calves as a function of serum serotonin (A) and dopamine (B) concentrations.
Table 1. Distribution of aggressiveness scores of the total population studied during fights.
Table 2. Cut-off point values for serotonin and dopamine and the corresponding specificity and sensitivity percentages for each hormone.
2024-04-24T15:06:33.538Z
2024-04-01T00:00:00.000
{ "year": 2024, "sha1": "65a3ff272b2a4156afc369e8e95b72fadf2ceef4", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2306-7381/11/4/182/pdf?version=1713774762", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "bdb9c0aaffa346cd7e2ddb719097c7b6b721a3e8", "s2fieldsofstudy": [ "Agricultural and Food Sciences" ], "extfieldsofstudy": [ "Medicine" ] }
239422558
pes2o/s2orc
v3-fos-license
Performance Pressure of Listed Companies and Environmental Information Disclosure: An Empirical Research on Chinese Enterprise Groups
As a non-productive activity, environmental information disclosure is not only a prerequisite for environmental governance and sustainable development of listed companies, but also an effective means for executives to relieve pressure on business performance. Taking Shanghai and Shenzhen A-share listed companies as research samples, the authors have carried out an empirical study to test the relationship between subsidiary performance pressure and environmental information disclosure in enterprise groups, and examine the moderating effect of the parent company's shareholding on the main effect, as well as the differentiation of the moderating effect between high and low degrees of executives' synergy allocation in parent-subsidiary corporations. The results show that: firstly, the performance pressure of listed companies has a positive impact on environmental information disclosure; secondly, the parent company's shareholding will weaken the positive impact of the listed company's performance pressure on environmental information disclosure, and the higher the parent company's shareholding ratio, the weaker this positive impact; thirdly, when the degree of executives' synergy allocation in parent-subsidiary corporations is low, the negative moderating effect of the parent's shareholding ratio is stronger.
Introduction
Ecological and environmental issues have always been one of the hotspots in China's economic development. From the 18th National Congress of the Communist Party of China (CPC) to the 19th National Congress of the Communist Party of China (CPC), ecological civilization has been proposed as a millennium plan for the sustainable development of the Chinese nation. The public's understanding of the environmental protection concept is deepening day by day, and their expectation of environmental governance by enterprises is also increasing [1][2]. Many studies have shown that, as a non-productive method, environmental information disclosure can convey to the outside world the information that companies have the courage to assume social responsibilities, increase investor favorability [3], shape a good corporate image [4] and enhance corporate value [5][6]. In this context, full disclosure of environmental information has become a prerequisite and an important way for listed companies' environmental governance.
In recent years, academic research on corporate environmental information disclosure has been increasing, and fruitful results have also been achieved. The analysis of the influencing factors of environmental information disclosure mainly focuses on company value [7][8], environmental performance [9][10], profitability [11][12], etc. Clarkson et al. (2013) believe that there is a positive correlation between the quality of environmental information disclosure and the overall value of the company [13]; Meng et al. (2014) found that companies with better and with poorer environmental performance both tend to disclose more environmental information, the difference being that companies with good environmental performance disclose more substantive information [14]; Nor et al. (2016) argue that corporate profitability is positively related to environmental information disclosure: the stronger the corporate profitability, the higher the level of environmental information disclosure [15]. In short, the existing literature mainly expands and deepens the exploration of the relationship between different factors and environmental information disclosure from the perspectives of legitimacy theory, stakeholder theory, and voluntary information disclosure theory, which enriches the research system of antecedent variables of environmental information disclosure.
Based on the existing research, the authors analyze the relationship between the performance pressure and the disclosure of environmental information of listed companies within the framework of the group, and examine the moderating effect of the parent company's shareholding, as well as the difference in the moderating effect between situations of high and low degrees of executives' synergy allocation of parent and subsidiary corporations. The research contributions of this paper are as follows. First, in recent years, the trend of Chinese enterprises to implement group management has become increasingly obvious, and group management has gradually become a dominant force in the development of the national economy. This study investigates the regularity of environmental information disclosure when the subsidiary company's operating performance is poor, which can further clarify the governance decision-making tendency and behavior logic of the subsidiary company within the parent-subsidiary governance framework, and provide a reference for the optimization of the governance structure and system design of parent-subsidiary companies in practice. Secondly, this study further enriches the research on the antecedent variables of environmental information disclosure of listed companies, explores the motivation and rationality of environmental information disclosure from the perspective of how the senior management of subsidiary companies alleviate performance pressure, and develops a deeper understanding of the coping means of senior management when facing performance pressure, which can provide reference and support for relevant theories.
Theoretical Analysis and Hypothesis Development
The impact mechanism of performance pressure of listed companies on environmental information disclosure. According to pressure theory, within the enterprise, the management will face the performance pressure brought by the growth speed of the enterprise. When the growth pressure faced by the management is small, the management will have more power to make the operating decisions of the enterprise. When the performance pressure faced by the management is large, the power of their efforts will be reduced, and they may choose to achieve the goal of rapid growth of the enterprise in a non-productive way [16]. The principal-agent theory holds that there is information asymmetry and interest inconsistency between the principal and the agent of an enterprise. In such a situation, the executives of listed companies, as agents, may make decisions that are beneficial to their own interests based on their own information advantages [17]. Performance pressure reflects how difficult it is for listed companies to achieve performance goals [18]. Therefore, when the operating status of a listed company is highly related to the personal economic benefits and job stability of the executives, performance pressure will affect the ideas and behavior of the executives. That is, the executives of listed companies will first perceive and evaluate whether the performance goals can be achieved, and will take a series of more effective and beneficial actions after weighing the pros and cons [19].
If the performance pressure of a listed company is small, that is, the realization of the expected goal is relatively simple for the executives, and the performance pressure is not enough to affect the daily operation of the company, the executives can achieve the expected goal without too much effort. The executives of the listed company will then tend to improve operating efficiency by improving internal governance mechanisms and reducing short-term expenditures, under the expectations of all parties, to achieve the expected goals. However, if the executives of listed companies perceive that it is very difficult to achieve the expected goal, that the expected goal is far from the current state, and that they cannot improve performance through the daily management of the business in the short term to achieve the goal and relieve the pressure, they will pay less attention to daily operation and management behavior; that is, they will not devote too much effort to it, but will be more inclined to choose non-productive activities to transfer the pressure of assessment and evaluation caused by poor performance [20]. In view of the importance of environmental protection and governance in current corporate development, and given that executives have greater discretion and decision-making power over environmental information disclosure [21], they can control the amount, details, and intensity of environmental information disclosure. Therefore, environmental information disclosure provides an opportunity for listed company executives to alleviate the pressure of performance. Executives of listed companies may regard environmental information disclosure as an outlet for the pressure on the company's operating performance, and under the circumstances of greater performance pressure may resort to environmental information disclosure in order to prevent the company from suffering a decline in value and a loss of reputation. Based on this, we suggest the following hypothesis:
Hypothesis 1. The performance pressure of listed companies has a positive impact on the level of environmental information disclosure.
The moderating effect of parent company shareholding on the relationship between listed companies' performance pressure and environmental information disclosure. In the framework of the group, as the controlling shareholder of the listed company, the parent company has the means and motivation to supervise the listed company's business activities [22][23].
When the parent company's shareholding ratio is relatively high, the effect of converging interests with the listed company becomes more obvious, and the awareness and motivation of the supervision of the listed company are also stronger, and compared to other small and medium shareholders' "voting with their feet" and "freeriding" psychology [24][25], the parent company has a deeper understanding of the operating conditions of the listed company, and pays more attention to the longterm interests of the listed company [26]. It also has a stronger ability to identify and control the strategic decisions of listed company executives. The degree of information asymmetry between parent and subsidiary companies is weakened. The operation decisions and implementation goals of listed company are easier to identify and known by the parent company [27]. At this time, the effect of non-productive transfer of pressure by executives of listed companies will be reduced, and the difficulty in achieving the expected effect will lead to the weakening of the motivation of executives to transfer pressure, instead, they will focus more on how to use productive activities to relieve performance pressure. Therefore, based on the above discussion, we propose the following hypothesis: Hypothesis 2. There is a moderating effect of parent company shareholding on the relationship between listed companies' performance pressure and environmental information disclosure. The specific performance is: the higher the parent company shareholding ratio, the weaker the positive impact of listed company performance pressure on environmental information disclosure. The influence of executives' synergy allocation level in parent-subsidiary corporations on the parent company shareholding. Executives' synergy allocation level in parentsubsidiary corporations is a governance mechanism for the parent company to coordinate and centrally configure the listed company executives through concurrent senior management, and it is also an effective way for the parent company to control and interfere with the management decisions of listed companies [28]. The low level of the executives' synergy allocation level means that there are fewer concurrent executives from the parent company in listed companies. Due to the limitation of dual duties, it is difficult to identify and supervise the operating conditions of listed companies in a timely and effective manner [29][30], and it is even more difficult to accurately identify the motivation of the transfer of pressure by the executives of listed companies and the information asymmetry between parent and listed companies is relatively high [31]. At this time, if the shareholding ratio of the parent company is high, the parent company will improve the supervision consciousness and motivation of listed companies in view of the concern about ownership income, especially when the parent company knows that it has a small number of concurrent executives in listed companies, it will also improve the recognition ability of executives' decision-making of listed companies for the sake of stabilizing its own control rights and improving the earnings value of listed companies. Correspondingly, the executives of listed companies have been intensified under the supervision of the parent company, and the effect of transferring performance pressure through the non-productive activity of environmental information disclosure is not good. 
On the contrary, it will improve the standardization of decision-making behavior and tend to relieve performance pressure through formal productive behavior. Based on the above analysis, the following hypothesis is put forward:
Hypothesis 3. Executives' synergy allocation level in parent-subsidiary corporations will influence the moderating effect of the parent company's shareholding, specifically as follows: under the situation of a low level of executives' synergy of parent and subsidiary companies, the stronger the weakening effect of the parent company's shareholding on the positive correlation between the performance pressure of listed companies and environmental information disclosure.
The overall study model is shown in Fig. 1.
Methodology
In this paper, the quantitative regression analysis method is adopted, and the data are analyzed using Stata 15.1 software. This paper takes the listed manufacturing companies in the Shanghai and Shenzhen stock markets as the initial sample, and further selects them through the following steps: first, they belong to enterprise groups; second, during the sample observation period 2014-2017, they did not experience major restructuring such as changes in controlling shareholders; third, ST (Special Treatment) enterprises and sample enterprises with serious missing data are eliminated. Finally, 3092 sample observations were obtained, covering 773 listed manufacturing companies in China from 2014 to 2017; the other relevant data used in the empirical analysis of this paper mainly came from the CSMAR database (an economic and financial database developed for the needs of academic research).
Variable Definitions
(1) Environmental Information Disclosure (EDI). This paper draws on the research of Wiseman (1982) and assigns values to environmental information disclosure indicators [32]. The value is evaluated according to three aspects: "whether the environmental information is disclosed in accordance with the GRI Sustainability Reporting Guidelines", "whether to disclose environmental and sustainable development information", and "whether to disclose social responsibility system construction and improvement measures"; each condition that is met is recorded as "1", otherwise as "0".
(2) Performance pressure (PP). This article refers to the method of Gomes (1998) and uses a reverse index, calculated by comparing the company's performance this year with a weighted combination of its historical performance and the industry average [33], where P_i is the return on total assets (ROA) of company i this year, HA_i is the historical performance of company i, SA_i is the average performance of the industry to which company i belongs this year, and α is the weight. Based on previous research [34], α is assigned a value of 0.5. The smaller the value, the more serious the performance pressure the company is facing.
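The exact formula is not reproduced in the extracted text; a minimal sketch consistent with the verbal definition (current-year ROA compared against a weighted benchmark of the firm's historical performance and the industry average, with α = 0.5, and smaller values indicating heavier pressure) could look as follows. The functional form, column names and sample values are assumptions for illustration only.

import pandas as pd

ALPHA = 0.5  # weight on historical performance, as assigned in the text

def performance_pressure(roa: float, hist_perf: float, industry_avg: float,
                         alpha: float = ALPHA) -> float:
    """Reverse indicator of performance pressure (assumed functional form).

    Current-year ROA is compared with a weighted benchmark of the firm's
    historical performance and the industry average; smaller (more negative)
    values indicate more serious performance pressure.
    """
    benchmark = alpha * hist_perf + (1.0 - alpha) * industry_avg
    return roa - benchmark

# Hypothetical firm-year observations.
df = pd.DataFrame({
    "roa": [0.04, 0.08, 0.01],
    "hist_perf": [0.06, 0.07, 0.05],
    "industry_avg": [0.05, 0.06, 0.05],
})
df["pp"] = df.apply(
    lambda r: performance_pressure(r.roa, r.hist_perf, r.industry_avg), axis=1)
print(df)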
(3) The parent company shareholding ratio (EC). The parent company's shareholding ratio refers to previous research [35] and is measured by the largest shareholder's shareholding ratio.
(4) Executives' synergy allocation of parent and subsidiary corporations (Synergy). Executives' synergy allocation of parent and subsidiary corporations reflects the governance mechanism adopted in corporate group operation to achieve unified coordination and centralized configuration of the senior management within the group. Based on previous research [36] and the operating environment of China, this paper uses the ratio of the number of subsidiary executives concurrently serving as executives in the parent company to the total number of subsidiary executives as the measure, and divides the sample into two groups according to the median of the calculated ratio. The group above the median is considered to have a high degree of synergy allocation, while the group below the median is considered to have a low degree of synergy allocation.
(5) Control variables. Combined with the existing research, this paper also selects the following factors that reflect the characteristics of listed companies as control variables: company size, leadership structure of the board of directors, environmental protection investment in the province, audit fees, shareholding ratio of the board of directors, and remuneration level of directors [37][38]. (Because of differences in data magnitude, the logarithm of the original data of the indicator is taken in the empirical analysis. Board leadership structure is measured by the part-time situation of chairman and CEO: when the two positions are held by one person, the indicator is marked as "1", otherwise it is marked as "0".) Definition and measurement of variables are shown in Table 1.
Table 1. Definition and measurement of variables.
Environmental information disclosure (EDI): Whether environmental information is disclosed according to the GRI Guidelines for Sustainable Development Report, whether environmental and sustainable development information is disclosed, and whether the construction and improvement measures of the social responsibility system are disclosed. Each condition met is recorded as 1, otherwise 0, finally forming 0-3 discrete data.
Performance pressure (PP): The performance of the listed company this year is subtracted from the average performance of the industry and the historical performance after assignment of weights.
Shareholding ratio of parent company (EC): The number of shares held by the parent company as a proportion of the total equity of the listed company.
Executives' synergy allocation of parent and subsidiary corporations (Synergy): Measured by the ratio of the number of subsidiary executives concurrently serving as executives in the parent company to the total number of subsidiary executives, divided into two groups according to the median of the calculated ratio.
Company size (Size): The natural logarithm of the total assets of the listed company at the end of the year.
Board leadership structure (BLS): Part-time situation of chairman and CEO; when the two positions are held by one person, the indicator is marked as "1", otherwise as "0".
Environmental protection investment in the province (Province): Measured according to the amount of environmental protection expenditure of each province.
Board independence (IND): The number of shares held by the parent company accounts for the proportion of the total equity of the listed company.
Audit fee (Audit): The audit fee divided by the total assets at the end of the period.
Board shareholding ratio (BS): The number of shares held by the board of directors as a proportion of the total share capital of the listed company.
Remuneration levels of directors, supervisors, and executives (Payment): The total remuneration of the board of directors, the board of supervisors and the senior management divided by the operating cost of the listed company that year.
Year (2015) (Y1): Recorded as 1 if the observation year is 2015, otherwise 0.
Year (2016) (Y2): Recorded as 1 if the observation year is 2016, otherwise 0.
Year (2017) (Y3): Recorded as 1 if the observation year is 2017, otherwise 0.
Models
To investigate the hypotheses of this paper, the following multiple regression models are designed, in which Control is the control variable group, c is the intercept term, ε represents the random disturbance term, j is the number of each control variable, and b_j represents the regression coefficient of each control variable. Model 1 is a regression model of the performance pressure of listed companies on environmental information disclosure, which can test Hypothesis H1. On the basis of Model 1, Model 2 adds the interaction term of performance pressure and the parent company's shareholding ratio, which is used to analyze the moderating effect of the parent company's shareholding on the impact of the listed company's performance pressure on environmental information disclosure, and can verify Hypothesis H2. Model 3 and Model 4 respectively test the moderating effect of the parent company's shareholding on the main effect in the context of high and low executives' synergy allocation of parent and subsidiary corporations; comparing the regression coefficients and significance of the interaction between the parent company's shareholding ratio and performance pressure in the two models allows us to judge the influence of executives' synergy allocation of parent and subsidiary corporations on the parent company's shareholding and to verify Hypothesis H3.
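Since the explained variable is the 0-3 discrete disclosure score and the paper estimates ordered-logit (ologit) models, Models 1 and 2 could be specified roughly as in the sketch below (Python/statsmodels rather than Stata; the simulated data, the reduced control set and all variable names are illustrative assumptions, not the study's dataset or exact specification).

import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 400

# Simulated firm-year data (illustrative only; names loosely mirror Table 1).
df = pd.DataFrame({
    "pp": rng.normal(0.0, 0.05, n),    # performance pressure (reverse indicator)
    "ec": rng.uniform(0.1, 0.6, n),    # parent company's shareholding ratio
    "size": rng.normal(22.0, 1.5, n),  # ln(total assets)
})

# Simulated 0-3 disclosure score, loosely increasing with pressure (i.e. falling pp).
latent = -2.0 * df["pp"] + 0.1 * (df["size"] - 22.0) + rng.logistic(size=n)
df["edi"] = pd.cut(latent, bins=[-np.inf, -1.0, 0.5, 2.0, np.inf], labels=False)
df["edi"] = df["edi"].astype(pd.CategoricalDtype(categories=[0, 1, 2, 3], ordered=True))

# Model 1: EDI regressed on performance pressure plus controls.
m1 = OrderedModel(df["edi"], df[["pp", "size"]], distr="logit").fit(
    method="bfgs", disp=False)

# Model 2: add the parent shareholding ratio and its interaction with pp.
df["pp_x_ec"] = df["pp"] * df["ec"]
m2 = OrderedModel(df["edi"], df[["pp", "ec", "pp_x_ec", "size"]], distr="logit").fit(
    method="bfgs", disp=False)

print(m1.summary())
print(m2.summary())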
Data Analysis and Results Discussion
Descriptive Statistics
Table 2 reports the mean, median, standard deviation, minimum and maximum values of the main variables. The average value of environmental information disclosure is less than 0.5, the median is 0, and the standard deviation is large; it can be seen that, due to the lack of standardized and compulsory environmental information disclosure by listed companies, the level of disclosure varies among the samples, and the level of environmental information disclosure of some listed companies is low. There is a significant gap between the minimum and maximum performance pressure indicators, indicating that the performance pressures of different companies in the same industry are quite different. The average and median of the parent company's shareholding ratio are both about 0.3 and the standard deviation is small, reflecting that the gap in the shareholding ratio of the parent company is relatively small. The mean value of the dummy variable for the chairman and general manager holding posts concurrently is small, and the standard deviation is large, which shows that there are relatively few cases of combining the two positions in the sample companies. The large standard deviation of environmental protection investment indicates that there is a clear gap in the level of local environmental protection expenditure between provinces.
Multiple Regression Results
According to the models designed above, Stata 14.0 software is used to perform the regression analysis. The specific calculation results are shown in Table 3. The regression analysis result of model M1 shows that, after controlling for various factors that may affect environmental information disclosure, the regression coefficient of listed company performance pressure is negative (β = -0.094), which is significant at the 10% level.
Since performance pressure is measured by a reverse indicator, this shows that the performance pressure of listed companies has a positive impact on the level of environmental information disclosure. That is, the greater the performance pressure of listed companies, the more likely they are to disclose environmental information at a higher level. Thus, H1 is verified.
The analysis result of model M2 shows that the regression coefficient of the interaction term between the parent company's shareholding ratio and the listed company's performance pressure is positive (β = 0.137) and significant at the 5% level. This shows that the parent company's shareholding has a significant moderating role in the relationship between the performance pressure of listed companies and environmental information disclosure. Combining the signs of the main effect and the interaction term coefficient, it can be judged that the higher the parent company's shareholding ratio, the weaker the positive impact of listed companies' performance pressure on environmental information disclosure. Hypothesis H2 is verified.
From the analysis results of models M3 and M4, it can be seen that in the case of a low degree of executives' synergy allocation of parent and subsidiary corporations, the regression coefficient of the interaction term between the parent company's shareholding ratio and the listed company's performance pressure is positive (β = 0.343) and significant at the 1% level, indicating that in the context of a lower degree of executives' synergy allocation of parent and subsidiary corporations, the parent company's shareholding ratio has a stronger weakening effect on the main effect. Hypothesis H3 is thus verified. In the case of a high degree of executives' synergy allocation of parent and subsidiary corporations, although the regression coefficient of the interaction term between the parent company's shareholding ratio and the listed company's performance pressure is positive (β = 0.001), it does not pass the significance test. The reason may be that, in this case, concurrently serving executives play a strong role in the decision-making and supervision of the listed company's executives and can serve as an information channel to feed back the operating status of the listed company to the parent company in a timely manner, so the parent company does not need to pay more attention to the affairs of the listed company.
Note: Because the explained variable, environmental information disclosure, is a discrete data variable, the ologit regression analysis method is adopted; ***, **, * indicate significance levels of 1%, 5%, and 10%, respectively, with z-values in parentheses. As the panel ologit model cannot display R square, the Akaike Information Criterion (AIC) is adopted to evaluate the fitting degree of the model.
Random Sample
Considering that the sample size may affect the research results, this paper randomly selects 80% of the samples for testing. The testing results are shown in Table 4. In Model 1, the regression coefficient between performance pressure and environmental information disclosure is -0.100, which is significantly negative at the 10% level, proving Hypothesis 1. The results of Model 2 show that the regression coefficient of the interaction term between the parent's shareholding ratio and performance pressure is 0.166, which is significantly positive at the 10% level. Hypothesis 2 is verified.
The results of Models 3 and 4 show that, in the context of a lower degree of executives' synergy allocation between parent and subsidiary corporations, the regression coefficient of the interaction between the parent company's shareholding ratio and performance pressure is 0.316 and significantly positive at the 5% level, so Hypothesis 3 is verified. These empirical results are consistent with the main regression conclusions, indicating that the research results are robust to random sub-sampling. Add Control Variables In addition, considering that the asset-liability ratio (Lev), which reflects the financial status of the enterprise, and the property rights (State-owned) of the listed companies may affect the robustness of the results, this paper adds these two control variables to test the robustness of the conclusions. As can be seen from Table 5, in Model 1 the regression coefficient between performance pressure and environmental information disclosure is -0.099 and significantly negative at the 10% level, which supports Hypothesis 1. The results of Model 2 show that the regression coefficient of the interaction term between the parent company's shareholding ratio and performance pressure is 0.133 and significantly positive at the 5% level, so Hypothesis 2 is verified. The results of Models 3 and 4 show that, in the context of a lower degree of executives' synergy allocation between parent and subsidiary corporations, the regression coefficient of the interaction term between the parent company's shareholding ratio and performance pressure is 0.348 and significantly positive at the 1% level, so Hypothesis 3 is also verified. This shows that the conclusions of this paper remain robust after considering the financial status and property rights of the enterprise. Research Conclusions Against the background of Chinese society paying increasing attention to environmental protection, this article explores, within the framework of enterprise groups, the attitudes of listed company executives towards environmental information disclosure when they face performance pressure. The empirical results show the following. Firstly, the performance pressure of listed companies has a positive impact on environmental information disclosure: the greater the performance pressure of listed companies, the stronger the motivation of executives to disclose environmental information in order to prevent the decline in performance from threatening personal income and job stability, and to meet the current public demands for environmental protection and governance. Secondly, parent company ownership has a moderating effect on the relationship between performance pressure and environmental information disclosure: the higher the shareholding ratio of the parent company, the weaker the positive impact of performance pressure on environmental information disclosure.
That is to say, the higher the parent company's shareholding ratio, the stronger its awareness of and motivation for supervising the listed company, and the more pronounced the convergence of interests. This strengthens the parent company's motivation and ability to supervise the executives of the listed company, pushes the listed company to face its performance pressure directly, and weakens the tendency to engage in non-productive activities. Thirdly, the sample was grouped by the degree of executives' synergy allocation between parent and subsidiary corporations to test how this arrangement differentiates the parent company's shareholding adjustment effect. The results show that executives' synergy allocation affects the moderating effect of the parent company's shareholding: in the context of a lower degree of executives' synergy allocation between parent and subsidiary corporations, the parent company's shareholding has a stronger effect in weakening the positive relationship between listed companies' performance pressure and environmental information disclosure. Theoretical Contribution First, this study follows most of the literature in how it defines and measures performance pressure, but it differs from existing work on the performance dilemma, which focuses on outcomes such as R&D propensity, investment level, workplace deception, and organizational unethical behavior. This research explains the response of listed companies to the performance dilemma from a pressure perspective and analyzes the decision-making tendencies of listed company executives based on their need to shift pressure, which supplements pressure theory and behavioral agency theory. Secondly, this research provides a more in-depth analysis and understanding of listed companies' motivations for environmental information disclosure. It shows that listed companies use environmental information disclosure to relieve pressure when facing performance pressure, which indirectly indicates that such disclosure serves as a non-productive method. From the perspective of listed company executives, strengthening environmental information disclosure also meets the concerns and demands of the public and other core stakeholders regarding the environmental governance of listed companies. Finally, taking the enterprise group, an intermediate organization between market and firm, as its setting, this study further discusses the role of the parent company's shareholding in identifying and supervising listed company executives' motivation to transfer pressure, and introduces the special governance element of executives' synergy allocation between parent and subsidiary corporations. This not only clarifies the internal logic linking listed company executives' performance pressure to environmental information disclosure, but also enriches the relevant literature in the field of enterprise group governance. Managerial Implications Firstly, the study clarifies that the executives of listed companies have an incentive to engage in unproductive actions when facing performance pressure and may use such actions to relieve that pressure.
In other words, if a listed company cannot achieve its expected performance goals in the short term and operating performance affects executives' personal income and job stability, executives under pressure will choose to deflect evaluation pressure through environmental information disclosure; this kind of avoidance-oriented decision-making is bound to be detrimental to the long-term development of the enterprise. In fact, no matter whether the pressure comes from the company itself or from external uncontrollable factors, the executives of listed companies should continuously improve their own management capabilities and quality and find ways to relieve performance pressure when they face it. For example, managers can make changes within the enterprise to improve the current situation [39], rather than using non-productive activities to shift assessment goals and relieve pressure, which does not meet the requirements of sustainable development [40]. At the same time, parent companies and external investors should moderate their pursuit of short-term interests, and the assessment of listed companies and their executives should shift towards long-term and non-economic goals. Executives of listed companies should be encouraged to take a long-term view of corporate development; this can reduce the worries caused by performance pressure and give them more confidence and courage to make suggestions for the long-term development of the company [41]. Secondly, the findings indicate that, within the framework of enterprise groups, the parent company's shareholding and executives' synergy allocation between parent and subsidiary corporations can supervise the decision-making of listed companies and make listed company executives weigh the consequences of shifting performance pressure, thereby affecting their tendency to implement environmental information disclosure. As a special governance mechanism of corporate groups, executives' synergy allocation between parent and subsidiary corporations can supervise the executives of the listed company to a certain extent, since concurrent executives can report the actual operating conditions of the listed company to the parent company. The existence of this information channel restricts the motives of listed company executives to engage in non-productive behavior, prompting them to face up to the operational status quo and to rely on their own management capabilities and productive methods to alleviate their difficulties. This further shows that the special arrangement of executives' synergy allocation between parent and subsidiary corporations helps to improve corporate governance efficiency and promote the healthy development of the corporate group, and that corporate groups should appropriately promote this governance mechanism. Research Limitations and Future Prospects First, this paper analyses only listed companies in China's manufacturing industry and classifies environmental information disclosure according to the disclosure content recorded in CSMAR. In the future, more comprehensive classification methods can be explored and a more scientific and richer index system constructed to provide more reliable data support for further research. Secondly, owing to the time lag in the release of corporate environmental information disclosure data in the CSMAR database, this paper uses data from 2014 to 2017 for analysis.
Future research can improve the timeliness of the environmental information disclosure data used. Finally, this paper uses parent-subsidiary executive collaboration as a moderating variable; limited by the availability of relevant data, only 773 companies are studied. Future research can enlarge the sample and measure executive collaboration more rigorously.
Proinsulin in the identification and risk stratification of gestational diabetes mellitus: study protocol for a prospective, longitudinal cohort study

Abstract Introduction Gestational diabetes mellitus (GDM) is a common metabolic disorder occurring in up to 10% of pregnancies in the Western world. Most women with GDM are asymptomatic; therefore, it is important to screen, diagnose and manage the condition as it is associated with an increased risk of maternal and perinatal complications. Diagnosis of GDM is made in the late second trimester or early third trimester because accurate diagnosis or risk stratification in the first trimester is still lacking. An increase in serum proinsulin may be seen earlier in pregnancy and before a change in glycaemic control can be identified. This study will aim to establish if fasting proinsulin concentrations at 16-18 weeks gestation will help to identify or risk stratify high-risk pregnant women with GDM. Methods and analysis This is a prospective, longitudinal cohort study. Two oral glucose tolerance tests will be carried out at 16-18 and 24-28 weeks gestation in 200 pregnant women with at least one risk factor for GDM (body mass index >30 kg/m2, previous macrosomic baby (>4.5 kg), previous gestational diabetes, first degree relative with type 2 diabetes mellitus) recruited from antenatal clinics. Blood samples will be taken fasting and at 30 min, 1 and 2 hours following the 75 g glucose load. In addition, a fasting blood sample will be taken 6 weeks post delivery. All samples will be analysed for glucose, insulin, C peptide and proinsulin. Recruitment began in November 2017. Optimal cut-off points for proinsulin to diagnose gestational diabetes according to National Institute for Health and Care Excellence (2015) criteria will be established by the receiver operating characteristic plot, and sensitivity and specificity will be calculated to assess the diagnostic accuracy of proinsulin at 16-18 weeks gestation. Ethics and dissemination This study received ethical approval from the Wales Research Ethics Committee (Panel 6) (Ref. 17/WA/0194). Data will be presented at international conferences and published in peer-reviewed journals. Trial registration number ISRCTN16416602; Pre-results.

Strengths and limitations of this study ► This is a prospective, longitudinal cohort study recruiting at a single site. ► It is the first study to assess the use of proinsulin as a biomarker for the identification and risk stratification of gestational diabetes mellitus in early pregnancy in women with at least one risk factor for gestational diabetes mellitus. ► Proinsulin will be measured using sensitive and highly specific immunoassays, eliminating the influence of other beta-cell products observed with previous proinsulin assays. ► The recruitment of 200 pregnant women is planned. The study has been designed to be sufficiently powered to compare the early screening approach with the detection of gestational diabetes mellitus at 24-28 weeks of gestation. ► The use of proinsulin will be evaluated in a cohort of women at high risk of developing gestational diabetes mellitus and the study will be underpowered as a screening tool for all pregnant women.

Introduction Gestational diabetes mellitus (GDM) is a common metabolic disorder occurring in up to 10% of pregnancies in the Western world. 1 GDM is defined as any degree of carbohydrate intolerance with onset or first recognition during pregnancy. Most women with GDM are asymptomatic; hence, it is important to screen, diagnose and manage the condition as it is associated with an increased risk of maternal and perinatal complications such as pre-eclampsia, macrosomia, shoulder dystocia and neonatal hypoglycaemia. Diagnosis of GDM is made in the late second trimester or early third trimester because accurate diagnosis or risk stratification in the first trimester is still lacking, and in the UK women with a high risk of GDM are currently offered an oral glucose tolerance test (OGTT) at 24-28 weeks gestation. 2 The risk factors that predispose to the development of GDM include body mass index (BMI) >30 kg/m2, previous macrosomic baby (>4.5 kg), previous gestational diabetes, family history of type 2 diabetes mellitus (T2DM) (first degree relative with diabetes) and certain ethnic groups. Current diagnostic criteria are a fasting plasma glucose concentration of ≥5.6 mmol/L or a 2 hour value of ≥7.8 mmol/L. 2 Glycosylated haemoglobin (HbA1c) cannot be used to diagnose GDM as HbA1c is insufficiently sensitive to substitute for OGTT as a screening test. 3 Women who develop GDM have a high risk of future compromised glycaemic control. A review of published studies indicated a 7.43-fold increased risk of postpartum diabetes in women with GDM compared with women with healthy glycaemic control during pregnancy, 4 and the incidence of postpartum diabetes in North America and Europe has indicated a prevalence rate of between 30% and 50% up to 15 years' follow-up. 5-7
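To make the diagnostic rule quoted above concrete, the following is a minimal sketch, in Python, of how a single OGTT result could be classified against the NICE (2015) thresholds stated in the text; the function name and input format are illustrative assumptions rather than part of the protocol.

def gdm_by_nice_2015(fasting_glucose_mmol_l: float, two_hour_glucose_mmol_l: float) -> bool:
    """Return True if the OGTT meets the NICE (2015) criteria for gestational diabetes.

    Criteria as quoted in the text: fasting plasma glucose >= 5.6 mmol/L,
    or 2-hour plasma glucose >= 7.8 mmol/L after a 75 g glucose load.
    """
    return fasting_glucose_mmol_l >= 5.6 or two_hour_glucose_mmol_l >= 7.8

# Illustrative use:
print(gdm_by_nice_2015(5.2, 8.1))  # True: the 2-hour value meets the threshold
print(gdm_by_nice_2015(5.0, 7.0))  # False: neither threshold is met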
Normal pregnancy is accompanied by a progressive increase in insulin resistance that begins midway through pregnancy and progresses through the second and third trimesters with resultant increase in insulin secretion to compensate for the acquired resistance. The levels of insulin resistance are not too dissimilar to that seen in individuals with T2DM. 8 GDM then develops when the insulin supply is no longer adequate to maintain normal blood glucose regulation. Proinsulin is a precursor molecule for insulin and is synthesised by the pancreatic beta cells. Proinsulin is an 86 amino acid peptide, incorporating the A and B chains of insulin in addition to C peptide between amino acid residues 31 and 65. Under normal circumstances, virtually all proinsulin is cleaved at residues 32-33 and 65-66 to produce C peptide and insulin, although a small amount of intact proinsulin may also be released into the circulation along with des 31-32 split proinsulin and 32-33 split proinsulin. In the presence of insulin resistance, pancreatic beta-cell function is affected with disproportionately more proinsulin (both intact and split) being secreted compared with insulin as seen in subjects with T2DM. 9 Previous studies investigating serum proinsulin measurements in pregnancy have studied pregnant women irrespective of individual risk, which may account for varied findings. 10-12 However, studies comparing the proinsulin concentrations of healthy pregnant women and those who have gestational diabetes have found that fasting proinsulin concentrations are significantly elevated in GDM compared with control subjects. 13 14 These studies however did not assess the use of proinsulin as a biomarker for GDM; rather they compared women already diagnosed with GDM, with proinsulin measured only at 24-28 weeks. Therefore it is possible that an increase in serum proinsulin may be seen earlier in pregnancy and before a change in glycaemic control can be identified. Specifically targeting pregnant women that are at high risk may make serum proinsulin measurements more sensitive and specific to identify those that will develop GDM at an earlier stage in their pregnancy and subsequently suitable for earlier intervention. Aims and objectives This study will aim to establish if fasting proinsulin concentrations at 16-18 weeks gestation will help to identify or risk stratify high-risk pregnant women with GDM diagnosed according to National Institute for Health and Care Excellence (2015) criteria. 2 It will also seek to establish if 30 min and/or 1 and 2 hour post oral glucose load proinsulin measurements at 16-18 weeks gestation can discriminate women with gestational diabetes and predict which women will need insulin to control hyperglycaemia. The relationship of the various risk factors to plasma proinsulin levels will also be evaluated. The primary objective of the study is to test the hypothesis that fasting intact proinsulin measurements at 16-18 weeks gestation will discriminate or risk stratify gestational diabetes (diagnosed from an OGTT at 24-28 weeks) from women with normal glucose tolerance. The secondary objectives are: ► To test the hypothesis that 30 min, 1 and 2 hour post 75 g oral glucose load proinsulin measurements at 16-18 weeks gestation can predict those women subsequently diagnosed with gestational diabetes at 24-28 weeks.
► To test the hypothesis that fasting, 30 min and/or 1 and 2 hour post oral glucose load proinsulin measurements at 16-18 weeks gestation can predict those women with gestational diabetes that will need insulin during pregnancy. ► To study the relationship of various risk factors to plasma proinsulin concentrations and gestational diabetes in the second and third trimesters of pregnancy. Methods and analysis Study design This is a prospective, longitudinal cohort study (table 1). Study recruitment started on 14 November 2017 and the study is expected to last until December 2019. Two OGTTs will be carried out at 16-18 and 24-28 weeks gestation in 200 pregnant women with at least one risk factor for GDM (BMI >30 kg/m2, previous macrosomic baby (>4.5 kg), previous gestational diabetes, first degree relative with T2DM) recruited from antenatal clinics within Abertawe Bro Morgannwg University Health Board, Wales. Women of ethnic origin considered to be high risk will need to have another mentioned risk factor to be eligible for inclusion in the study. Blood samples will be taken fasting and at 30 min, 1 and 2 hours following the 75 g glucose load. In addition, a fasting blood sample will be taken at 6 weeks post delivery. All samples will be analysed for glucose, insulin, C peptide and proinsulin. Both intact proinsulin and total proinsulin (the sum of intact proinsulin, 32-33 split proinsulin and des 31-32 split proinsulin) will be assayed. Setting and site selection Recruitment of pregnant women will be via a study poster placed at antenatal clinics within Abertawe Bro Morgannwg University Health Board, Wales. Study procedures will be carried out at a single site (Joint Clinical Research Facility, Swansea University). Informed consent Informed consent for each subject will be obtained prior to initiating any trial procedures. Potential participants eligible to take part in the study will receive an invitation letter from their hospital consultant or midwife, along with a participant information leaflet, and be given an oral explanation about the study from a research professional (usually a research nurse). Written informed consent is given by the participant by signing and dating a consent form, which will be countersigned and dated by either a study nurse or principal investigator to confirm that the participant has had the opportunity to ask questions and fully understands the nature of the study. Thereafter, a copy of the consent form will be given to each of the participants. Research professionals can facilitate the consent process for the study if authorised to do so on the site delegation log following appropriate training including good clinical practice (GCP). The consent process makes clear that the participant can withdraw from the trial whenever they wish without giving a reason and without affecting their future care in any way. The reasons for withdrawal will be documented, if known, and site staff will be encouraged to trace participants lost to follow-up and document the reasons for their loss whenever possible. Each participant will receive an identification number to ensure confidentiality and any samples will be identified using only the identification number. Data will be recorded in a case record form. Patient population Pregnant women at 16-18 weeks gestation with at least one of the following risk factors for GDM will be studied. Those recruited will meet the following inclusion/exclusion criteria: Inclusion criteria ► BMI >30 kg/m2.
► Previous macrosomic baby (>4.5 kg). ► Previous GDM. ► Family history of T2DM (first degree relative with diabetes). Exclusion criteria ► Subjects unable or unwilling to sign informed consent. ► Known previous diabetes mellitus or on treatment with metformin. ► Known chronic infection such as hepatitis or HIV, or chronic kidney, liver or heart disease. ► Previous bariatric surgery. Women who are diagnosed with gestational diabetes at the first visit (16-18 weeks) will be withdrawn from the study and referred back to routine antenatal care. Study visits Participants will be seen on three different occasions. The first two visits will be at 16-18 weeks and 24-28 weeks gestation. Participants will be asked to attend having fasted for 10 hours. On arrival at the clinical unit, a fasting blood sample will be taken. Patients will then be given a drink containing 75 g glucose. At 30 min, 1 and 2 hours following the drink, further blood samples will be taken. The 30 min sample has been included to allow capture of the peak plasma insulin response and for robust estimation of insulin sensitivity. The third visit will be 6 weeks post delivery, where participants will again be asked to attend having fasted for 10 hours. On this occasion, a fasting blood sample only will be taken. Additional data including birth weight of the baby, Apgar score at birth and medications including insulin doses just prior to birth will be collected. Laboratory measurements Laboratory measurements for glucose, insulin, C peptide and proinsulin (total and intact) will be carried out in the Good Clinical and Laboratory Practice Accredited Diabetes Research Unit Cymru Laboratory based at Swansea University. Glucose samples will be taken into fluoride oxalate tubes and will be measured using a glucose oxidase assay (YSI 2300 Stat Plus, Fleet, Hampshire, UK). Insulin, C peptide, total and intact proinsulin samples will be taken into EDTA tubes and measured using specific immunoassays with chemiluminescent labels (Invitron, Monmouth, UK). Safety evaluations and data monitoring The data monitoring committee (DMC) will monitor the overall conduct of the trial, safeguarding the interests of the trial participants and assessing the safety and efficacy of the intervention. Patients identified as glucose intolerant at 16 weeks will be considered to have had pre-existing glucose intolerance and will be referred back to their antenatal team and excluded from further participation in the study. All serious adverse events (AEs) that occur during the study will be recorded and reported in accordance with local requirements and will be reported to the DMC. All AEs will be recorded on a case report form and reviewed as part of central data monitoring. Statistical analysis plan To address the primary objective and the first secondary objective, glucose measurements following an OGTT at 24-28 weeks will be used as the reference standard to classify GDM status. Optimal cut-off points for proinsulin will be established by the receiver operating characteristic plot, which plots the proportion of true positives against the proportion of false positives. 95% CIs for the area under the curve, sensitivity and specificity will be calculated to assess the diagnostic accuracy of total and intact proinsulin at 16-18 weeks gestation. The same approach will be used to address the second secondary objective, but with the use of insulin during pregnancy as the reference standard.
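A minimal sketch of the planned cut-off analysis is given below, using Python with scikit-learn. The arrays proinsulin_16_18 and gdm_24_28, the units, and the use of the Youden index to choose the optimal cut-off are illustrative assumptions; the statistical analysis plan above does not prescribe the exact criterion or software for selecting the cut-off.

import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Hypothetical data: fasting proinsulin at 16-18 weeks and GDM status from the 24-28 week OGTT.
proinsulin_16_18 = np.array([3.1, 5.4, 2.2, 7.9, 4.8, 6.5, 2.9, 8.3])  # pmol/L, illustrative
gdm_24_28 = np.array([0, 1, 0, 1, 0, 1, 0, 1])                         # reference standard

fpr, tpr, thresholds = roc_curve(gdm_24_28, proinsulin_16_18)
auc = roc_auc_score(gdm_24_28, proinsulin_16_18)

# One common way to pick an "optimal" cut-off is the Youden index (sensitivity + specificity - 1).
youden = tpr - fpr
best = np.argmax(youden)
cutoff = thresholds[best]
sensitivity = tpr[best]
specificity = 1 - fpr[best]

print(f"AUC = {auc:.2f}, cut-off = {cutoff:.1f} pmol/L, "
      f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")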
Logistic regression will be used to analyse the relationship of proinsulin concentrations at 16-18 weeks gestation and other individual risk factors to the development of GDM. All analysis and data preparation will be performed by a statistician using SPSS V.22, which is validated statistical software for clinical trial studies. Values will be checked for normality, applying suitable transformations as necessary. All statistical hypothesis tests will be performed at a 5% significance level. All available data from withdrawn subjects will be included in the analysis up to the time of withdrawal where possible. Sample size Sample size was estimated with Buderer's formula, 15 which uses prevalence, the level of clinically acceptable precision and a hypothesised level of sensitivity and specificity as the parameters for estimating sample size (the general form of this calculation is sketched in a note after the Discussion). There were no direct estimates of GDM prevalence in the high-risk population. Prevalence of GDM in the Western world was estimated to be ~10% 1 and the relative risk of developing GDM in women with one of the risk factors (BMI >30 kg/m2) was 2.74. 16 There are also reports of women with a family history of diabetes having a >6-fold risk of developing gestational diabetes. 17 Therefore, we have estimated the prevalence of GDM in our target population to be ~30%. We took the conservative approach of basing our sample size estimation on hypothesised sensitivity rather than specificity. Since the estimated prevalence was lower than 50%, this approach would give a higher estimate of the number to be included in the study. As a meaningful screening test, a sensitivity of 90% for the circulating concentration of total and intact proinsulin at 16-18 weeks gestation for GDM is expected, as established by glucose measurements following an OGTT at 24-28 weeks. The minimal clinically acceptable precision for the estimate of sensitivity is 6%. We also expect loss to follow-up to be less than 10%. Adjusted for possible loss to follow-up, a sample size of 200 will be required to estimate with precision of ±0.06 for an expected sensitivity of 0.9 with the proinsulin test, at the significance level of 0.05, if the prevalence of GDM is not lower than 30%. Patient and public involvement In preparation for this study, the Public Reference Group of the Diabetes Research Unit Cymru was consulted and their opinions sought on the concept of the study and the study design. Ethics and dissemination Research governance The study conforms with the Research Governance Frameworks for England and Wales, and the principles of GCP outlined by the International Conference on Harmonisation (http://www.ich.org/). The participating National Health Service (NHS) Health Board has given NHS permission and will be responsible for auditing the study. Publication In accordance with good practice, we have registered the GDM study in a public registry at International Standard Randomised Control Trial Number (ISRCTN 16416602). This is a diagnostic accuracy study and findings will be reported according to the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guideline. 18 We shall present study findings at national and international conferences and publish as widely as possible in open access, peer-reviewed journals.
Any published results will be made available to study participants from their study nurse on request (this is made clear in the patient information sheet) and any results will also be made available on the Diabetes Research Unit Cymru website (www.diabeteswales.org.uk) with a direct link to any publication and with a summary in lay language. Discussion A recent systematic review and meta-analysis 19 have shown that there is no good evidence for any of the diagnostic criteria for early onset GDM and have suggested the use of a fasting glucose of 6.1-6.9 mmol/L in the first trimester. Others 20 have tried to develop a prediction model for obese women at high risk of GDM to facilitate targeted interventions. Current practice is that pregnant women with at least one risk factor for GDM have an OGTT at 24-28 weeks and a fasting glucose measurement 6 weeks post delivery. It is not unusual for some patients having an OGTT at 28 weeks to have glucose levels above 10 mmol/L, indicating that they have been hyperglycaemic for some time, and therefore it would be useful if we could reasonably identify at an earlier stage those women who will develop GDM and also those that would require insulin to control hyperglycaemia. This would help manage scarce resources, as those that are unlikely to require insulin could be followed up by experienced allied healthcare professionals like specialist midwives, while those that require insulin could be followed up by diabetologists and/or diabetes specialist nurses earlier in their pregnancy. It will also be useful to revisit the pathophysiology of GDM with respect to the onset of insulin resistance and corresponding insulin response at the various stages of pregnancy. Moreover, proinsulin assays have become more specific; thus, proinsulin can now be reliably measured by well-characterised immunoassays. This paper summarises the current approved protocol.
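As a supplementary note to the sample size section above, the following is a minimal sketch of the sensitivity-based form of Buderer's formula as it is usually stated. The function and parameter names are illustrative, and the protocol's final recruitment target of 200 also reflects the authors' own rounding and adjustment for loss to follow-up, which this sketch does not attempt to reproduce.

from scipy.stats import norm

def buderer_n_for_sensitivity(sensitivity: float, precision: float,
                              prevalence: float, alpha: float = 0.05) -> float:
    """Total sample size needed to estimate sensitivity with a given precision.

    n_cases = z^2 * Se * (1 - Se) / d^2   (true-positive plus false-negative subjects)
    N       = n_cases / prevalence        (total subjects, before loss-to-follow-up adjustment)
    """
    z = norm.ppf(1 - alpha / 2)
    n_cases = z ** 2 * sensitivity * (1 - sensitivity) / precision ** 2
    return n_cases / prevalence

# Illustrative call with the quantities quoted in the text
# (hypothesised sensitivity 0.90, precision +/-0.06, assumed prevalence 0.30):
# buderer_n_for_sensitivity(0.90, 0.06, 0.30)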
An Update on the Molecular Basis of Phosphoantigen Recognition by Vγ9Vδ2 T Cells About 1–5% of human blood T cells are Vγ9Vδ2 T cells. Their hallmark is the expression of T cell antigen receptors (TCR) whose γ-chains contain a rearrangement of Vγ9 with JP (TRGV9JP or Vγ2Jγ1.2) and are paired with Vδ2 (TRDV2)-containing δ-chains. These TCRs respond to phosphoantigens (PAg) such as (E)-4-hydroxy-3-methyl-but-2-enyl pyrophosphate (HMBPP), which is found in many pathogens, and isopentenyl pyrophosphate (IPP), which accumulates in certain tumors or cells treated with aminobisphosphonates such as zoledronate. Until recently, these cells were believed to be restricted to primates, while no such cells are found in rodents. The identification of three genes pivotal for PAg recognition encoding for Vγ9, Vδ2, and butyrophilin (BTN) 3 in various non-primate species identified candidate species possessing PAg-reactive Vγ9Vδ2 T cells. Here, we review the current knowledge of the molecular basis of PAg recognition. This not only includes human Vγ9Vδ2 T cells and the recent discovery of BTN2A1 as Vγ9-binding protein mandatory for the PAg response but also insights gained from the identification of functional PAg-reactive Vγ9Vδ2 T cells and BTN3 in the alpaca and phylogenetic comparisons. Finally, we discuss models of the molecular basis of PAg recognition and implications for the development of transgenic mouse models for PAg-reactive Vγ9Vδ2 T cells. Introduction Vγ9Vδ2 T cells sense phosphorylated isoprenoid metabolites (Phosphoantigens: PAgs) [1], which are accumulated in the host cell either as a consequence of aberrant or drug-manipulated host cell metabolism or as a result of microbial infection. The subject of this review is recent advances in identifying the molecular mechanism underlying this TCR-mediated sensing and the pivotal role of butyrophilins (BTNs) in this process. A better understanding of these processes can be expected to provide insights into the physiological role of these cells and help to harness them for therapeutic applications as discussed elsewhere. In this review, we outline the current knowledge on the molecular mechanisms underlying phosphoantigen-mediated Vγ9Vδ2 T cell activation, including insights gained by species comparison. In addition, we also aimed to discuss controversial issues and open questions. αβ T Cells and γδ T Cells Jawed vertebrates (Gnathostomata) [2] possess three lineages of lymphocytes, which are defined by their antigen receptors. These antigen receptors are pivotal for the development of antigen receptor-bearing lymphocytes and the adaptive immune response exerted by them. Antigens are sensed by heterodimeric Ig domain-containing antigen receptors whose diversity is generated by a recombination-activating gene (RAG) 1 and 2-dependent somatic recombination of variable (V), diversity (D), and joining (J) gene segments encoding the antigen-binding variable domain. Different cell lineages can be defined by antigen receptor usage, specifically T cells of the αβ or γδ T cell lineage that express T cell receptors (TCRs) and B cells that produce immunoglobulins (Ig), which serve as B cell receptors (BCRs). Secreted Ig can neutralize antigens, trigger the effector functions of other cells, and activate the humoral components (complement) of the innate immune system. The antigen-binding variable domains (V-domain) of TCRs are encoded by V, D, and J genes of the TCRβ (TRB) and TCRδ (TRD) loci, and by the V and J genes of the TCRα (TRA) and TCRγ (TRG) loci [3]. 
Each V-domain contains three hypervariable loops (HV) also designated as complementarity determining regions (CDR). The CDR1 and CDR2 are encoded by the V genes of the antigen receptor loci. Hence, their diversity is limited by the number of V gene segments, while the CDR3 is located at the junction of recombined V, (D) and J genes and is therefore highly variable. The RAG-dependent recombination mechanism generates CDR3 variability not only by combining V(D)J genes but also by junctional diversity ensured by excision or insertion of nucleotides at the recombination sites and insertions of N-nucleotides by the terminal deoxynucleotidyl transferase (TdT). Human TCR δ-chains contain up to three D gene-encoded segments, which increases not only the variability of CDR3δ lengths but also massively impacts sequence diversity, since up to four sites of recombination can be incorporated in a CDR3 of a TCR δ-chain [2,4]. For very few species, somatic hypermutations have been described for TCR loci such as the TRA, TRG, and TRD of nurse sharks [5,6] and TRG and TRD of dromedary [7,8]. Many Gnathostomata, but not mice and humans, possess additional types of RAG-recombined Ig domain-containing antigen receptors that can be considered functional analogues to TCRs or BCRs, respectively [4,9]. Conventional vs. Unconventional T Cells T cells expressing αβ TCRs, which bind to complexes of polymorphic major histocompatibility (MHC) molecules and peptide antigens (MHC-restricted T cells), are carriers of adaptive cellular immunity. Likewise, T cells with diversified TCR repertoires recognizing antigens in the context of MHC class I-like molecules such as certain types of CD1-or MR1-restricted T cells or even γδ T cells may also exert features of adaptive T cells. The final composition of TCR specificities (repertoire) of MHC-restricted T cells is shaped by intra-thymic positive and negative selection guided by the anatomically controlled presentation of peptide-MHC complexes and the avidity of emerging TCRs to those complexes [10]. A highly conserved but not absolute feature in Gnathostomata is the division of mature T cells into MHC class I-restricted CD8 T cells that exert killer functions and MHC class II-restricted CD4 T cells, which promote and modulate immune functions. Despite a likely co-evolution of the peptide-presenting MHC molecules with TRA and TRB genes, they cannot be correlated with MHC class restriction or the functional properties of MHC-restricted cells [11]. T cells that are not MHC-restricted are commonly described as non-conventional or "unconventional" T cells and can stem from the αβ or γδ T cell lineage. They are also often referred to as "innate T cells" since many of them share features with natural killer (NK) lymphocytes with respect to their susceptibility to antigen-independent signals, especially cytokines, and their expression of NK cell receptors. They differ from conventional T cells in their intra-thymic development and in contrast to MHC-restricted T cells, their TCRs show restrictions in V gene usage and unique, characteristic TCR gene rearrangements. Such unique TCR combinations can be used to characterize unconventional T cell populations since they determine, or at least correlate with, a cell type-specific mode of development, functionality, and homing. The best understood populations of non-conventional αβ T cells are CD1d-restricted invariant natural killer T cells (iNKT) cells and MR1-restricted mucosal-associated T cells (MAIT cells). 
Their α-chains largely carry "invariant" VαJα (TRAV-TRAJ) rearrangements and pair with β-chains of limited TRBV gene usage. They are specific for certain metabolites bound to the non-polymorphic MHC class I-like molecules CD1d and MR1, respectively [12][13][14]. With regard to γδ T cells, butyrophilins (BTN) [15] or butyrophilin-like molecules, such as SKINT1 [16] in the case of dendritic epidermal cells (DETC), steer the development and activation of certain γδ T cell populations. For some of them, binding in a superantigen-like mode to TRGV-encoded parts of the TCR has been demonstrated [17][18][19][20][21]. Vγ9Vδ2 T Cells: TCR and Phosphoantigen Reactivity The vast majority of human blood γδ T cells are Vγ9Vδ2 T cells (1-5% of blood T cells in healthy individuals), which respond to so-called "phosphoantigens" (PAgs). Their TCRs share a characteristic Vγ9JP (TRGV9JP) rearrangement, alternatively designated as Vγ2Jγ1.2 and Vδ2-containing TCR δ-chains. Unless explicitly mentioned, PAg-reactive T cells and Vγ9Vδ2 T cells will be used synonymously in this article. The nomenclature of TCR genes follows that of IMGT [3] and sequence homology to human genes. Freshly isolated Vγ9Vδ2 T cells share functional features with CD8 T cells and NK cells [22] but under some pathological conditions, TH17-like cells have been observed [23]. Furthermore, at least in vitro, they exhibit a remarkable degree of plasticity and multifunctionality such as differentiation into professional antigen-presenting or phagocytosing cells and promoting and regulating immune responses by crosstalk with B cells, dendritic cells, NK cells, and monocytes (reviewed in [24] and schematically shown in Figure 1). Furthermore, the antigen-dependent activation of Vγ9Vδ2 T cells is strongly modulated by additional receptors including inhibitory and activating NK cell receptors [25,26] In the case of NKG2D, even a direct triggering of some effector functions is possible [27]. PAgs are products of isoprenoid synthesis that specifically activate Vγ9Vδ2 T cells. The building blocks of isoprenoid synthesis are isopentenyl pyrophosphate (IPP) and its isomer dimethylallyl pyrophosphate, which are both weak PAgs. The naturally occurring PAg (E)-4-hydroxy-3-methyl-but-2-enyl pyrophosphate (HMBPP) stimulates Vγ9Vδ2 T cells about 10,000-fold more efficiently than IPP [28][29][30]. It differs from IPP only in a single hydroxy group and is the immediate precursor of IPP in the non-mevalonate pathway, which is also known as the 2-C-methyl-D-erythritol 4-phosphate/1-deoxy-D-xylulose 5-phosphate or Rohmer pathway. The non-mevalonate pathway is restricted to eubacteria, cyanobacteria, plants, and apicomplexan protozoa. HMBPP is the driving force of a massive Vγ9Vδ2 T cell expansion in infections with HMBPP-producing parasites or bacteria, which can lead to an increase of Vγ9Vδ2 T cells from 1-5% of blood T cells to more than 50%. With the exception of apicomplexan parasites such as Plasmodium spp. or Toxoplasma gondii, all animals synthesize IPP exclusively via the mevalonate pathway [29]. Reduction of the activity of the IPP-metabolizing enzyme farnesyl-diphosphate-synthase (FPPS), e.g., by inhibitors such as aminobisphosphonates (ABP) and zoledronate (Zol) [31,32] or shRNA-mediated knock-down of FPPS expression [33], reduces the metabolization of IPP and leads to a concomitant increase of IPP levels. 
Human or primate antigen-presenting cells (APC) or target cells with increased IPP levels are sensed by the Vγ9Vδ2 TCR and Vγ9Vδ2 T cells are activated. The activation by triphosphate nucleotide derivatives, naturally occurring as a consequence of increased IPP and HMBPP levels, has also been reported [34]. Some tumors such as the human B cell lymphoma Daudi spontaneously activate Vγ9Vδ2 T cells [35]. This activation depends on the intracellular accumulation of IPP and can be abolished by statins, which inhibit the 3-hydroxy-3-methylglutaryl-CoA reductase (HMG-CoA) reductase and consequently also IPP synthesis [32]. However, it is unclear how PAg action relates to the reported binding of the Vγ9Vδ2 TCR to some cellular proteins, as in the case of TCR G115 binding to ectopically expressed F1-ATPase [36] or the binding of other Vγ9Vδ2 TCRs to stress-associated molecules such as ULBP4 [37], MutS homologue 2 [38] or hsp60/GroEl [39]. In summary, Vγ9Vδ2 T cells can sense the metabolic changes of transformed [32], infected [40], [29] or drug-treated host cells via their TCR [31,32]. This reactivity can be harnessed clinically as the remission of certain tumor entities after Vγ9Vδ2 T cell activation has been observed (reviewed in [41,42]). Furthermore, Vγ9Vδ2 T cells can support anti-bacterial immunity in preclinical mouse and primate models, including the possibility of PAg-based vaccines against tuberculosis as shown for non-human primates [43,44]. Butyrophilin 3 (BTN3) as PAg Sensor: Identification and Functional Analysis 3.1. Murine Reporter Cells Identify the Crucial Role of Human Chromosome 6 (Chr:6) in Vγ9Vδ2 T Cell Activation by PAg PAgs activate Vγ9Vδ2 T cells very rapidly as documented by the acidification of culture medium already nine seconds after the addition of the synthetic PAg BrHPP to a cell culture with a Vγ9Vδ2 T cell clone [45]. This finding has originally been taken as a hint for a direct binding of the PAg to the TCR, but attempts to demonstrate such an interaction have not been successful, despite the identification of positively charged hypothetical binding sites for PAg in the G115 Vγ9Vδ2 TCR crystal that resemble binding sites originally identified in phospholipids antibodies [46]. The idea of PAg directly binding to the TCR was in conflict with the increasing experimental evidence for a presentation of PAgs to the Vγ9Vδ2 T cell by cells of human or primate origin [47], leading to a concept of non-polymorphic species-specific molecules acting as direct PAg-presenting molecules or being otherwise mandatory for PAg stimulation, e.g., by providing a γδ T cell-specific co-stimulus [48][49][50]. Essentially, all cell types, including human γδ T cells and the widely used Jurkat T cell lymphoma, have the capacity of PAg presentation [47]. Furthermore, the activation of primary γδ T cells is massively modulated by activating and inhibiting cell surface receptors, leading to an interference that should be avoided [25][26][27]. We hypothesized that rodents would lack PAg-presenting molecules and that rodent reporter cell lines would not be reactive to xenogeneic NK cell receptor ligands. Therefore, we generated reporter cells of murine origin (mouse 58C or mouse-rat T cell hybridoma 53/4) transduced with a Vγ9Vδ2 TCR of proven PAg reactivity to assess the presence of a putative PAg-presenting or Vγ9Vδ2 T cell-(co)stimulating molecule for cells of various tissue and species origin ( Figure 2). 
Surprisingly, although interleukin (IL)-2 production by the TCR transductants could be triggered by anti-CD3 or anti-TCR monoclonal antibodies (mAbs), initial attempts to generate an IL-2 response to PAg in co-cultures with cells of human or primate origin failed [51]. However, this lack of response could be rescued by providing a co-stimulatory signal through the overexpression of a rat/mouse CD28 construct in the reporter cell and the use of CD80/CD86-expressing stimulatory cells; in some cases, this was achieved by CD80 transduction [51,52]. The generation of these reporter cell lines allowed to interrogate multiple cell types for their capacity to present PAgs. To reduce the chance of false-negative results, the same type of reporter cells was also tested for an αβ TCR response to a peptide antigen presented by APCs transduced with rat MHC class II molecules [53,54]. All types of cells of human or primate origin induced a PAg-dependent IL-2 production, but no response was observed with cells from mice, rats, hamster, dog, and cow (Kreiss, Li, Herrmann, Karunakaran, unpublished data). This drawback could be overcome by using murine reporter T cells, which lack an endogenous PAg response. Murine T cell antigen receptors (TCR)-negative 53/4 RT1B-restricted hybridoma T cells were transduced with human TRGV9JPC1 (Vγ9JPC1) and TRDV2J1C (Vδ2Jδ1C) constructs encoding Vγ9 and Vδ2 TCR chains, respectively. For co-stimulation, CD28 was overexpressed in hybridoma T cells, and endogenous CD3 enabled TCR complex formation. Thus, generated 53/4 Vγ9Vδ2 TCR hybridoma cells could be activated in the presence of PAg when co-cultured with CD80-transduced antigen-presenting cells (APCs) of human origin or other species, provided they are expressing the molecules necessary for PAg presentation. Mouse interleukin (IL)-2 produced by the T cell hybridoma in overnight co-cultures was measured as read-out for reporter T cell activation. The Human Butyrophilin 3 (BTN3A) Family Game-changing for understanding the molecular basis of Vγ9Vδ2 T cell activation by PAgs was the identification of human BTN3A molecules as key compounds in PAg-induced Vγ9Vδ2 T cell activation [53]. In humans, the BTN3A gene family consists of BTN3A1, BTN3A2, and BTN3A3, which are part of a BTN gene cluster at the telomeric end of the MHC complex on Chr:6 [55]. Antibodies raised against the BTN3A1 (CD277) extracellular domain (ED) are available but cross-react with other members of the BTN3A family [56,57]. BTNs are named after BTN1A, which is a membrane protein that is involved in fat droplet formation in milk and displays an immunomodulatory potential similar to many other BTNs and BTN-like molecules (BTNL) [58]. BTN3A1, similar to most BTNs, carries an extracellular domain with strong structural similarity to B7 receptor family molecules consisting of an N-terminal IgV-like domain (V domain) followed by an IgC-like domain (C domain), a transmembrane domain (TM), and an intracellular domain (ID) [59]. The ID contains a juxtamembrane domain (JM) followed by a B30.2 or PRY/SPRY domain and a tail consisting of variable numbers of amino acids (aa) with unknown function (Figure 3). B30.2 [55,58] is found in many molecules involved in innate immunity such as tripartite motif (TRIM)-containing proteins [60]. 
Some BTN family members such as myelin-oligodendrocyte glycoprotein (MOG), the BG molecules of chicken or the selection and upkeep of intraepithelilial T-lymphocyte protein (SKINT) and SKINT-like molecules share B7-like extracellular domains (or parts of it) but vary considerably in their transmembrane/intracellular parts [59]. The EDs of BTN3A1, A2 and A3, are highly homologous. Their V domains differ by a single, conservative substitution (R37K), and their C domains are also quite similar, while the TMs and IDs of BTN3As differ considerably [59]. Interestingly, the B30.2 of BTN3A1 and BTN3A3 are more similar to each other than their TM and JM. The ID of BTN3A2 lacks the B30.2 and a part of the JM [59] (aligned in Figure 3). BTN3-specific antibodies act as co-stimulators for γδ T cells, but BTN3 has also been implicated in the negative and positive regulation of NK cell and monocyte responses [56,[61][62][63]. Identical amino acids (dots), gaps (dashes), and numbers (right side) are indicated. Amino acid numbers include the signal peptide and are written in standard letters. For amino acid numbers used in the main text, the leader peptide was left out. They are given in bold. The domains of BTN3 (protein domain) and corresponding exons (exon number) are shown according to human BTN3A1 [57,64,65]. Some features of the secondary structure of BTN3A1 are visualized (according to [57,65] and the Protein Data Base BTN3A1 B30.2 protein structure 5HM7). Predicted coiled-coil structures in the juxtamembrane domain (JM) of all BTN3s are shown (red) together with predicted anchor residues (light green) as proposed by Wang et al. [65]. Proposed PAg-binding residues in the BTN3A1 V domain (BTN3-V) are highlighted in blue [66] and residues lining the PAg binding pocket of B30.2 are shown in orange [67]. Contact regions of sc-Fv 103.2 (pink line) and 20.1 (dark green line) in the BTN3A1 V domain are indicated [57]. Blue arrows indicate the residues undergoing chemical shift perturbations (CSP) in a nuclear magnetic resonance (NMR) study of the BTN3A1-BTN2A1 interaction [68]. The residues with the most pronounced CSPs are marked with dark blue arrows, whereas lower CSPs are indicated by light blue arrows. BTN3A1 is Mandatory for Vγ9Vδ2 T Cell Activation The key finding in the identification of BTN3A1 as a mandatory component for PAg-mediated Vγ9Vδ2 T cell activation was the induction of cell proliferation by the agonistic BTN3A-specific mAb 20.1 and the inhibition of PAg-mediated activation by mAb 103.2. Both mAbs bind to different sites of the V domain of BTN3A molecules: mAb 20.1 to the C, C´, and C strands and the B-C, C´-C´´, and D-E connecting loops ( Figure 3) [57]. In their landmark paper, Harly et al. described not only the general importance of BTN3A molecules for the PAg response but also that BTN3A family members differentially support PAg-dependent Vγ9Vδ2 T cell activation [53]. The nearly ubiquitous expression of BTN3A, which includes γδ T cells, the stimulatory properties of mAb 20.1 for γδ T cells, and the B7-like nature of BTN3A-EDs, raised the question of whether BTN3 molecules expressed by γδ T cells themselves or those expressed by the "presenting" cells mediate activation. This was addressed by pulsing antigen-presenting or target cells with mAb 20.1 or with the ABP pamidronate and testing mAb 20.1-or PAg-mediated activation with our murine Vγ9Vδ2 TCR-expressing reporter cells, which lack BTN3 molecules and do not "present" PAg [53]. 
Since mAb 20.1 and 103.2 bind to all three BTN3A molecules, it was important to test the contribution of the different BTN3A molecules to PAg reactivity. To this end, primary Vγ9Vδ2 T cell lines [56,57] were stimulated with pamidronate-pulsed BTN3A isoform-specific knock-down cells and BTN3A knock-down cell lines transfected with single BTN3A genes. These experiments [53] revealed an essential role of BTN3A1 in APB-induced stimulation but not of BTN3A2 and BTN3A3 [53,67,69,70], whose important contribution to the magnitude of PAg responses was demonstrated later by others [20,71,72]. The structural characterization of soluble and crystalized recombinant BTN3A-EDs showed only small differences among the three BTN3As but identified two types of BTN3A1 homodimers: one dimer with a symmetric parallel V-shaped structure with interacting C domains and another asymmetric head-to-tail conformation with contact of the V and C domain ( Figure 4) [57]. The agonistic mAb prevents the formation of the head-to-tail conformation, arguing for the V-shaped conformation as the biologically active form and the head-to-tail conformation as the inactive one [57]. Another finding was an ABP-induced immobilization of BTN3A1, but not of the non-stimulatory BTN3A2 or BTN3A3, which led to the notion that this type of immobilization might be crucial for BTN3A-mediated stimulation [53]. However, this view was revised after the demonstration that the introduction of a disulfide bridge between the C domains of two BTN3A1s, which prevents the head-to-tail conformation, diminished ABP-and mAb 20.1-induced stimulation but not BTN3A1 surface immobilization [73]. In accordance with this, Yang et al. proposed the head-to-tail conformation of BTN3A1-ED as the biologically active one [74]. Nevertheless, the conformation of the activating state of BTN3A1 is still a matter of debate as illustrated by an editorial on the same study depicting a model with an "activated" BTN3A1 in a V-shaped conformation [75]. To our knowledge, the possibility that a reversible switch between the two conformations or co-existence of both is required for achieving the activating state of BTN3A or BTN3A-associated complexes has not been discussed yet. Interacting residues are highlighted in red (V domain) and blue (C domain). Contact residues of either domain [57] are according to the PDB protein structures 4F9L and 4F80 and were modified with USCF Chimera1.1. BTN3A1 Acts as a PAg Sensor The importance of the intracellular domain of BTN3A1 for PAg sensing was already demonstrated by Harly et al. who showed that in contrast to BTN3A3, a BTN3A3 containing the BTN3A1-ID induced ABP-mediated Vγ9Vδ2 T cell stimulation [53]. PAg binding to the B30.2 domain has been determined by a number of methods: Isothermal titration calorimetry (ITC) [20,64,67,71,76], nuclear magnetic resonance (NMR) spectroscopy [64,76,77], small-angle X-ray scattering (SAXS) [64], and fluorescence polarization [78]. Affinities for HMBPP were in the micromolar range and in the millimolar range for IPP. Crystallography originally identified a shallow binding groove in the apo-form [71] or the formaldehyde-fixed BTN3A1-B30.2 domain-PAg complex [67], which could accomplish PAg binding and later in complexes of B30.2 domains with various PAgs [74]. NMR studies revealed chemical shift perturbations (CSP) within and close to the PAg binding site [76], but also in more distant areas of B30.2 and in the JM domain [64,76,77]. 
As in the case of the ED, two types of B30.2 dimers have been observed: a symmetric one in which the PAg binding sites are positioned distant from each other, which was designated as dimer II by Gu et al. [73] or pattern A by Yang et al. [74], and another one with one binding site pointing to the interface of both domains and the other binding site to the outside (dimer I [73] or pattern B [74], respectively) (Figure 5). A key residue is histidine H351, which changes its conformation as a consequence of PAg binding [74]. Its importance is demonstrated by the fact that after an R351H substitution, BTN3A3 becomes an efficient PAg sensor [67]. Interestingly, the 1-OH group of the exogenous PAg HMBPP, which is missing in the much weaker IPP, forms additional H-bonds with H351 and tyrosine Y352 and might explain the differential binding and biological activity of both PAgs [73,74]. Furthermore, HMBPP binding favors the interaction of H351, W391, and W350 with the juxtamembrane region in pattern B [73,74], and mutagenesis of W350, which is part of the dimer I/pattern B interface, completely abrogated the PAg response of Vγ9Vδ2 T cells but did not affect PAg binding to B30.2 [74]. A final argument in support of the importance of the dimer I/pattern B conformation for T cell activation stems from the analysis of HMBPP derivatives in which the methyl group of HMBPP was replaced by other residues, including a bulky 4-methylbenzyl (HMBPP-08). This molecule binds more efficiently to the B30.2 domain than HMBPP but is much less active (EC50 198 nM versus 0.016 nM). This lower biological activity is unlikely to reflect impaired transport across the plasma membrane, since the hydrophobic nature of HMBPP-08 would favor passing the membrane, as previously shown for other highly active HMBPP derivatives [74]. Furthermore, the permeabilization of cells with monensin did not change activation. Therefore, the authors argue that the loss of activation probably results from the bulky nature of the 4-methylbenzyl, which inhibits the formation of a regular pattern B dimer and leads to a new B30.2 conformation identified in HMBPP-08-B30.2 co-crystals [74]. It is unclear whether and how changes in B30.2 and the JM translate to the ED and to the interaction between BTN3 and the TCR. Yang et al. [74] addressed this by atomic force microscopy and measured mechanical forces of the cell-cell interaction of ABP-pulsed or unpulsed pancreatic tumor cells and Vγ9Vδ2 T cells. This force increased five minutes after contact with the ABP-pulsed presenting cell and was interpreted as evidence for BTN3A1-TCR binding [74]. However, this interpretation should be met with caution, as an increased mechanical force could also reflect a general increase of cell-cell adhesion, e.g., as a consequence of TCR-mediated inside-out integrin activation [75].

Figure 5 (legend, partial): The HMBPP-bound B30.2 domain forms asymmetric dimers where HMBPP and its interacting residues of one monomer are positioned at the interface of another monomer whose HMBPP interface is oriented to the opposite side. (c) Islet of pattern B representing HMBPP-interacting residues of the B30.2 monomer (left) and a changed orientation to obtain the most planar view (right). The JM is shown as an orange helix, positively charged residues surrounding the HMBPP pocket are shown in red, and HMBPP is shown in blue. For this figure, the PDB protein structure 5ZXK [74] was modified with UCSF Chimera 1.1.
The Importance of the Juxtamembrane Domain
The BTN3A1-JM is not only important for PAg sensing but also crucial for the formation of complexes with other molecules such as periplakin and the small GTPase RhoB [71,79]. Periplakin is a large protein (1756 aa) whose cytosolic domain binds to membrane proteins with its N-terminal plakin domain and links them with its C-terminal domain to intermediate filaments. It was co-immunoprecipitated from lysates of BTN3A1 and of BTN1A1 transductants, whose BTNs share a 7 aa peptide motif encoded by exon 5 [71]. Deletion of this motif or mutation of a local dileucine abolished ABP-mediated stimulation [71]. Nevertheless, the role of periplakin in the activation is unclear, since knock-down experiments for periplakin led to no conclusive results [71,80]. Furthermore, the periplakin binding motif is missing in alpaca BTN3, which efficiently mediates PAg stimulation [20]. The other BTN3A1-binding molecule proposed to participate in PAg stimulation and to bind to the JM is the small GTPase RhoB. RhoB became a candidate molecule for controlling ABP responses after analysis of interferon (IFN)-γ production in Vγ9Vδ2 T cells in cultures with pamidronate and Epstein-Barr virus (EBV)-transformed B cell lines obtained from several family pedigrees [79], where 33 cell lines induced activation and seven did not. One of the three candidate genes identified by its vicinity to single nucleotide polymorphic markers was the small GTPase RhoB. ABPs are known to increase the levels of the GTP-bound form of Rho-GTPases. Modulators of Rho activity modulated Vγ9Vδ2 T cell stimulation and led to changes in the ABP-induced reduction of BTN3A1 mobility [79]. The specificity for RhoB was shown by experiments with constitutively active or dominant-negative RhoB mutants and CRISPR/Cas9-mediated knock-out cells for RhoA, -B, and -C. Only RhoB deficiency reduced ABP-mediated IFN-γ stimulation, suggesting a connection between RhoB and Vγ9Vδ2 T cell stimulation [79]. A mechanistic explanation for this could be that activated RhoB links BTN3A1 to the cytoskeleton, which would fit with the ABP treatment-correlated reduction of surface mobility observed for BTN3A1. Furthermore, RhoB binds to the recombinant BTN3A1-ID but not to B30.2 alone, suggesting the involvement of the JM in this process. The authors show by biolayer interferometry that, in a second step and as a consequence of PAg binding, BTN3A1 is released from RhoB. Accompanying ABP treatment is also a new BTN3A1-ED conformation, demonstrated by Förster resonance energy transfer (FRET) with antibodies against the BTN3A1 V domain and 4,4-difluoro-4-bora-3a,4a-diaza-s-indacene (BODIPY)-labeled cells, which showed an increased distance between the V domain and the plasma membrane, as expected for the adoption of a V-shaped conformation of the ED [79]. Although the importance of the JM for BTN3A1 action is undisputed, its conformation and conformational changes upon PAg binding are not understood. The Morita group performed an extensive molecular modeling-based analysis, complemented with mutational analysis and a comparison of BTN3 sequences of species possessing (putatively) PAg-reactive Vγ9Vδ2 T cells, and favors a coiled-coil structure of the JM of the BTN3 dimer [65]. In this model, the JM would constantly maintain a certain distance between the B30.2 domain and the cell membrane, which would demand the head-to-tail dimer of the ED suggested by Yang et al. [74].
Alternatively, small-angle X-ray scattering of recombinant BTN3A1-IDs suggests a more globular structure, which becomes more compact upon PAg binding. As in the case of the ED dimers, there is still no consensus regarding which "active" conformation is adopted after PAg binding. As discussed for the ED, it might well be that phosphoantigen sensing requires a reversibility or co-existence of conformations. In this case, all means (e.g., mutations) of fixing one conformation would negatively impact PAg-mediated activation. Another important aspect is the role of the JM as a possible interface for BTN molecules or other, not yet defined, binding partners, which may also be affected by or modify the JM conformation.

Cooperation of BTN3 Isoforms
While the mandatory role of BTN3A1 for PAg sensing was readily confirmed in all knock-down studies, the knock-down experiments of BTN3A2 and BTN3A3 led to variable results. The first clear negative effects on ABP and HMBPP responses were reported by Rhodes et al., who found a complete loss of stimulation by BTN3A1 knock-down but also clear effects of BTN3A2 and BTN3A3 knock-down [71]. These were confirmed and extended with CRISPR-Cas9 knock-out 293T cell lines by Vantourout and colleagues [72]. Various combinations of BTN3A knock-out and re-expression were tested for their capacity of Zol-induced stimulation of Vγ9Vδ2 T cell lines. Knock-out of BTN3A2 and BTN3A3 abolished the ABP-mediated stimulation of Vγ9Vδ2 T cell lines. Knock-out of BTN3A2 alone reduced it significantly, while the effects of BTN3A3 knock-out were less pronounced [72]. This is in line with our unpublished data obtained by testing HMBPP responses of γδ TCR-MOP-transduced murine reporter cells, except that in our hands BTN3A3 knock-out also exhibited a remarkable reduction in the PAg activation of reporter cells. Another notable difference concerning BTN3A3 was that Vantourout et al. found a residual response to cells expressing only BTN3A3 or to BTN3A knock-out lines reconstituted with BTN3A3. However, in our hands, the knock-out of all BTN3A isoforms (BTN3AKO) and transduction with BTN3A3 did not restore any HMBPP-mediated activation of murine TCR-MOP transductants (Karunakaran et al., unpublished data). Whether this reflects a peculiarity of the different assay systems or of responses to Zol versus HMBPP remains to be tested. Altogether, these studies confirmed the pivotal function of BTN3A1 in PAg sensing but also demonstrated a clear positive effect of the other two BTN3A proteins on BTN3A1 action, with BTN3A2 being more efficient in this respect. BTN3A1 and BTN3A2 probably interact directly, as concluded from a 1:1 stoichiometry in co-immunoprecipitation experiments, common intracellular localization, and trafficking. Both molecules were retained in the ER, although to different degrees, but the formation of BTN3A1-BTN3A2 heterodimers released them from this retention and led to an increased cell surface expression, while requiring the C domain of BTN3A2 [72]. Nevertheless, this increased surface expression does not explain the increased Zol- or HMBPP-induced stimulation, since mutations leading to increased cell surface expression do not necessarily result in increased activation [72]. Mutational analysis of an ER retention motif within the JM and of putative interaction sites between both molecules within the JM allows the conclusion that trafficking, as well as the control of ER retention, is instrumental for efficient Zol-induced stimulation.
An important technical aspect when interpreting such experiments is the diverging effects of tagging BTN3A proteins and of generating BTN3A fusion constructs on cellular trafficking and function [72]. Examples are the fusion with C-terminal EGFP, which, although leading to increased cell surface expression of BTN3A1, reduced stimulation, while small N-terminal tags (FLAG and hemagglutinin) had no effect [72]. In addition, no negative effects on stimulation were seen for fusion proteins of human BTN3A1 or alpaca BTN3 with a C-terminal mCherry protein [20]. The preferential formation of BTN3 heterodimers over BTN3A1 homodimers fits well with the findings of Gu et al., who expressed and co-purified full-length recombinant BTN3A1 and BTN3A2 molecules and isolated only BTN3A1-BTN3A2 heterodimers [73]. In line with this are our unpublished data, which also favor BTN3A heterodimers rather than BTN3A1 homodimers (Karunakaran et al., unpublished data). In summary, the data discussed so far highlight the JM as a candidate region for explaining the higher efficiency of PAg sensing by BTN3 heteromers over BTN3A1 alone, which affects intracellular trafficking and surface expression and likely contributes to yet unknown events such as the recruiting of additional molecules. Thus, the JM region of the BTN3A1 molecule is crucial for PAg sensing, but, in addition, the JMs of BTN3A2 and BTN3A3 in BTN3 heteromers enhance this capacity. Interestingly, at least one function of BTN3A1 is exclusive to BTN3A1 but without obvious links to Vγ9Vδ2 T cell stimulation [81]. BTN3A1, but not BTN3A2 or BTN3A3, controls the induction of type I interferon responses by cytosolic nucleic acids and viruses via its ID [81], whereby the intracellular domain forms a complex with TANK-binding kinase 1 (TBK1) and microtubule-associated protein 4 (MAP4). After stimulation with nucleic acids, dynein displaces MAP4 and redirects the BTN3A1-TBK1 complex within the cell to a perinuclear region. Finally, BTN3A1 mediates the interaction of TBK1 with interferon regulatory factor 3 (IRF3), leading to the phosphorylation of IRF3, its translocation into the nucleus, and the positive regulation of type I interferon production.

The Role of TCR Clonotypes in the Response to mAb 20.1 and PAg
Stimulation of human peripheral blood mononuclear cells (PBMCs) with mAb 20.1 or its sc-Fv induces a robust activation of Vγ9Vδ2 T cells or of Jurkat cells transduced with the γδ TCR-G115 [57]. Furthermore, mAb 20.1 and derivatives thereof can trigger anti-tumor responses in preclinical models of Vγ9Vδ2 T cell-mediated tumor therapy [82,83]. We also observed the activation of TCR-MOP transductants by mAb 20.1 in the presence of the human B cell lymphoma RAJI, although not to the same degree as with PAg stimulation [53]. This seems to be in conflict with the observation that Vγ9Vδ2 γδ TCR-D1C55 transgenic mouse cells were stimulated in cultures with human cells and HMBPP but not with mAb 20.1, which even inhibited the PAg response. For a better understanding of these results, we compared the activation of the different TCRs using the same reporter system (the murine reporter cells introduced above) and human RAJI cells as APCs and found that all Vγ9Vδ2 TCR-transduced reporter cells responded very similarly to HMBPP [54]. However, in contrast to TCR-MOP, the response of TCR-G115 and TCR-D1C55 transductants to mAb 20.1 was very weak or not detectable, and similar results were obtained with sc-Fv fragments of both mAbs.
Other key findings were that TCRs containing γ- and δ-chains of TCR-MOP and TCR-D1C55 induced some reduction in the PAg response and a strong reduction in the mAb 20.1 response of reporter cells. The residual response could be largely attributed to the TCR-MOP γ-chain. Titration experiments with the different TCR transductants and different concentrations of PAg and mAb 20.1 as well as of 20.1 sc-Fv showed that both mAb 20.1 and 20.1 sc-Fv inhibited the PAg response. In the presence of saturating concentrations of mAb 20.1, PAg-induced stimulation was never below that induced by the mAb alone. To explain in brief, stimulation with a saturating concentration of mAb 20.1 alone led to the production of 100 ng/mL IL-2 by one specific TCR transductant cell line. In this case, mAb 20.1 reduced the stimulation by a high PAg concentration from 1000 to 100 ng/mL (90% inhibition) and, in the presence of a low PAg concentration, from 200 to 100 ng/mL (50% inhibition). This explains why TCR-D1C55, which hardly induced a mAb 20.1 response, showed a nearly complete inhibition of the PAg response [54,66], while the 20.1 mAb-responsive TCR-MOP-transduced cell line showed a significant residual PAg-independent response [54]. Finally, we also tested the mAb response of transduced Jurkat cells and could reproduce the clonotypic differences for 20.1 but not for HMBPP- or Zol-induced CD69 activation. However, differences were much less pronounced, which may reflect a lower activation threshold either of this reporter cell type or of the assay [54]. In the future, it will be interesting to study how much the clonotypic differences affect the response of primary cells. A mechanistic explanation for the differential activity of mAb 20.1 is still missing, but we suggest two mutually non-exclusive possibilities. The first one would be a function of mAb 20.1 as a partial agonist, which would "freeze" the activating stage at a certain point and by this prevent complete activation, while the other one would be that the 20.1 binding surface formed by the BTN3A V domain partially overlaps with the site relevant for BTN3A action in PAg reactivity [54], as will be discussed later.

Vγ9Vδ2 TCRs and BTN3s in Different Species
Species comparison can open new perspectives for understanding the minimal molecular requirements of body functions, e.g., the detection and elimination of pathogens, by comparing molecules among different species. In the case of PAg-specific T cells, we aimed to identify those cells in non-primate species, hoping to find minimal molecular signatures of the PAg sensing system [84-86]. To this end, genomes of species originally sequenced as part of the 24-mammalian genome project and other accessible genomes were tested for genes encoding the triad of Vγ9 (TRGV9), Vδ2 (TRDV2), and the ED of BTN3. Apart from Old and New World monkeys, which were already known to respond to PAg similarly to humans, only a small number of species from very different phylogenetic groups carried all three genes, and these were tested for open reading frames (ORFs). In some of these species, such as cows and horses, those genes contained single stop codons or substitutions, which are likely to result in the loss of function, e.g., by disruption of the Ig domain disulfide bridge. In other species, no homologues of TRGV9, TRDV2, and BTN3-ED genes were found [84-86]. This is consistent with the lack of reports on PAg-reactive cells in rodents and our own failed attempts to find such responses in mice and rats [84,85].
The BTN3-ID genes were not assessed at this stage due to a high number of B30.2-containing genes and possible homologues. The non-primate species that were finally found to contain functional ORFs of the PAg-sensing triad of TRGV9, TRDV2, and BTN3 genes were camelids and whales. Important for the phylogeny of Vγ9Vδ2 T cells was the conservation of TRGV9-, TRDV2-, and BTN3-like genes in armadillo and sloth. Both belong to the order of Xenarthra, which is part of the Eutheria magnaorder Atlantogenata, while other species possessing TRGV9-, TRDV2-, and BTN3-like genes belong to the Boreoeutheria, which is the other magnaorder of Eutheria. This implies that the common ancestor of Eutheria and of all placental mammals possessed those genes. An argument for a functional relationship and probable co-evolution is that nearly all species with an ORF for BTN3 also possess ORFs of TRGV9- and TRDV2-like genes [84-86].

Armadillo: A Witness of Vγ9Vδ2 T Cell Evolution
The analysis of the armadillo genome revealed ORFs for TRGV9, TRDV2, and BTN3-ED, which was of interest not only for phylogenetic reasons. First of all, armadillos serve as a natural reservoir and model organism for infections with Mycobacterium leprae [87], and secondly, human Vγ9Vδ2 T cells have been implicated in the defense against mycobacterial diseases [88]. In addition, it was reported that M. leprae-infected animals showed an increased frequency of circulating lymphocytes stained by an antibody specific for human TCR δ-chains [89]. To our disappointment, the analysis of cDNA from armadillo PBMCs revealed transcripts for neither BTN3 nor TRGV9, although fragments of the genes could be amplified from genomic DNA and analysis of genomic sequences gave no apparent reasons for a loss of functionality [90]. A closer inspection of the armadillo genome identified three BTN3 genes. For none of them were exons for a leader peptide found, while all possessed exons for the V and C domains, although these were not fully translatable. Exon 4, which encodes the connecting peptide between the C domain and TM in human BTN3A and vpBTN3, was found in two BTN3-like genes. Exons encoding sequences of the JM were mostly lost, but the respective introns were identified by homology to human introns. The most complete BTN3 gene contained an exon for B30.2 but with an internal stop codon. Nevertheless, codons for amino acids that form the PAg binding site of the huBTN3A1 and vpBTN3 B30.2 domains were conserved. Consequently, the common ancestor of placental mammals probably possessed BTN3(s) with PAg-binding properties [90]. Interestingly, armadillo TCR δ-chains with TRDV2J4 gene rearrangements could be cloned, expressed, and paired with a human TRGV9JP-containing TCR γ-chain. This was demonstrated by the cell surface staining of mouse CD3ε and TRGV9 after co-transduction of a mouse T cell hybridoma with genes for armadillo TRDV2-containing TCR δ-chains and a human Vγ9 TCR chain. These cells produced IL-2 after stimulation with immobilized anti-mouse CD3 and anti-Vγ9 mAb but not in co-cultures with HMBPP and human APCs [90]. In contrast, the dolphin genome possesses two BTN3-like genes: one non-functional gene covering the BTN3-ED exon and one full-length BTN3 ORF, which showed the highest aa identity with human BTN3A3 and vpBTN3 and also a full conservation of the amino acids of the PAg binding site [90]. This, together with the detection of in-frame TRGV9JP rearrangements and TRDV2 genes, renders the bottlenose dolphin a prime candidate for PAg-reactive Vγ9Vδ2 T cells [91].
Alpaca: The First Non-Primate Species with PAg-Reactive Cells
Among the non-primate candidate species for PAg-reactive Vγ9Vδ2 T cells, alpaca was the only species accessible to us. Analysis of PBMCs confirmed the genomic sequences in the database and the expression of the ORFs. Comparable to the armadillo, nearly all TRDV2 sequences were rearranged with TRDJ4 gene segments, whereas, in humans, TRDV2 rearrangements are dominated by TRDJ1 gene segments, and TRDJ4 gene segments are rare [92,93]. Nearly all TRGV9 were rearranged with TRGJP, and the frequency of productive in-frame rearrangements was similar to that in humans. Three, probably allelic, TRGJP sequences (JPA, JPB, JPC) were identified. Interestingly, vpBTN3 occurs as a singleton and has higher sequence similarity to BTN3A3 than to BTN3A1 but complete identity in the amino acids defining the PAg binding pocket of BTN3A1 [85,86]. ITC studies of recombinant wild-type and mutant BTN3 B30.2 domains showed very similar binding characteristics of HMBPP and IPP for huBTN3A1 and vpBTN3 [20]. To test for a BTN3-dependent PAg response of alpaca Vγ9Vδ2 T cells [20], we generated monoclonal antibodies against alpaca Vγ9Vδ2 TCRs and vpBTN3 using transduced mouse cells as immunogens. Two monoclonal antibodies were characterized in greater detail: the TCR-specific mAb WTH-4 and the BTN3-specific mAb WTH-5. WTH-4 bound to transductants expressing alpaca Vγ9Vδ2 TCRs and to TCRs composed of human Vγ9 and alpaca Vδ2 TCR chains but not to human Vγ9Vδ2 TCRs, suggesting the localization of the epitope on the alpaca Vδ2 chain. Furthermore, WTH-4 was specific for transductants expressing alpaca Vγ9Vδ2 TCRs with TRDV2J4 but not TRDV2J2 rearrangements. Frequencies of WTH-4-positive cells ranged from 0.2% to 1.2% of CD3-positive lymphocytes among individual animals but also sometimes varied between time points of blood sampling [20]. The mAb WTH-5 stained vpBTN3-transduced murine and human cell lines. Given the high similarity of alpaca and human BTN3-EDs, both BTN3s were compared for binding of the mAbs originally raised against vpBTN3 (WTH-5) and BTN3A1 (20.1 and 103.2) by staining transduced CHO cells and primary blood cells. WTH-5 stained vpBTN3-transduced CHO cells very efficiently and showed some degree of cross-reactivity to CHO or 293T cells transduced with human BTN3A1 at high antibody concentrations. mAb 103.2 showed a converse staining pattern: good staining of human BTN3A1 transductants and poor staining of vpBTN3-expressing cell lines. When tested on PBMCs, mAb 103.2 stained only human cells and mAb WTH-5 stained only alpaca cells. Phycoerythrin (PE)-labeled mAb 20.1 was weakly cross-reactive for vpBTN3-transduced CHO cells, but when tested on PBMCs, binding to alpaca lymphocytes and monocytes reached more than half of the intensity (geometric mean fluorescence intensity) of the respective human cells. Surprising was also the staining pattern of mAb 103.2 on human PBMCs. The mAb stained human lymphocytes well, but the epitope was barely detectable on monocytes, although PE-labeled mAb 20.1 bound to both cell types similarly. Given that mAb 103.2 efficiently antagonizes PAg stimulation, this observation warrants further analysis and might indicate a cell type-specific conformation or modification of the molecule hiding the 103.2 epitope [20]. The new monoclonal antibodies allowed us to directly trace HMBPP reactivity in primary cell cultures.
WTH-4-positive cells expanded in a dose-dependent fashion in cultures with HMBPP and IL-2, and this expansion was blocked by the BTN3-specific mAb WTH-5. Single-cell PCR of HMBPP cultures showed that essentially all WTH-4-positive cells expressed TRDV2J4 rearrangements. Analysis of cloned RT-PCR products and sequencing of clones with TRV- or TRC-specific primers revealed no specific sequence motifs for CDR3s of PAg-responsive cells but a homogenization of TRGV9JP CDR3 lengths, similar to human PAg-reactive Vγ9Vδ2 T cells. Notably, a dominance of single clones with a TRDV2J2 rearrangement among HMBPP-stimulated WTH-4-negative cells was observed. To directly prove the PAg reactivity of Vγ9Vδ2 TCRs, TCR sequences were cloned and transduced into murine reporter cells (described above). TCRs cloned from single WTH-4-positive cells had TRGV9JP and TRDV2J4 rearrangements, while WTH-4-negative HMBPP-expanded cells possessed TRGV9JP and TRDV2J2 sequences. These TCR transductants were compared for their response to PAg and mAb 20.1 with reporter cells expressing human TCRs (TCR-MOP) and TCRs containing γ- and δ-chains cloned from total alpaca PBMCs. TCR-MOP-expressing cells responded well to HMBPP and mAb 20.1 in the presence of 293T cells but not in the presence of 293T cells in which all three BTN3A genes had been inactivated by CRISPR-Cas9-induced mutations (BTN3KO 293T cells) and which were transduced with vpBTN3. The alpaca TCRs (vpTCRs) cloned from HMBPP-expanded WTH-4-positive and -negative single cells induced a reporter cell response to HMBPP presented by 293T or vpBTN3-expressing BTN3KO 293T cells. Previously studied TCRs, containing randomly cloned sequences of Vγ9 and Vδ2 chains of unstimulated PBMCs [84], resulted in no PAg response at all, emphasizing the importance of the right combinations of Vγ9 and Vδ2 TCR chains for an efficient PAg response. Despite the reactivity of the alpaca Vγ9Vδ2 TCR-transduced cells to PAg presented by wild-type 293T cells, stimulation by mAb 20.1 was not observed, and this was also not the case in primary alpaca cell cultures. Since culture conditions of primary cells were not optimal (alpaca cells died after 6-7 days of culture), and since mAb 20.1 responses largely depend on the CDR3s of the Vγ9Vδ2 TCR [94], it is so far not possible to state whether this unresponsiveness is a feature of alpaca Vγ9Vδ2 TCRs in general or of the tested clonotypes [20]. The differential specificity of the alpaca Vγ9Vδ2 TCR transductants, which recognize PAg in the context of alpaca as well as human BTN3, and of the human TCR-MOP transductant, which responds exclusively to cells expressing human BTN3, will give an opportunity to identify BTN3 and TCR regions controlling PAg-mediated activation by comparing inter-species chimeras or single amino acid mutants and to identify aa sequences that are crucial for activation [20].

Alpaca BTN3: An All-in-One Solution
In contrast to humans, alpaca possesses a BTN cluster with single copies of BTN1, BTN2, and BTN3. The singleton nature of vpBTN3 implies that the capacity of PAg sensing and γδ T cell activation is merged in vpBTN3, while humans require the cooperation of two to three BTN3As.
To identify which parts of the BTN3 molecules "help" during PAg sensing, chimeras of human BTN3A1 and alpaca BTN3 were expressed in BTN3KO 293T cells (293T cells with CRISPR-Cas9-inactivated BTN3A1, A2 and A3 genes) or BTN3A1 KO 293T cells (293T cells with the CRISPR-Cas9-inactivated BTN3A1 gene only) and tested for PAg-dependent activation of human and alpaca TCR transductants, drawing a complex picture of synergism and interference of the different BTN3 molecules. At first, we confirmed that the expression of BTN3A1 in BTN3KO cells rescued PAg reactivity only poorly, while the response of human TCRs to BTN3A1 KO cells transduced with BTN3A1 was fully reconstituted. Despite a massive overexpression of BTN3A1 compared to the endogenous expression in wild-type 293T cells, PAg stimulation and the degree of Vγ9Vδ2 T cell activation were the same. The transduction of BTN3KO cells with alpaca BTN3 induced a PAg response of vpTCR (alpaca TCR) but not of huTCR (human TCR-MOP) transductants, indicating some species specificity of the TCR-BTN3 interaction. Surprisingly, BTN3A1 KO cells transduced with vpBTN3 stimulated neither human nor alpaca TCR-transduced cells. This may indicate an interference of human BTN3A2 and BTN3A3 molecules with molecules mandatory for effective PAg sensing, e.g., human BTN2A1, whose function will be discussed in the next paragraph. BTN3KO as well as BTN3A1 KO cells transduced with a human BTN3A1-ED/vpBTN3-TM/ID chimera behaved very similarly and stimulated human and alpaca TCR transductants even better than wild-type 293T cells, suggesting that the vpBTN3-TM/ID substitutes for the "help" by BTN3A2/3 and that the presence of endogenous BTN3A2/BTN3A3 had no effect. Quite dramatic differences were seen for chimeras of the alpaca BTN3-ED and human BTN3A1-TM/ID. If transduced into BTN3KO cells, they stimulated neither human nor alpaca TCR-expressing reporter cells, but after expression in BTN3A1 KO cells, they activated human as well as alpaca TCR transductants. Our interpretation is that PAg binding to the human ID of this chimera does not lead to the changes required to induce the "activating" conformation of the alpaca BTN3-ED, but this capacity somehow translates to BTN3A2 and/or BTN3A3, which can then be sensed by either the human or the alpaca TCR [20]. As shown in Figure 3, the amino acid sequence comparison of vpBTN3 with human BTN3A1, A2 or A3 revealed a very similar degree of identical amino acids for the three molecules (without the leader sequence). This similarity included the V domain, the C domain, and the B30.2 of BTN3A1 and BTN3A3, respectively. Only for the vpBTN3-TM was the similarity to BTN3A1 (88%) clearly higher than to BTN3A2 and BTN3A3 (both 76%), while the respective percentage of identity for the JM domain was 64% for BTN3A3 and 49% for BTN3A1. We hypothesize that the BTN3A3-like nature of the alpaca JM domain enables efficient PAg sensing by a single molecule.

BTN2A1, a New Player in the Game: Identification of BTN2A1 as a Prerequisite of PAg Recognition
BTN3A1 is a key component in PAg sensing and γδ T cell stimulation. Nevertheless, Harly et al. already discussed that ABP-pulsed BTN3A1-transduced mouse cells failed to stimulate Vγ9Vδ2 T cells [53]. However, the interpretation of such negative results is difficult, since they might reflect reduced co-stimulatory signals or cell-cell adhesion as a consequence of the species barrier between stimulating and responding cells.
The use of the murine reporter system described above allowed an approximation by demonstrating that the APC-reporter cell interaction was at least sufficient to induce another type of TCR-mediated response, namely the induction of a peptide-specific response via an αβ TCR [53]. More precise was the demonstration of a mAb 20.1-induced response by TCR-MOP transductants to rodent APCs (mouse or hamster cells) transduced with BTN3A1 and CD80, while no activation by PAg could be observed [95]. Importantly, CD80-transduced hamster cells containing human chromosome 6 (CHO-Chr:6) stimulated reporter cells in the presence of HMBPP or Zol. Furthermore, after Zol pulsing, the same CHO-Chr:6 cells induced CD69 upregulation of human peripheral blood Vγ9Vδ2 T cells, which was increased by BTN3A1 expression, while BTN3A1-transduced CHO cells without Chr:6 had no effect. This allowed the interpretation that not only BTN3A1 but also other genes on Chr:6 were necessary for PAg-induced Vγ9Vδ2 T cell activation [95]. Obvious candidates were BTN3A2 and BTN3A3, but their transfer to CHO cells was not sufficient to induce a PAg-specific response, albeit higher "background" stimulation was found (Paletta, Fichtner, Karunakaran, Herrmann, unpublished data). An interesting aspect is that PAg stimulation required only minimal expression of BTN3A, which could be below the detection limit of mAb 103.2 in flow cytometry, while stimulation by mAb 20.1 correlated with the intensity of BTN3A expression [96]. In order to identify the additional gene(s) controlling the species-specific PAg-mediated activation, we generated radiation hybrids (RH) and tested them for the phenotype "PAg-dependent stimulation", as we did for the analysis of the CHO-Chr:6 monosomal line [84,95]. Radiation hybrids are somatic cell hybrids of irradiated (human) cells and a hypoxanthine-aminopterin-thymidine (HAT)-sensitive (rodent) cell line [97,98]. The chromosomes of the donor genome are fragmented by the irradiation, and parts of the fragmented genome become integrated into the recipient genome in the hybrid cell (e.g., parts of human Chr:6 in CHO-Chr:6 cells). The proportion of integrated genome ranges from 10% to 50%, and the chromosomal fragment size decreases with increasing irradiation dose. We generated RHs with BTN3A1 + CD80-transduced HAT-sensitive hamster or mouse cells as fusion partners for irradiated CHO-Chr:6 cells. This allowed testing the BTN3A1-dependent stimulatory capacity of the RHs with mAb 20.1. The transduction of BTN3A1 also prevented the "rediscovery" of BTN3A1 as a mandatory gene for PAg stimulation and increased the chance of obtaining stimulatory RHs. As described in the corresponding study [68], we characterized six stimulatory RHs in greater detail. Their genomic content was deduced from comparing their transcriptomes to those of both fusion partners, with CHO-Chr:6 as positive control and the rodent fusion partner as negative control. This comparison identified a candidate region of 580 kb on Chr:6, which contained a high number of histone genes, tRNA genes, and other non-membrane protein genes. The only membrane protein-encoding genes were those of the BTN cluster and the highly conserved MHC class I-like iron transporter HFE. HFE and all BTN genes, with the exception of BTN1A1, were expressed by all stimulatory RHs. Since BTN3As alone do not reconstitute the PAg response, BTN2A1 and BTN2A2 were prime candidates for the missing Chr:6-encoded gene(s) contributing to the PAg presentation phenotype.
The functional deletion of BTN2A1 and BTN2A2 in 293T cells, as well as of BTN2A1 alone, abolished the capacity of Zol-pulsed cells to stimulate human γδ T cells and reporter cells in a Zol-, HMBPP-, or mAb 20.1-dependent manner. The knock-out of BTN2A2 had no such effect [68]. No negative effects of the loss of BTN2A2 on ABP-induced stimulation were observed in BTN3A1 + BTN3A2-reconstituted BTN3A knock-out cells, which lack the BTN2A2 gene as a consequence of the knock-out strategy [72]. Independently, Rigau et al. identified BTN2A1 as a mandatory component of PAg presentation by screening cells transfected with an shRNA library for the reduction of Vγ9Vδ2 TCR tetramer binding and identified signal peptide peptidase-like 3 (SPPL3) and BTN2A1 as prime candidates. Knock-out of BTN2A1 in different human cells massively reduced the binding of TCR tetramers, and newly generated BTN2-specific mAbs inhibited tetramer binding as well as stimulation with ABP [19].

BTN2A1: Interaction with TCR and BTN3A1
Both groups found that rodent cells transduced with BTN2A1 and BTN3A1 reconstituted the PAg response and TCR tetramer binding. Importantly, TCR tetramer binding relied only on BTN2A1 expression, while BTN3A1 showed no direct TCR interaction. The treatment of BTN3A1 + BTN2A1 transductants with ABP did not increase TCR tetramer binding, suggesting that BTN2A1 binds the TCR autonomously, while both BTNs were required for the induction of a PAg-dependent Vγ9Vδ2 T cell response [19,68]. The direct binding of BTN2A1 to the Vγ9 TCR was demonstrated by surface plasmon resonance studies. Similar KDs of about 50 µM were demonstrated and were independent of the CDR3s of Vγ9 and of the paired δ-chain. Molecular modeling and mutagenesis studies mapped the surface formed by the C-F-G strands of BTN2A1 (C-F-G surface) as the contact region of the TCR. Interestingly, this area was previously mapped by Willcox et al. for the binding of BTNL3 to the human Vγ4 chain [18]. An important contribution of the germline-encoded HV4 of Vγ9 was found by testing the binding in surface plasmon resonance studies and in functional assays for PAg responses [19,68]. Yet, the interaction of BTN2A2 and the TCR remains unclear. In surface plasmon resonance studies, the affinity of recombinant BTN2A2 for the Vγ9Vδ2 TCR was even higher than that of BTN2A1, while TCR tetramers showed nearly no binding to BTN2A2 transductants. In part, this difference may be explained by the different degrees of cell surface expression in these experiments [68]. Another observation of unclear significance was a disulfide bridge, formed by a cysteine at position 247 that is unique to BTN2A1, which stabilized BTN2A1 as a homodimer. All other BTN and BTNL molecules carry a tryptophan at this position, and a C247W substitution of BTN2A1 had no negative effect on PAg stimulation or TCR tetramer binding. The interaction of the Vγ9-HV4 with the C-F-G surface was confirmed by mutational analysis and tested for effects on PAg or ABP stimulation. A mutation of a negatively charged glutamic acid to an alanine in the Vγ9-HV4 (E70A) reduced the PAg response of TCR-MOP transductants [68] and the Zol-induced CD69 expression of TCR-transduced Jurkat cells [19]. Furthermore, mutations of this glutamic acid to the positively charged amino acids arginine (E70R) or lysine (E70K) largely or completely abolished the PAg-induced stimulation, respectively [68].
Interestingly, the extent of reduction in TCR binding, as measured by the binding of BTN2A1 tetramers to TCR mutant cells, was much more dramatic than the reduction of TCR-mediated IL-2 production in response to HMBPP or of Zol-induced CD69 expression. Importantly, mutating the Vγ9 CDR3 and the Vδ2 CDR2 and CDR3 had no effect on BTN2A1 binding, but stimulation by Zol or HMBPP was lost [19,68]. BTN2A1 interaction with BTN3A1 was also demonstrated by confocal microscopy and FRET [19], by immunoprecipitation after cross-linking cells with a water-soluble membrane-impermeable chemical crosslinker, and by NMR studies [68]. Similar to the BTN2A1-TCR interaction, an interaction between the V domains of both molecules via their C-F-G surfaces could be predicted from NMR studies [68]. Such a C-F-G-interface interaction between V domains of members of the Ig superfamily, and explicitly of the B7 family, is not uncommon, as shown for the cis-interaction of PD-L1 (CD274) and B7.1 (CD80) [99]. Importantly, mutations of BTN3A1 and BTN3A2 in this area reduce PAg stimulation, a result that was considered evidence for a potential TCR-binding site but may now warrant new interpretation, since the BTN3A-BTN2A1 interaction might also be affected. The same may apply to the inhibition of PAg responses by mAb 20.1, which binds to the surface formed by the C-C´-C´´ strands and may also affect the neighboring C-F-G surface (Figure 3) [18,68]. It is conceivable that mAb 20.1 binding interferes with BTN2A1-BTN3A1 interactions and consequently with Vγ9Vδ2 T cell responses, but also that a conformational change as a consequence of mAb 20.1 binding might promote TCR binding and the activation of certain Vγ9Vδ2 TCR clonotypes while interfering with a stimulatory state after PAg binding to the BTN3-ID. In line with this idea, single-chain variable fragments (sc-Fv) of the non-competing mAb 103.2 show no inhibitory effects [54], despite a high affinity for recombinant BTN3A1, A2 and A3 (KD 8-15 nM). Nonetheless, the whole 103.2 antibody is a very efficient inhibitor [57], perhaps by fixing a conformation of the BTN3A molecules that prevents interaction with the TCR, BTN2A1, or other not yet defined molecules.

A Composite Ligand Model of PAg Recognition
When discussing binding partners of the TCR, it is of interest that BTN2A1- as well as BTN2A1 + BTN3A1-transduced murine or hamster cells showed a robust and statistically significant background stimulation of 1-10% of the maximum response to HMBPP. This was completely abolished by mutations of the Vγ9-HV4 but also by mutations/deletions in the CDR2δ and CDR3δ [68]. Although these regions of the TCR δ-chain were not involved in BTN2A1 binding [19,20], both are well established to contribute to the HMBPP or Zol response [100]. We suggest that these TCR regions may interact or bind to other proteins beyond BTN2A1, which are required for full activation of the Vγ9Vδ2 T cell (Figure 6), and hypothesize a composite ligand model of PAg recognition.

Figure 6 (legend, partial): This interaction might provide tonic signals necessary for the surveillance and survival of T cells without the involvement of the Vδ2 chain. Simultaneously, BTN3A tends to remain associated with BTN2A members on the APC via their V domains, with an unlikely interaction of BTN3A1 with the TCR. (b) Action of PAg: In the presence of stress or PAgs, a PAg-dependent activation complex would be assembled on the APC. This complex could be inclusive of the mandatory BTN2A1 and BTN3A1 members and other unidentified putative membrane proteins or ligands presumably recruited by BTN2/BTN3 members as a consequence of PAg interaction with the B30.2 domain of BTN3A1. Such an activation complex interacts with the Vγ9Vδ2 TCR, which is certainly dependent on all CDRs of the Vγ9 and Vδ2 chains (right). Picture modified after [68].

Such additional binding would be sterically possible and may also contribute to PAg sensing or, in certain cases, activate Vγ9Vδ2 T cells in a PAg-independent manner. Some of these ligands may have an affinity for the Vγ9Vδ2 TCR that is too low to trigger a response on their own and may need to associate with BTN2 or BTN2-BTN3 complexes to be sensed by the Vγ9Vδ2 TCR. Others may have a higher intrinsic affinity for specific TCR clonotypes and may be able to interact with the TCR even in the absence of PAg-induced cellular changes [68], but in some cases may still require "help" by the superantigen-like binding of BTN2A1 [21]. In this case, the CDR3s of the TCR γ- and δ-chains would be of importance to generate a potent PAg response but also for the reaction to protein antigens. Such additional ligands could explain the differential PAg responses to certain tumor cells as reported by the Kuball group [101]. They could also explain the reactivity of the Vγ9Vδ2 TCR-G115 to mitochondrial F1-ATPase, which is ectopically expressed on the cell surface of certain tumors and for which binding to TCR-G115 has been reported [36]. In this case, TCR-G115 could have a weak intrinsic reactivity to F1-ATPase, which might be enhanced by endogenous PAg and BTN2A1 and/or BTN3A1. Under these circumstances, BTN3A1 would not necessarily need to be a TCR-binding molecule but could act as a chaperone positioning the additional protein ligand [102].

BTN3A1-Expressing Rodent Cells as Mediators of mAb 20.1-Induced Vγ9Vδ2 T Cell Activation
An open question is why mAb 20.1 induces an activation of Vγ9Vδ2 TCR transductants by BTN3A1-transduced rodent cells, which apparently lack BTN2A1 [54,68]. The easiest explanation would be that rodent BTN2A2 can to some extent replace human BTN2A1 in mAb 20.1-dependent stimulations. This is not unlikely, since both molecules show a high degree of similarity in their ED V domains but differ in their TM and ID, which may explain why BTN2A2 cannot support the PAg-sensing mechanism of BTN3A1 in the same manner as human BTN2A1. These and other open questions on the functionality of BTNs could be addressed by the analysis of BTN2A2-deficient rodent cells, BTN2 mutants, and chimeras. The importance of the BTN2A1-ID for PAg sensing has already been highlighted by the work of Rigau et al., who demonstrated that deletion of the BTN2A1-ID leads to a loss of PAg- and mAb 20.1-induced activation but not of TCR tetramer binding [19]. The transduction of rodent cells (mouse and hamster) with BTN2A1 and BTN3A1 allows efficient PAg-dependent activation. The additional transduction of BTN3A2 had no effect in the case of mouse 3T3 cells and only modest effects in the case of transduced hamster cells, especially if compared to the dramatic difference seen between BTN3A1- or BTN3A1 + BTN3A2-reconstituted BTN3KO 293T cells [19].
One possible explanation for the lack of demand for BTN3A cooperation in rodents could be that in human cells, BTN3A1 as a mediator of PAg sensing might be inhibited by an additional factor that provides strict control over Vγ9Vδ2 T cell stimulation and which might be functionally neutralized by BTN3A2 or BTN3A3. In rodents, which lack the PAg-sensing triad (BTN3, TRGV9 and TRDV2), such tight control may be superfluous.

BTN3A-TCR Interaction: Yes, No or Yes and No?
BTN3A1 is without any doubt a key player in PAg sensing and Vγ9Vδ2 T cell activation; yet, its interaction with the TCR is still an open question. Attempts of several groups to measure the binding of the Vγ9Vδ2 TCR to BTN3A1 by ITC [67], NMR [67], or TCR tetramers [68] were without success, although other weak interactions, such as that of the BTN3A1-ID with PAg [67], of the TCR with BTN2A1 [68], or of BTN2A1 with BTN3A1 [68], were detected. The only study reporting evidence of binding of the recombinant BTN3A1-ED to the TCR and HMBPP has been heavily disputed [66]. Part of the evidence for a BTN3A1-TCR interaction is a crystal structure of the BTN3A1 V domain with HMBPP and IPP generated by the De Libero group [66]. These data have been criticized as a misinterpretation of electron-dense material in the "binding groove", which in fact may be polyethylene glycol used for crystal generation [67]. There are also other experiments suggesting the binding of PAg to recombinant BTN3A1 V domains, but whether this binding is specific and directly related to PAg-mediated stimulation is difficult to judge without crystallographic data and a reproduction of some functional data on the activation of Vγ9Vδ2 TCR-transgenic cells by immobilized BTN3A1. The same paper describes BTN3A1-TCR interactions by surface plasmon resonance analysis of Vγ9Vδ2 TCR dextramers and immobilized BTN3A1-ED as well as by surface-enhanced Raman scattering. Both approaches suggest a very low affinity for Vγ9Vδ2 TCRs, while no binding was found for other γδ or αβ TCRs. The presence of IPP increased this interaction by reducing the KD three-fold [66]. These data of the Rossjohn laboratory may indicate a very low "intrinsic" affinity of BTN3A1 to the TCR, which could still contribute to Vγ9Vδ2 T cell activation [66]. Therefore, it would be of considerable interest to test whether the mutations of the BTN3A1/2 ED, which are reported to affect BTN3A1/2-mediated PAg stimulation, also affect this weak BTN3-TCR interaction. Nevertheless, all published experiments of other groups speak against BTN3A molecules as antigen-presenting molecules. Despite everything, even if, as we hypothesize, BTN3 might serve as a chaperone that brings another ligand (e.g., evolutionarily highly conserved proteins) to the cell surface or positions it to the TCR, this would not exclude a direct binding of BTN3A to the Vγ9Vδ2 TCR in a certain conformation or in complex with other molecules.

Future Directions
Despite the considerable efforts made to solve the puzzle of PAg-mediated Vγ9Vδ2 T cell activation, which led to the identification of compounds involved in this process and the detailed analysis of the structure-activity relationship of phosphoantigens and BTN3s, several basic questions are not yet answered. This is true for the BTN3-TCR interaction, the identification of the "active" form of BTN3A1, and the mechanism underlying the partial mimicking of PAg by agonistic BTN3-specific antibodies.
For some of those open questions, experimental approaches using biochemical and biophysical methods are likely to provide clear answers, such as for the structural basis of the interaction between BTN2A and the TCR or between the extracellular domains of BTN3 and BTN2. Other problems, such as the identification of the hypothesized CDR3- and TCR δ-chain-binding ligand, will be more difficult to answer. However, species comparison studies can provide new perspectives, as demonstrated by the analysis of alpaca BTN3 as a PAg sensor or the identification of BTN2A1 as a new player through methods exploiting species differences, as in the case of radiation hybrids. As discussed in this review, species differences can be used for further molecular analysis, e.g., by analyzing chimeric molecules, as a guide for mutational analysis, and for identifying features of BTN2 molecules that might be conserved in species with functional TRGV9 genes. Mice lack TRGV9 genes [85] and have no functional BTN2A1 ortholog but a functional Btn2a2 gene [103]. For the Btn2a2−/− mouse, enhanced CD4 and CD8 T cell responses were reported and attributed to APCs rather than to T cells or non-hematopoietic cells [104]. γδ T cells were not specifically addressed in this study, but mouse and human BTNs have been shown to modulate the immune response in in vitro and in vivo models [58,103,105]. In conclusion, BTN family members control and support γδ T cell development and activation on the one hand and modulate immune responses in a more general manner on the other. It will be interesting and feasible to dissect these dual functions at the molecular level, and species comparisons might be helpful for this purpose. Another important task is establishing mouse models for Vγ9Vδ2 T cells. A Vγ9Vδ2 TCR transgenic mouse has been described by the De Libero group [66], but this model organism seems to have a block in thymic development, which can be overcome by the application of anti-CD3 antibodies that may mimic positively selecting ligands [106]. Therefore, it is tempting to speculate that BTN3As and/or BTN2A1, either on their own or together with highly conserved proteins, might serve as such ligands and also maintain the post-thymic expansion of Vγ9Vδ2 T cells. This would be similar to the role of BTN heterodimers such as BTNL3/8 and BTNL1/6 in the maintenance and expansion of subsets of human or mouse intestinal γδ T cells [15]. The generation of mice transgenic for pairs of human BTNs and the respective BTN-binding TCRs might pave the way to study the development and physiological function not only of PAg-reactive but also of other human γδ T cells in a small animal model and to test their potential in combatting tumors and infectious diseases. In addition, those mice could be used as a preclinical model for γδ T cell-activating or -modulating agents such as BTN-specific monoclonal antibodies.

Conflicts of Interest: The authors declare no conflict of interest.
Influence of formative assessment on summative assessment in undergraduate medical students

Dr. Nazma Begum 1, Prof. Sakhawat Hossain 2, Prof. Dr. Md. Humayun Kabir Talukder 3

1 Assistant Professor (Paediatrics), Shaheed Suhrawardy Medical College and Hospital, Dhaka. 2 Professor and Head, Dept of Neuromedicine, Sir Salimullah Medical College and Mitford Hospital, Dhaka. 3 Professor, Curriculum Development & Evaluation, Center for Medical Education, Dhaka. Address of correspondence: Dr. Nazma Begum, Asst. Prof. (Paediatrics), Shaheed Suhrawardy Medical College and Hospital, Dhaka. E-mail: nazmabegum29@ymail.com Cell: 01819242374

Bangladesh Journal of Medical Education 2013;4(1):16-19. © 2013 Begum et al., publisher and licensee Association for Medical Education. This is an Open Access article which permits unrestricted non-commercial use, provided the original work is properly cited.

This cross sectional descriptive study was carried out to determine the students' view about the influence of formative assessment on summative assessment. The study was carried out from July 2009 to June 2010 over 300 intern doctors of the Medicine and Paediatrics departments of two government and two private medical colleges. Data were collected through a self-administered questionnaire. The questionnaire included different opinions about the influence of formative assessment on summative assessment, rated using the 5-point Likert scale. This study revealed that feedback from formative assessment to the students is important to supplement and modify teaching by the teachers. Students' fear of summative assessment is reduced by formative assessment. Written tests, VIVA/SOE and OSCE/OSPE of formative assessment greatly improve the results of summative assessment. Students opined that, to improve formative assessment, the number of teachers should be increased, teachers should be trained up, teachers should give more time to the students, and optimum feedback should be provided to the students. The frequency of formative assessment should remain as it is. Twenty to twenty five percent marks from formative assessment should be added to the summative assessment.

Introduction
Assessment is important in all forms of learning. Michael Scriven coined the terms formative and summative evaluation and emphasized their differences both in terms of the goals of the information they seek and how the information is used [1]. Bloom B just a year later made formative assessment a keystone of learning for Mastery [2]. He, along with Thomas Hastings and George Madaus, produced the Handbook of Formative and Summative Evaluation and showed how formative assessments could be linked to instructional units in a variety of content areas [3]. According to Kellough and Kellough, "Teaching and learning are reciprocal processes that depend on and affect one another. Thus, the assessment component deals with how well the students are learning and how well the teacher is teaching" [4]. Formative assessment is the assessment that takes place during a course or programme of study, as an integral part of the learning process. It is often informal. Summative assessment is normally carried out at or towards the end of a course; it is always a formal process, and it is used to see if learners have acquired the skills, knowledge, behavior or understanding that the course set out to provide them with. It gives an overall picture of performance. Formative assessment is the assessment for learning and summative assessment is the assessment of learning. The MBBS curriculum 2002 in Bangladesh has introduced MCQ, SOE and OSPE/OSCE based assessment. This curriculum includes formative assessment, that is, item test, card final, term final, block posting etc., and summative examinations, that is, the final professional examination. Provision for adding 10% marks from formative assessment to summative assessment has been introduced [3]. So, it was high time to evaluate the actual effect of formative assessment on summative assessment under the present curriculum in Bangladesh. The aim of this study was to determine the students' view about the influence of formative assessment on summative assessment.

Methods
This cross sectional type of descriptive study was carried out in 2 public medical colleges (Dhaka Medical College and Sir Salimullah Medical College) and 2 private medical colleges (Bangladesh Medical College and Holy Family Red Crescent Medical College) in Dhaka city from July 2009 to June 2010. A structured questionnaire reflecting the influence of formative assessment on summative assessment was prepared. Pre-testing was done on 20 intern doctors from Shaheed Suhrawardy Medical College, and the questionnaire was accordingly modified before data collection. Permission from the respective college authorities (principals) was formally sought beforehand. A questionnaire for students was prepared so that they could pass their comments on the different aspects of the influence of formative assessment on summative assessment. The questionnaire included different statements about the influence of formative assessment on summative assessment, rated using a 5-point Likert scale. Questionnaires were distributed to all the intern doctors of the Medicine department of the respective institutes. The respondents were three hundred intern doctors working in the department of Medicine of those selected medical colleges who were willing to participate. The purpose of the study was duly explained to the respondents. Information and identity of the respondents were kept confidential. Completed questionnaires and data were collected, edited, processed and analyzed using the SPSS computer package. If necessary, some data were handled manually with the help of a calculator and the Microsoft Excel program.

Results
Among the respondents, 199 (49%) intern doctors were from government medical colleges and 101 (24.9%) intern doctors were from non-government medical colleges. Table I shows that 86 (28.8%) intern doctors strongly agreed and 151 (50.5%) intern doctors agreed that formative assessment greatly influences the results of summative assessment. Seventy six (25.3%) and 149 (49.7%) intern doctors strongly agreed and agreed, respectively, with the statement that the teachers' impression about a student from formative assessment influences the results of summative assessment; on the contrary, 26 (8.7%) intern doctors disagreed in this aspect (Table II). Seventy percent of the intern doctors (strongly agreed 25%, agreed 44.3%) considered that students' fear of the summative exam is reduced by formative assessment (Table III).

Table 3: Distribution of respondents by their opinions about the statement that students' fear for the summative exam is reduced by the formative exam.

Among the respondents, 75 (25%) and 152 (50.6%) intern doctors strongly agreed and agreed, respectively, that teachers' feedback to students in the formative exam is helpful for better performance in the summative exam (Table IV). Among the respondents, 97 (32.4%) strongly agreed that the written test of formative assessment greatly improves the results of summative assessment, whereas 120 (40%) intern doctors agreed and 21 (7%) intern doctors disagreed (Table V).
One hundred nine (36.3%) and 117(39 %) intern doctors strongly agreed that Viva/SOE and OSCE/OSPE of formative assessment greatly improves the results of summative assessment Table 4: Distribution of the respondents by their opinion (Table VI and VII). Respondents made suggestion in about the statement that teachers' feedback to students of different aspects to improve formative assessment to make a formative exam is helpful for better performance in positive impact on summative assessment (Table VIII). summative exam Figure I shows most intern doctors want to add less than 25% marks (Mean ±SD 22.38 ±15.84) to be added from formative assessment to summative assessment. Table 8: Distribution of the respondents by their opinion about the statement that how the formative assessment can be improved to make a positive impact on summative assessment Discussion Conclusion Among 300 intern doctors 86(28.8%) and 151 (50.3%) This study was designed to explore the students' view about the influence of formative assessment on summative strongly agreed and agreed respectively that the results of assessment in undergraduate course. The study revealed that formative assessment greatly influences the results of formative assessment has got significant effect on summative assessment. A study conducted by Olson & Mcsummative assessment in various aspects. Feedback from Donald found that students scored higher in summative 5 formative assessment to both students and teachers plays an examination who took part in formative examinations . important role in teaching-learning processes. The student's Seventy five percent (strongly agreed 25.3% and agreed fear for summative exam is markedly reduced by facing 49.7%) of intern doctors considered that teachers' formative assessment. Teachers' impression about a student impression about a student from formative assessment from formative assessment affects the results of summative influences the result of summative assessment. Majority of exam. Teachers should be trained up and numbers should be intern doctors that is 70 % (strongly agreed 25% and agreed increased. Teachers should be motivated to give more time to 44.3%) were in agreement with the issue that students fear students and proper feedback should be given to the teachers for summative exam is reduced by formative exam. This for adopting adequate measures to supplement teaching. 6 study consistent with the finding of Khan KS Majority of the intern doctors that is 76% (strongly agreed 25% and enhance students' performance . Majority of the research 3. Hastings T. and Madaus, G. Handbook of formative and reports on formative assessment have mentioned that the summative evaluation of student learning, New York: feedback is most important issues in formative assessment, McGraw-Hill, 1971. in other word without feedback formative assessment are 8,9,10 useless. Majority
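The percentages quoted in the Results are simple frequency tabulations of 5-point Likert responses. As a rough illustration of the arithmetic involved, here is a minimal sketch; the counts for the first statement are the two quoted from Table I, while the remaining response categories are placeholders rather than figures from the original table.

```python
# Minimal sketch of a 5-point Likert tabulation, as in Table I of the study.
# Only the "strongly agree" and "agree" counts are quoted in the text; the
# other categories below are placeholders, not data from the original table.
counts = {
    "strongly agree": 86,
    "agree": 151,
    "undecided": 0,         # placeholder
    "disagree": 0,          # placeholder
    "strongly disagree": 0, # placeholder
}
n_respondents = 300  # total intern doctors surveyed

for category, count in counts.items():
    pct = 100 * count / n_respondents
    print(f"{category:>17}: {count:4d} ({pct:.1f}%)")
```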
2018-12-01T05:02:31.202Z
2017-04-14T00:00:00.000
{ "year": 2017, "sha1": "53461f167d3758de42540343571ad1c012247eda", "oa_license": null, "oa_url": "https://www.banglajol.info/index.php/BJME/article/download/32191/21720", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "53461f167d3758de42540343571ad1c012247eda", "s2fieldsofstudy": [ "Education" ], "extfieldsofstudy": [ "Medicine" ] }
256786714
pes2o/s2orc
v3-fos-license
Dutrowite, Na(Fe 2 + 2 . 5 Ti 0 . 5 )Al 6 (Si 6 O 18 )(BO 3 ) 3 (OH) 3 O, a new mineral from the Apuan Alps (Tuscany, Italy): the first member of the tourmaline supergroup with Ti as a species-forming chemical constituent . The new tourmaline supergroup mineral dutrowite, Na(Fe 2 + 2 . 5 Ti 0 . 5 )Al 6 (Si 6 O 18 )(BO 3 ) 3 (OH) 3 O, has been discovered in an outcrop of a Permian metarhyolite near the hamlet of Fornovolasco, Apuan Alps, Tuscany, Italy. It occurs as chemically homogeneous domains Introduction The mineralogy of the Apuan Alps (Tuscany, Italy) has been studied for more than 150 years, leading to the identification of more than 300 different mineral species, among which 44 have here their type locality (Table 1). The first new mineral species to be discovered was the Cu-Pb sulfosalt meneghinite, described by Bechi (1852) from the Pb-Zn-Ag ore deposit exploited at the Bottino mine. Since then, another 24 minerals belonging to this mineral group have been identified following the mineralogical studies done on the small hydrothermal ore deposits occurring in the Apuan Alps. In these same localities, new oxide minerals were discovered, as well as a suite of secondary species related to the alteration of primary sulfides (mainly pyrite -e.g. Biagioni et al., 2020). Conversely, silicate minerals associated with these ore deposits were usually neglected, the only exception being allanite-(La) by Orlandi and Pasero (2006). It is worth noting that several ore deposits embedded in the Paleozoic basement of the Apuan Alps are spatially associated with tourmalinite bodies (e.g. Benvenuti et al., 1989) and tourmalinebearing Permian metarhyolite rocks (Vezzoni et al., 2018). However, few data on tourmaline supergroup minerals from the Apuan Alps are available (Benvenuti et al., 1991;Mauro et al., 2022). In the framework of an ongoing study of the crystal chemistry of non-pegmatitic tourmaline supergroup minerals from Tuscany (Italy), tourmaline samples from Permian metarhyolite exposed in the Fornovolasco area (southern Apuan Alps) were studied. Preliminary chemical analyses indicated the occurrence of a Ti-rich tourmaline supergroup mineral deserving additional investigations. Further studies confirmed the preliminary results, and the mineral dutrowite, as well as its name, were approved by the Commission on New Minerals, Nomenclature and Classification of the International Mineralogical Association (CNMNC-IMA) under voting number 2019-082. The name honours Barbara Lee Dutrow (born 1956), Adolphe G. Gueymard professor at Louisiana State University, for her contributions to the understanding of the chemical variability of tourmaline supergroup minerals, as well as on the petrologic significance of staurolite. Holotype material is deposited in the mineralogical collection of the Museo di Storia Naturale, University of Pisa, Via Roma 79, Calci (Pisa), under catalogue number 19890. This paper describes the new mineral species dutrowite, discussing its crystal chemistry and its possible petrologic significance. Occurrence and physical properties Dutrowite was identified in a sample from the outcrop of the Fornovolasco Metarhyolite Formation close to the Boscaccio locality (44 • 01 53 N, 10 • 22 11 E), near the small hamlet of Fornovolasco, Fabbriche di Vergemoli, Apuan Alps, Tuscany, Italy. 
The Fornovolasco Metarhyolite Formation is represented by lenticular bodies of usually massive porphyritic, tourmaline-bearing metarhyolite embedded in Paleozoic phyllites belonging to the Lower Phyllites Fm., having an early Cambrian depositional age (Paoli et al., 2017). Radiometric U-Pb dating on zircon suggested a Permian crystallization age for the metarhyolite bodies (Vezzoni et al., 2018). The first petrographic description of these rocks was reported by Bonatti (1933), who also described the occurrence of tourmaline supergroup minerals, characterized by a pleochroism with ω = light blue and ε = light yellow to colourless. Tourmaline, intergrown with quartz, occurs as rounded millimetreto-centimetre-sized orbicules that are wrapped and locally cut by the metamorphic foliation as well as small irregular patches dispersed in the groundmass. The tourmaline orbicules represent a primary feature of the Permian magmatic rock, clearly indicating its subvolcanic intrusive origin. Dutrowite was found as chemically homogeneous domains within anhedral to subhedral crystals up to 1 mm in size (Fig. 1); domains of dutrowite can reach up to 0.5 mm across. Colour is brown, with a light-brown streak. Dutrowite is transparent, with a vitreous lustre. It is brittle, with an irregular fracture. Hardness was not measured, but it should be about 7-7.5, by analogy with other members of the tourmaline supergroup. Calculated density, based on the empirical formula and unit-cell parameters from single-crystal Xray diffraction, is 3.203 g cm −3 . In thin section, dutrowite is transparent and pleochroic, with ω = dark brown and ε = light brown. It is uniaxial (-). Refractive indices were not measured; the mean refractive index, calculated according to the Gladstone-Dale relation (Mandarino, 1979(Mandarino, , 1981, is 1.800. It is worth noting that, as the chemical composition of tourmaline supergroup minerals is very complex, it is unrealistic to unambiguously identify a member of this supergroup on the basis of its optical properties, in agreement with other chemically complex group of minerals (e.g. allanite group - Armbruster et al., 2006). Raman spectroscopy Micro-Raman spectra were obtained on the sample of dutrowite shown in Fig. 1 in nearly back-scattered geometry with a Jobin-Yvon Horiba XploRA Plus apparatus, equipped with a motorized x-y stage and an Olympus BX41 microscope with a 100× objective (Dipartimento di Scienze della Terra, Università di Pisa). The 532 nm line of a solid-state laser was used. The minimum lateral and depth resolution was set to a few micrometres. The system was calibrated using the 520.6 cm −1 Raman band of silicon before each experimental session. Spectra were collected through multiple acquisitions with single counting times of 120 s, with the unfiltered laser power (25 mW). Backscattered radiation was analysed with a 1200 g mm −1 grating monochromator. neous material, Mössbauer data were collected on several grains of tourmaline separated from the groundmass and the tourmaline orbicules occurring in the metarhyolite. Consequently, the obtained values can be considered as a grand value of tourmaline species with similar Mg and Fe contents Table 3. Mössbauer parameters for tourmaline (dutrowite and associated oxy-dravite) collected at room temperature. and different Ti and Al contents (see Vezzoni et al., 2018). The Mössbauer spectrum was collected at room temperature in transmission mode using a 57 Co source in Rh matrix with a nominal activity of 50 mCi. 
It was acquired over the velocity range ±4 mm s −1 and was calibrated against α-Fe foil. The spectrum could be adequately fitted with four quadrupole doublets assigned to Fe 2+ and one doublet assigned to Fe 3+ using the program MossA (Prescher et al., 2012) (Table 3). X-ray crystallography Intensity data were collected using a Bruker Apex II diffractometer equipped with a Photon II CCD area detector and graphite-monochromatized MoKα radiation. The detectorto-crystal distance was 50 mm. A total of 983 frames were collected using ω and ϕ scan modes, in 0.5 • slices, with an exposure time of 40 s per frame. The data were corrected for Lorentz and polarization factors and absorption using the software package Apex3 (Bruker AXS Inc., 2016). The crystal structure of dutrowite was refined using Shelxl 2018 (Sheldrick, 2015). Starting atom coordinates were taken from Bosi and Skogby (2013). The statistical tests on the distribution of |E| values and the systematic absences agree with the space group R3m. The following neutral scattering curves, taken from the International Tables for Crystallography (Wilson, 1992), were used: Na vs. Ca at X, Mg vs. Fe at Y , and Al vs. Fe at Z. The T and B sites were modelled, respectively, with Si and B scattering factors and with a fixed occupancy of 1, because refinement with unconstrained occupancies showed no significant deviations from this value. Table 4. 4 Results and discussion Raman spectrum of dutrowite The micro-Raman spectrum of dutrowite is displayed in Fig. 2. The Raman shift of the observed bands is shown, as obtained through fit profile using Fityk (Wojdyr, 2010). In the region between 100 and 1200 cm −1 (Fig. 2a, b), vibrational modes of Y O 6 (in the range ∼ 200-240 cm −1 ), ZO 6 (range 360-375 cm −1 ), SiO 4 rings vibrations (between ∼ 650 and 720 cm −1 ), BO 3 bending and stretching modes (in the range 730-780 cm −1 ), and SiO 4 stretching modes (between 760 and 1120 cm −1 ) occur, in agreement with previous authors (e.g. Gasharova et al., 1997;Watenphul et al., 2016a, b). The stretching of O-H bonds occurs in the region between 3400 and 3800 cm −1 (Fig. 2c). Watenphul et al. (2016a, b) discussed the relations between band positions and crystal chemistry of the studied tourmalines. In particular, the vibrational modes of Y O 6 and ZO 6 polyhedra are sensitive to the Mg contents and Fe contents, as well as to the Fe oxidation state (Watenphul et al., 2016b); similarly, the short-range cation distribution around the O(1) and O(3) sites affects the band positions in the O-H stretching region (Gonzalez-Carreño et al., 1988;Bosi et al., 2015;Watenphul et al., 2016a). Specifically, the band at about 3565 cm −1 is related to O(3), whereas the less-intense band at about 3636 cm −1 is related to O(1). The latter is consistent with the reduced content of O(1) (OH) 0.41 (see below). Chemical formula and crystallography of dutrowite Fractional atom coordinates and displacement parameters of dutrowite are reported in Table 5, and selected bond distances are given in Table 6 2.5 R 4+ 0.5 and Z = R 3+ 6 , within the alkali group. Oxygen is dominant at W , thus indicating that dutrowite is an oxy-member of the tourmaline supergroup. The distribution of Fe 2+ and Fe 3+ is in keeping with the results of Mössbauer spectroscopy (Fig. 3), which are consistent with Fe 2+ at the Y position and Fe 3+ in the Z position (e.g. Andreozzi et al., 2008), although a unique Fe site distribution cannot be achieved due to the unresolved absorption doublets. 
The empirical structural formula of dutrowite was optimized using the method of Wright et al. (2000), distributing cations among the Y , Z, and T sites as follows: Brese and O'Keeffe (1991), are reported in Table 8. There is an excellent match between observed and refined site scattering, i.e. 142.63 vs. 142.20 electrons per formula unit, respectively. Moreover, the W (OH) content (0.41 apfu) is in good agreement with that calculated using the equation reported by Bosi (2013) (Table 8). Owing to the small size of the available material, no Xray powder diffraction pattern of dutrowite was collected. The calculated X-ray powder diffraction pattern, based on the structural model given in Table 5, is reported in Table 9. Titanium in tourmaline supergroup minerals and crystal chemistry of dutrowite Dutrowite is the first tourmaline supergroup mineral having Ti (Z = 22) as a species-forming chemical constituent, although the occurrence of this element in these cyclosilicates as a minor component is well-known. Titanium-bearing tourmalines are known from several geological environments, and some authors have considered this element a geochemical proxy for unveiling the genesis of the studied assemblages; for instance, Ribeiro da Costa et al. (2021) suggested that Ti can be used for discriminating between evolved and less evolved granitic rocks, the latter being usually richer in Ti than the former. Henry and Dutrow (1996), reviewing the contents of minor chemical constituents in tourmaline supergroup minerals, reported up to 4.07 wt % TiO 2 ; this content was given by Lottermoser and Plimer (1987) for dravite occurring within contact rocks in a breccia diatreme at Umberatana, South Australia. Grice and Ercit (1993) reported data for two tourmalines having 2.87 wt % and 2.19 wt % TiO 2 ; these values correspond to 0.40 and 0.29 Ti apfu, respectively. Still higher TiO 2 contents, 4.70 wt %, were later measured in a bosiitelike tourmaline by Flégr et al. (2016); this weight percent Titanium was reported by Dutrow and Henry (2022) in tourmaline from an (anhydrite-gypsum)-bearing meta-evaporite sampled in the Arignac Gypsum Mine, France; one spot analysis, having Ti > 0.25 apfu and Y Ti > Y Al, is consistent with hypothetical "magnesio-dutrowite", with empirical ordered formula X (Na Žáček et al. (2000) from Bolivia. Another possible occurrence of "magnesio-dutrowite" is reported by Bačík et al. (2022) in tourmalinites from Zlatá Idka, Slovakia; these authors found Ti contents up to 0.38 apfu, and they identified two distinct compositional trends, i.e. schorl-dutrowite and dravite-"magnesiodutrowite". Several substitution mechanisms have been proposed to explain the occurrence of Ti in the crystal structure of tourmaline supergroup minerals. Some authors have argued the Ti substitution at the T site (e.g. Povondra, 1981;Grice and Ercit, 1993), but this statement is not supported by optical spectra (Rossman and Mattson, 1986). In addition, the crystal structure refinement of dutrowite does not Figure 3. Mössbauer spectrum of tourmaline supergroup minerals (dutrowite and associated oxy-dravite). Fitted absorption doubled assigned to Fe 2+ and Fe 3+ are indicated in green and red colours, respectively. Diamonds denote measured spectrum, and the black curve represents summed fitted spectra. agree with the occurrence of tetrahedrally coordinated Ti. 
Consequently, Ti occurs in octahedral coordination, but its assignment to the Y or Z site is ambiguous even from a theoretical viewpoint (see Appendix 1 in Bosi and Andreozzi, 2013). Gadas et al. (2019) reported the possible substitution Ca 2+ + 2Ti 4+ + 2Mg 2+ + O 2− = Na + + 4Al 3+ + (OH) − for tourmaline from Manjaka, whereas in Ca-poor tourmaline fromŘečice Ti seems to be incorporated through the substitution Ti 4+ Nabelek (2021) proposed the substitution mechanism 2Na + + (Fe,Mg) 2+ + 2(OH) − = Ca 2+ + + Ti 4+ + O 2− , also observing a negative correlation between Ti and Al, for tourmalines from tourmalinites in South Dakota, USA. At Fornovolasco, where dutrowite was discovered, Vezzoni et al. (2018) pointed out the variability of the crystal chemistry of tourmaline by a wide range of Fe/(Fe + Mg) ratios, as well as by different degrees of Al saturation. Moreover, these authors suggested a positive correlation between the content of Ti and (Mg + Fe) and a negative relation with the Al content, which results in the heterovalent substitution 2Al 3+ = Ti 4+ + (Fe,Mg) 2+ . In addition to chemical data reported in Table 1, collected on the area where the grain of dutrowite used for the single-crystal X-ray diffraction study was extracted, other spot analyses were done on the other domains having brown and blue absorption colours in transmitted light microscopy ( Fig. 1), respectively. All spot analysis data are deposited in the Supplement. Brown domains are usually enriched in Ti, in agreement with previous results obtained, for instance by da Fonseca-Zang et al. (2008). The blue domain, whose average chemical composition is given in Table 2, has the following empirical ordered formula: X (Na 0.66 Ca 0.14 K 0.01 ) 0.81 Y ( . This is another member of the alkali group that, in agreement with Bosi et al. (2019), corresponds to the end-member formula Na(Mg 2 Al)Al 6 (Si 6 O 18 )(BO 3 ) 3 (OH) 3 O, which is oxy-dravite. The blue domain of this tourmaline has a Mg/(Mg + Fe 2+ ) atomic ratio with an average value of 0.54 (2), ranging between 0.47 and 0.60. Consequently, some domains have a composition with Fe 2+ slightly dominant over Mg and thus probably corresponding to oxy-schorl. Figure 4 shows the X-ray maps for selected elements in dutrowite and associated oxy-dravite. Whereas Na is homogeneously distributed between these two phases, Ca is depleted in the core of the crystal; the X-ray map collected using CaKα allows the identification of several anhedral to subhedral grains of "apatite". Iron and Mg are rather constant in the crystal of tourmaline, although a slight enrichment seems to occur in the domain enriched in Ti; on the contrary, Al is depleted in this latter area. X-ray maps of Fe and Mg also allow identification of "biotite", partially replaced by Fe-rich clinochlore. "Biotite" is also relatively enriched in Ti (2.70 wt % TiO 2 , according to Vezzoni et al., 2018); Table 9. X-ray powder diffraction data (d in Å) for dutrowite. Intensity and d hkl were calculated using the software PowderCell 2.3 (Kraus and Nolze, 1996) on the basis of the structural model given in Table 5. Only the reflections with I calc > 5 are given. The seven strongest reflections are given in bold. minor Ti-rich oxides (rutile, ilmenite) can also be observed. This qualitative description is confirmed by a quantitative estimation based on electron microprobe analysis. Chemical variability of dutrowite and associated oxydravite is shown in Fig. 5. 
In agreement with , considering Al tot = 6 apfu as a threshold for Al saturation, dutrowite is usually Al-undersaturated, i.e. Al < 6 apfu, whereas associated oxy-dravite is richer in Al, with Al up to 6.8 apfu. The low Al content of Ti-bearing tourmalines was also reported by Scribner et al. (2018). Dutrowite has a slightly lower Na/(Na + Ca) ratio than oxydravite; the lower value is not related to higher Ca content in the former, but to lower Na content in oxy-dravite, that has similar Ca contents but a higher amount of va-cancy at the X site. Magnesium and Fe 2+ are positively correlated in oxy-dravite and dutrowite, with Fe and Mg enriched in the latter. This behaviour is related to the negative correlation between the (Mg + Fe 2+ ) and Al content: dutrowite, having the highest (Mg + Fe 2+ ) content, is Alundersaturated. It is worth noting that (Mg + Fe 2+ ) has a positive correlation with Ti 4+ , suggesting the substitution mechanism 2Al 3+ = Ti 4+ + (Mg,Fe) 2+ . This substitution is further strengthened by the negative correlation between the contents of Ti and R 3+ cations occurring in dutrowite, mainly represented by Al, with minor content of Fe 3+ and V 3+ ; this agrees, for instance, with the results of Novák et al. (2011). The possible involvement of Ca 2+ in the substitution favouring the incorporation of Ti in the crystal structure of tourmaline, suggested by some authors (e.g. Dini et al., 2008;Gadas et al., 2019), is not supported by the current data; indeed, chemical data are quite scattered, and Ca enrichment in dutrowite with respect to oxy-dravite (∼ 0.06 apfu) is low. Consequently, chemical data collected on type material of dutrowite are consistent with previous studies on both natural (e.g. Žáček et al., 2000;Konzett et al., 2012) and synthetic (Vereshchagin et al., 2022) tourmalines, indicating the substitution 2Al 3+ = Ti 4+ + (Mg/Fe) 2+ as the most probable substitution in Ti-rich members of the tourmaline supergroup. Some previous data also indicated a decrease in the OH content and an increase in the oxy-nature of tourmalines (e.g. Gadas et al., 2019), coupled with the Ti incorporation via the substitution Al 3+ + (OH) − = Ti 4+ + O 2− . Such a mechanism may also occur in dutrowite, possibly coupled with the other involving Mg and Fe, e.g. 3Al 3+ + (OH) − = 2Ti 4+ + (Fe,Mg) 2+ + O 2− . These substitution mechanisms should favour an increase in the unit-cell parameters, indeed, in agreement with the ionic radii of Shannon (1976), r Mg > r Ti > r Al . Dottorini (2019) gave the crystallographic data of the tourmaline supergroup minerals occurring in the Fornovolasco Metarhyolite Formation and reported SEM-EDS data. According to these data, tourmaline can be classified as oxy- dravite. Refined unit-cell parameters are a = 15.9544 (2), c = 7.1892(1) Å, and V = 1584.80(5) Å 3 . In agreement with the observed chemical variations, dutrowite has larger a, c, and V values, with a = +0.2 %, c = +0.4 %, and V = +0.8 %. Genesis of dutrowite Dutrowite occurs in the groundmass of the porphyritic metarhyolite belonging to the Fornovolasco Metarhyolite Formation. Actually, it was identified in an unusual sample, characterized by the presence of "biotite". This is a rare mineralogical feature of the Fornovolasco Metarhyolite, since the relics of "biotite" phenocrysts are usually completely replaced by Fe-rich clinochlore, quartz, and rutile (Vezzoni et al., 2018). 
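Returning briefly to the unit-cell comparison quoted above (oxy-dravite: a = 15.9544 Å, c = 7.1892 Å, V = 1584.80 Å3, with dutrowite larger by roughly +0.2 %, +0.4 % and +0.8 %), the figures can be checked with the hexagonal-setting cell-volume relation V = (√3/2)·a²·c. The sketch below back-calculates an approximate dutrowite cell from the quoted percentage differences, so the derived values are illustrative placeholders rather than the refined single-crystal parameters.

```python
import math

def hex_cell_volume(a: float, c: float) -> float:
    """Unit-cell volume (A^3) of a hexagonal-setting cell: V = (sqrt(3)/2) * a^2 * c."""
    return math.sqrt(3.0) / 2.0 * a * a * c

# Oxy-dravite cell parameters from Dottorini (2019), as quoted in the text.
a_drv, c_drv = 15.9544, 7.1892
v_drv = hex_cell_volume(a_drv, c_drv)   # ~1584.8 A^3, matching the reported value

# Approximate dutrowite cell, back-calculated from the quoted +0.2 % (a) and +0.4 % (c);
# placeholders for illustration, not the refined values.
a_dut, c_dut = a_drv * 1.002, c_drv * 1.004
v_dut = hex_cell_volume(a_dut, c_dut)

print(f"V(oxy-dravite) = {v_drv:7.1f} A^3")
print(f"V(dutrowite)  ~ {v_dut:7.1f} A^3  (+{100.0 * (v_dut / v_drv - 1.0):.1f} %)")
```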
Textural data indicating the relations of dutrowite with oxy-dravite are limited to a few observations, and the relations between these two phases are not clear. For instance, some zoned crystals, cut orthogonal to the c axis, show a Ti-enriched core (even if with Ti content < 0.25 apfu, i.e. not enough to classify the sample as dutrowite) and an oxy-dravitic rim (see Fig. 7 in Vezzoni et al., 2018). Although this texture might suggest that Ti-richer domains crystallized earlier, possibly at higher T, other tourmaline grains in the same specimen do not show such a regular zoning. For instance, the crystal in Fig. 1 is distinctly zoned, with a sector corresponding to oxy-dravite and the other in contact with "biotite" and partially replacing it. However, as usually occurs in relatively high-T late-magmatic/hydrothermal tourmalines in granitoid rocks, the mineral shows very complex, patchy zoning with no clear core-to-rim zoning. As discussed above, "biotite" is a host for Ti in metarhyolite and may have played some role in the crystallization of dutrowite. Some authors (e.g. Dini et al., 2008;Gadas et al., 2019;Nabelek, 2021) stressed the role of this mica as a source of Ti for Ti-enriched tourmaline. Boron metasomatism and crystallization of tourmaline as a replacement of earlier magmatic silicates is well-known during the late-magmatic and early-hydrothermal evolution of several granitic intrusions (e.g. Woodford et al., 2001;Dini et al., 2008). These processes affected also the Fornovolasco Metarhyolite and could be related to the pre-Alpine tourmalinization and sericitization processes described by Vezzoni et al. (2020). Biotite from metarhyolite has Mg/(Mg + Fe) = 0.39, lower than that of both dutrowite (0.48) and oxydravite (0.54, ranging between 0.47 and 0.60). Consequently, some Fe could have been lost during the genesis of tourmaline, or it could have been fixed in pyrite, frequently occurring in metarhyolite. Preferential mobilization of Fe during the interaction between biotite and late-magmatic hydrothermal fluids is indicated by metasomatic tourmaline with a Mg content higher than that of the replaced biotite (Dini et al., 2008) as well as by hydrothermal experiments (Orlando et al., 2017). Moreover, the rather constant Ca content [Ca/(Na + Ca) ∼ 0.20] in tourmaline could be sourced by plagioclase. Unfortunately, magmatic plagioclase is completely replaced by albite in the Fornovolasco Metarhyolite; however, a composition close to An 20 is probable for the pristine plagioclase in this rock. The tourmalinization of magmatic silicates could also release SiO 2 , favouring the formation of late-stage quartz veins in the metarhyolite and surrounding rocks. Vereshchagin et al. (2022), on the basis of synthesis experiments, suggested that low-P conditions are favourable to Ti enrichment in tourmalines. As regards temperature conditions, a comparison with the Ti enrichment in other silicates can be proposed. For instance, "biotite" is enriched in Ti at low-P (as for tourmaline) and high-T (Henry et al., 2005). If so, dutrowite could be the result of the high-T/low-P replacement of "biotite" in the Fornovolasco Metarhyolite during the late-magmatic/hydrothermal evolution of this Permian intrusive rock. The oxy-nature of both dutrowite and oxy-dravite does not necessarily imply an oxidizing geological environment, which would not be in accord with the local precipitation of pyrite and other sulfides in metarhyolite. As observed in some oxy-tourmalines (e.g. 
Bačík et al., 2013), the deprotonization reaction could be simply due to local chargebalance requirements related to high-charged cations (e.g. Al 3+ , Ti 4+ ), being the mineralogical expression of specific rock geochemistry. Conclusion Dutrowite is the first tourmaline supergroup mineral with Ti as a species-forming chemical constituent. Its finding and description improve the knowledge of the crystal chemistry of this important group of cyclosilicates and give some insights into the enrichment of Ti in these minerals, suggesting substitution mechanisms and the role of "biotite" as a source of Ti during the late-stage evolution of the magmatic-hydrothermal system associated with the emplacement of the Fornovolasco Metarhyolite Formation. Available data also encourage a more accurate petrological study on this recently described geological formation , focusing on the tourmaline supergroup minerals, which show wide chemical variability. Their characterization may help in deciphering the evolution of this sector of the northern Apennines during both the Permian magmatic-hydrothermal history and the subsequent Alpine tectono-metamorphic events. Data availability. The Crystallographic Information File data of dutrowite are available in the Supplement. Additional chemical data of dutrowite and associated oxy-dravite are also made available. Author contributions. CB collected preliminary data. CB and DM carried out single-crystal X-ray diffraction and micro-Raman spectroscopy; CB and FB examined crystal-chemical data. FZ collected electron microprobe data. HS collected Mössbauer data. AD contributed to the geological background. CB, DM, and FB wrote the paper, with inputs from the other authors. Competing interests. At least one of the (co-)authors is a member of the editorial board of European Journal of Mineralogy. The peerreview process was guided by an independent editor, and the authors also have no other competing interests to declare. Disclaimer. Publisher's note: Copernicus Publications remains neutral with regard to jurisdictional claims in published maps and institutional affiliations. Special issue statement. This article is part of the special issue "New minerals: EJM support". It is not associated with a conference.
2023-02-12T16:13:02.097Z
2023-02-10T00:00:00.000
{ "year": 2023, "sha1": "4f61dff287917189b76f354a1347d5cc15f313ff", "oa_license": "CCBY", "oa_url": "https://ejm.copernicus.org/articles/35/81/2023/ejm-35-81-2023.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0ce120276b5a7b85c76e32ad4bfb96a307926a31", "s2fieldsofstudy": [ "Geology" ], "extfieldsofstudy": [] }
80357682
pes2o/s2orc
v3-fos-license
Association between Interleukin-18 -137G/C Gene Polymorphisms and Tuberculosis Risk in Chinese Population: A Meta-Analysis

This study aimed to investigate the relationship between the IL-18 -137G/C polymorphism and TB risk by meta-analysis. The literature on the IL-18 -137G/C polymorphism and the risk of tuberculosis was retrieved from four English databases and four Chinese databases. Data were extracted from the studies by two independent reviewers. Statistical analysis was performed using RevMan 5.3 and Stata 11.0 software. A total of 5 studies with 558 TB patients and 720 controls were included in this meta-analysis. The results showed that the -137G/C polymorphism in the IL-18 gene was associated with TB risk in China in the comparisons of the G allele vs. C allele (OR=1.49, 95% CI=1.21-1.84, P=0.0002) and GG vs. GC+CC (P=0.0003). It was also significant in the subgroup analyses of Chinese adults (G allele vs. C allele: OR=1.32, 95% CI=1.03-1.70, P=0.003; GG vs. GC+CC: OR=1.39, 95% CI=1.01-1.91, P=0.04) and Chinese children (G allele vs. C allele: OR=1.91, 95% CI=1.31-2.78, P=0.0008; GG vs. GC+CC: OR=2.02, 95% CI=1.33-3.07, P=0.0010). This study provides evidence that the G allele of the IL-18 -137G/C polymorphism is closely associated with TB risk in China.

Introduction
Tuberculosis (TB) is a chronic infectious disease caused by Mycobacterium tuberculosis (MTB), and is a serious public health problem that causes 1.6 million deaths every year over the world, especially in Asia and Africa [1-4]. However, only 10% of people infected with Mycobacterium tuberculosis develop clinical disease, which indicates that other factors contribute to the pathogenesis of tuberculosis, including the host immune response and gene-environment interactions [4,5]. The genetic influence on TB infection has been examined in a series of studies mainly concerning the associations between gene polymorphisms and TB risk [6]. Together they demonstrated that host genetic factors are connected with TB susceptibility. The interleukin-18 (IL-18) gene, which is located at chromosome 11q22.2-22.3 with six exons and five introns, encodes the interferon (IFN)-γ-inducing factor belonging to the IL-1 family, secreted by many immune cells, such as monocytes, dendritic cells, activated macrophages, and Kupffer cells [7-9]. IL-18 has been recognized as playing an essential role in immune resistance against tuberculosis, and possesses a -137G/C polymorphism in the promoter region which has shown a critical influence on tuberculosis [10-12]. The -137 region of the IL-18 promoter can regulate transcription. Several case-control studies have been carried out to confirm whether the IL-18 -137G/C polymorphism is correlated with susceptibility to tuberculosis, and the results showed some differences between districts, populations, and nations [13-21]. In order to reach a more dependable conclusion, the relevant case-control data were extracted and a meta-analysis was performed.

Study selection process and characteristics
Twenty-nine potentially relevant studies were identified from our publication search, and four case-control articles about the IL-18 -137G/C polymorphism met the inclusion criteria, of which four studies derived from China. The remaining four papers were integrated, and a total of 558 cases and 720 controls were included in the final meta-analysis. The detailed characteristics of the eligible literature are presented in Tables 1 and 2.
Quantitative data synthesis
The meta-analysis results demonstrated that the pooled statistical analysis of all models showed an association between the IL-18 -137G/C polymorphism and TB susceptibility in Chinese people (G allele vs. C allele: OR=1.49, 95% CI=1.21-1.84, P=0.0002).

Discussion
TB is a chronic infectious disease causing high morbidity and mortality in Asia and internationally. Many researchers have confirmed that a series of cytokines play critical roles in the development of tuberculosis [13,14]. Furthermore, several studies have revealed genetic marks of the IL-18 gene promoter region correlating with TB risk and susceptibility, although the outcome of TB is also regulated by the environment and mycobacteria [15]. IL-18 is an essential regulatory as well as pro-inflammatory cytokine; it provides effective resistance to intracellular infection when the production of IL-18 is increased by the pathogen. Several polymorphisms associated with TB risk have been identified in the promoter region, such as -137G/C, +105A/C, -607A/C, and -372C/G [16,17]. However, little statistical analysis has been applied to resolve the inconclusive association between the IL-18 -137G/C polymorphism and TB susceptibility. Meta-analysis is a powerful method to provide further evidence for reconciling the controversial points. This meta-analysis was based on 5 publications containing 6 studies with 723 cases and 893 controls. The statistically significant results shown in the comparisons of G versus C, GG versus CC, GG+GC versus CC, and GG versus CC+GC suggest that the G allele of the IL-18 -137G/C polymorphism was significantly associated with an increased risk of TB in the general Asian population according to the overall statistics. In the subgroup analysis by nationality, significant associations were discovered in Chinese adults and children but not in Indians; more studies are needed to confirm these results. All in all, we found a significant association between the IL-18 -137G/C polymorphism and TB risk in the Chinese population under the allele model, homozygous model, dominant model, and recessive model. Some potential limitations should be considered when interpreting our results. First, only published literature was included in this meta-analysis; we did not seek or obtain unpublished or ongoing studies. It is also possible that some unpublished studies, or papers written or published in other languages, which might meet the inclusion criteria were missed, although our statistic of publication bias was not significant. Second, no original data about gene-gene and gene-environment interactions were obtained from these studies; however, gene-gene and gene-environment interactions could also be factors contributing to the risk of TB. Third, in the subgroup analyses there were not enough relevant studies found in other countries; most of the case-control studies were from China, and the results might be applicable only to the Chinese population or to Asia. Fourth, it could not be confirmed whether the control individuals of the included studies had latent TB infection which could develop into active TB in the future. In conclusion, this meta-analysis indicates that the G allele of the IL-18 -137G/C polymorphism might be associated with an increased risk of TB infection in Asia, especially in the Chinese population. More studies with large sample sizes and multiple centers are needed to validate our preliminary findings.
Materials and Methods
We searched the literature using four English databases (PubMed, Embase, Science Direct, Ovid) and four Chinese databases (CNKI, WangFang, CBM, FMJS) for studies involving the MeSH terms "Tuberculosis, Pulmonary" and "Polymorphism, Single Nucleotide" or "SNP" combined with "Interleukin*18" or "IL-18". We placed no restriction on language, time period, sample size, or publication type.

Criteria for considering studies for this review
All included studies had to comply with the following criteria: (1) the publication concentrates on the IL-18 -137G/C promoter polymorphism and TB risk; (2) the diagnosis in the case-control study meets the international criteria; (3) total sample size ≥ 100 (cases + controls); (4) the genotype counts are available for evaluating the odds ratio (OR) with 95% confidence interval (CI) and P value. The exclusion criteria were (a) non-case-control studies, (b) meta-analyses, (c) animal research, and (d) studies without genotype frequencies.

Data extraction
Data were extracted independently by two reviewers according to the inclusion and exclusion standards, who then reached a consensus on the items mentioned above. If there was any disagreement regarding the criteria, the group came to an agreement through discussion or by involving a third party. The necessary data were collected from each publication, including the first author's name, year of publication, country, ethnicity, genotyping method, total sample size, case-control details, and the number of IL-18 -137G/C genotypes and alleles for cases and controls.

Data collection and analysis
The pooled odds ratio (OR) with its 95% confidence interval (CI) was used to appraise the strength of the relationship between the IL-18 polymorphism and TB risk. The significance of the pooled OR was determined by the Z-test, in which P<0.05 was considered significant. The pooled ORs were calculated for the allele model (G allele versus C allele), the dominant model (GG+GC versus CC), and the recessive model (GG versus GC+CC). The studies' heterogeneity was assessed by the chi-square-based Q-test and the inconsistency index I². The heterogeneity assumption was regarded as statistically significant if P<0.10. When P ≥ 0.10, the pooled statistical analysis was calculated by the fixed-effect model; otherwise, a random-effects model was used. To evaluate age-level effects in the Chinese population, subgroup analyses were performed by age group. The Review Manager 5.0 program provided by the Cochrane Library and Stata (Version 11.0, Stata Corporation) were used to perform all the statistical analyses.
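The pooling procedure described above (per-study odds ratios combined under a fixed-effect model, a Z-test on the pooled estimate, and a Q-test / I² heterogeneity check) can be sketched as follows. This is a minimal inverse-variance illustration with made-up 2×2 allele counts; the counts are not data from the included studies, and RevMan's default Mantel-Haenszel weighting would differ slightly from this simple scheme.

```python
import math
from statistics import NormalDist

# Placeholder per-study allele counts (case_G, case_C, control_G, control_C).
# Illustrative numbers only, not counts from the included studies.
studies = [
    (150, 50, 160, 90),
    (120, 60, 140, 100),
    ( 90, 40, 100, 70),
]

# Fixed-effect (inverse-variance) pooling on the log-OR scale.
log_ors, weights = [], []
for a, b, c, d in studies:
    log_or = math.log((a * d) / (b * c))
    var = 1 / a + 1 / b + 1 / c + 1 / d        # Woolf variance of the log-OR
    log_ors.append(log_or)
    weights.append(1.0 / var)

pooled = sum(w * y for w, y in zip(weights, log_ors)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))
z = pooled / se
p_value = 2.0 * (1.0 - NormalDist().cdf(abs(z)))   # two-sided Z-test

# Cochran's Q and the inconsistency index I^2 for heterogeneity.
q = sum(w * (y - pooled) ** 2 for w, y in zip(weights, log_ors))
df = len(studies) - 1
i_squared = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0

lo, hi = math.exp(pooled - 1.96 * se), math.exp(pooled + 1.96 * se)
print(f"pooled OR = {math.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f}), "
      f"Z = {z:.2f}, P = {p_value:.4f}, Q = {q:.2f}, I^2 = {i_squared:.1f}%")
```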
2019-03-17T13:07:14.373Z
2017-08-09T00:00:00.000
{ "year": 2017, "sha1": "4522911823b40fa1aa000e7e11276226bfea46f0", "oa_license": "CCBY", "oa_url": "https://doi.org/10.4172/2155-9899.1000514", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "fca16b6cf9e2624f76b5c936bfd1f00dd52c1d67", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
234615651
pes2o/s2orc
v3-fos-license
USE OF THE LIFE HISTORY APPROACH WITHIN THE PETROBRAS MEMORY PROGRAM EL USO DE LA HISTORIA DE LA VIDA EN EL PROGRAMA MEMORIA PETROBRAS Recebido: 28.05.2020 Aceito: 06.07.2020 Publicado: 31.08.2020. RESUMO: O trabalho analisa o tratamento da história e da memória na Petrobras, observado pelo emprego da abordagem da história de vida como metodologia permanente no âmbito do Programa Memória Petrobras, tendo os depoimentos dos trabalhadores como fontes históricas fundamentais. Para tanto, realiza-se uma pesquisa de natureza exploratória, mas com abordagem crítica e reflexiva, apoiada em pesquisa bibliográfica e documental focadas, especialmente, em pesquisas e documentos oficiais sobre a companhia. Também foram realizadas entrevistas abertas (MINAYO, 1993) com historiadores do Programa Memória Petrobras, além de análise de seu site web a partir da perspectiva francesa da enunciação editorial (SOUCHIER, 1998), bem como da formação do ethos (AMOSSY, 2010) organizacional a partir de tais recursos histórico-narrativos. ABSTRACT: This paper analyzes the treatment of history and memory at Petrobras examined through the use of the life history approach based on workers' testimonials as primary historical sources as a permanent methodology within the Petrobras Memory Program. To this end, we carry out an exploratory research informed by a critical and reflexive approach, founded on bibliographic and documentary research specifically focused on research and official company documents. Open-ended interviews (MINAYO, 1993) with historians of the Petrobras Memory Program were conducted, as well as an analysis of its website according to the French perspective of editorial enunciation (SOUCHIER, 1998) and the creation of an organizational ethos (AMOSSY, 2010) derived from historicalnarrative resources. Introduction The focus of this paper does not lie in the analysis of one or various life histories, as would be the case with a study premised on biographies, but on the mobilization of life history as part of a communications strategy at Petrobras 1 through the collection of workers' life histories in order to reconstruct the trajectory of the organization based on the multiple and parceled narratives from the workers' perspectives. This is part of a larger doctoral research that studied the use of narratives in legitimation processes within organizational history, that looks at Petrobras as a case study and at the "Petrobras Workers Memory Project", (subsequently named the "Petrobras Memory Program"), as the corpus for analysis. The various accounts and testimonials that constitute the "Petrobras Memory"namely, workers' narratives from diverse areas, managers, etc. -were available until the completion of the thesis in 2015, on a website specially dedicated to the Program. But, as of 2016, at the height of the Lava Jato [car wash] corruption investigation, the site was redacted and all videos, audio files, and testimonials were removed. The alteration to the program's website was explained as "a reconditioning phase" 2 (PETROBRAS, 2020), thereby making it impossible at present to access and analyze the testimonials. First, it is worthwhile to foreground in this context the acceptance and relevance of the life history approach within the scope of Communication Sciences. 
This interdisciplinary field has been borrowing from historiographical and sociological sources since its very beginnings and has greatly benefited from the use of the Life History in the study of and with subjects, while recognizing the richness of their experiences and stories. For Dhunpath (2000), we would be facing a new paradigm focused on recovering life paths narratively, or in other words, through narrative processes. Dhunpath suggests naming this outlook a narrative paradigm -a "narradigm" -focused on experiences and life histories. Second, and as main focus of this article, we analyze the treatment of history and memory at Petrobras, articulated through the use of an oral history approach as a permanent methodology within the purview of the Petrobras Memory Program, with employees' testimonials and life histories as historical sources. 1 Petrobras is the semi-public Brazilian multinational petroleum corporation founded in 1953. Trans. note. 2 The message currently in place since mid-2016 states: "The Program's website is undergoing a reconditioning phase. Until then, you can contact us via email: memoriapetrobras@petrobras.com.br ". The page can be accessed at: www.memoria.petrobras.com.br § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 3 To this end, we carry out an exploratory research informed by a critical and reflective approach towards the phenomenon being investigated. The methodology is founded on bibliographical and documentary research (LAKATOS and MARCONI, 2003), specifically focused on research and official documents regarding Petrobras, available on the Memory Program webpage until 2016 under the heading "Articles and Publications". Open-ended interviews were also conducted (MINAYO, 1993) The life history approach, widely used in anthropological studies (MORIN, 1980), is now recognized and employed by different disciplines within the human and social sciences. As Silva indicates (2002), its socio-historical origins has occasioned its extensive acceptance and use by researchers from these two areas. According to Silva, there is conflation and frequent overlap between theories and concepts such as "life history", "oral history" and "biography". This is mainly due to the characteristic interdisciplinarity of studies of biographical accounts, which gives rise to two aspects relative to its object of study and center of interest, i.e. the life history of individuals, but with different approaches: the historical biography, linked to the field of history, and the biographical method, linked to sociology. Some distinctions can already be noted between historical biography and the biographical method of Sociology: while the former focuses on the individual, the latter tries to reveal the group; while the former does not favor the source, the latter privileges the life story and the autobiography. Still, one can discern another feature: the historical biography is treated as a genre, whereas the sociological prefers to label itself method (SILVA, 2002, pp.29-30). § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 4 As Coulon (1992) points out, life history is a technique that enables the emergence and the understanding of an actor's inner life. 
It originates in the early years of the 20th century within the context of the sociological studies of the first Chicago School, where the study by Thomas and Znaniecki (1918) is prototypal in its use of the life history as a historical document representative of the life of an immigrant Polish peasant. As such, the studies conducted in Chicago were pioneering in the use of the biographical research approach. According to Bessin, the biographical perspective -or life trajectories, as he calls them -is based on a procedural logic, or more precisely, on a temporal dynamic where "it is necessary to reject the different temporalities used in a life path, and to highlight specifically the fold between the individual's temporalities and the historical time within which they are inscribed" 3 (BESSIN, 2009, p. 13). In the biographical method, the life history plays a central role: it focuses on the individual and the reconstruction of his life path from narratives and autobiographies, always from the comprehension of the subject under study. From this perspective, it is important to understand the groups' history and social functioning in accordance with the collected narratives, which means that "it is no longer the life story of an individual, or that an autobiography would suffice as research, but that an adequate number of life stories would manage to provide an explanatory account of the group" (SILVA, 2002, p. 28). In Minayo's (1993) conception, there are two methodological modalities to life history as a data collection technique: the complete life history, which contemplates the "total" account of the individual, and the so-called topical life history, which focuses only on key aspects or moments of the given experiential narrative which we are looking to foreground. In the research presented here, the collection of individual testimonials -of life histories -is useful in composing the trajectory of Petrobras's workers and, consequently, of the company's history "through its workforce" (PETROBRAS, 2015). In this context, we take recourse to the topical life history method (MINAYO, 1993), being that in the case of the Memory Program developed by Petrobras, the center of the investigation does not reside in the personal and individual narrative, pivoting solely around one biography, § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 5 but is composed by the intertwining of various life stories to reconstruct a broader history: that of the organization. We wish to note that the use of life stories is at the heart of a communications strategy that seeks to legitimize the company by rendering visible its historicalorganizational narratives (SANTOS, 2016). The Petrobras case study demonstrates the validity of the biographical method, exemplified by the collection of statements from workers, as a relevant and appropriate resource for communications, both internal and external, and which can still be considered by corporate management as context for decision-making. Within the purview of academia, some researchers in Communication Sciences have addressed these issues and presented productive results from the use of the biographical perspective, life history, testimonials and oral history; viz: within communications research in general (PERAZZO, 2006;MAIA, 2006;MARTINEZ, 2015), in journalism specifically (MARTINEZ, 2016, RIBEIRO, 2015, and within the scope of Malheiros, when they propose in 1979 the creation of a sector dedicated to surveying and preserving the company's history. 
The project was undertaken by these two women who began to collect assets in the form of data, records, information, and files to create the company's archive; the initiative was interrupted in 1980, but resumed in 1982. In The participation and attribution of responsibility for the historical recovery project under the aegis of the CPDOC-FGV is therefore decisive in terms of the activities that would unfold from then on at the oil company and, specially, for the adoption of the life history and oral history approaches characteristic of the study center at the Foundation as indicated by Alberti (1998; and Motta (1995) In the words of Verena Alberti: The close relationship between the process of building up the collection of oral sources and CPDOC history goes beyond thematic identity. From the start, the creation of the interview collection was coupled to the documentation and research activities already developed by the Center. On the one hand, the need for the creation of an oral history program came from the work with personal archives; on the other, the new potential of the oral history methodology diversified and enriched the research. (1998, p. 2) The Petrobras Workers Memory Project comprised the inventorying of documents from various Petrobras units that could constitute a representative archive of § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 7 the company's history, as well as the collection of testimonials from the representatives of the several labor unions associated with it. In total, 260 interviews were conducted, which resulted in the publishing of the Almanaque Memória dos Trabalhadores Petrobras 8 department. Even if it resided there for almost seven years with a considerable collection of testimonials and documents and products generated, it has not yet managed to establish itself as an area of reference in relation to memory within the Company. (FIGUEIREDO, 2009, p. 84) The mission of the Petrobras Workers' Memory Program is to recount the company's history based on reports from people who were once associated, are currently associated, or have some connection with the organization (FIGUEIREDO, 2009, p. 46). Until 2016, the Memory Program's website displayed the following announcement: "We are an Institutional Communications program at Petrobras aimed at preserving, incorporating, and disseminating the company's history, mainly from the perspective of its workers and partners" (PETROBRAS, 2015). The Program has its own budget and annual plan through which research directions and activities to be developed are established. It also has a collection of testimonials, documents, newspapers, and photos gathered from individuals interviewed during the Petrobras Workers' Memory Project. In this sense, one of the principal undertakings that guarantees the ongoing documentation of the company and the practice of oral history is the collection of worker testimonials which can be used in the future as a source for the understanding and recovery of the company's history. Miriam Collares Figueiredo (2014) affirms that the Petrobras Memory Program sought to highlight the importance of the collection and its historical documents in order to recover the company's values, identity, and, most importantly, to show that the workers are a valuable asset in this process and constitutive of the company's trajectory. 
In addition to the influence of the CPDOC already mentioned, the technique used by Petrobras to recover the company's memory through workers' testimonials is The identity of the worker is very much linked to Brazilian identity, which, according to Retroz (2014) and Collares (2014), is discernible in the testimonials because during their communications the employees would mention that they were there not only to help the company, but also to help the country. As such, a connection can be said to exist between the organizational ethos and the national ethos. According to Sérgio Retroz (2014), it is possible to observe in some statements that the workers understood their work at Petrobras as a contribution to national development, as a "mission to do something for national sovereignty" because "building Petrobras is helping to make the country independent", making it the owner of its own wealth. These ideals were linked to the founding of the company itself, which sought financial independence, upheld the presence of oil on Brazilian soil and defended its exploitation solely by Brazilians (and no longer by foreigners). Regarding the methodology used in the collection of testimonials, the company excels in life histories, real stories, recovery and valoration of the life path of the employee within the company's. From the viewpoint of the historians at Petrobras, the Memory Program was very well received by workers, particularly because of the documentary format of recording the history of employees and former employees through testimonials. Employees believed themselves to be part of the construction of the history of Petrobras. By virtue of being called upon to provide testimonials for the Memory Program, and seeing that their contribution had become part of the company's official history, they felt as if they were effectively being recognized as participants. Petrobras has a library where important company documents and files are kept, however this material is considered confidential and as a result difficult to access. Thus, the surveys, interviews, and assets available through the Memory Program (through § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 10 websites and books) are rendered legitimate references relative to the company's history and accessible to the general public. Access to the Memory Program's website was promulgated through the company's home page through a hyperlinked dialogue box that served as gateway and invited the reader to become familiar with the history of the company from the outlook of its "workforce". By mediating access to the Petrobras Memory Program through its website, the published story narratives become institutionalized. To analyze the editorial content and discourse of the Memory Program's page, we have resorted to an analysis of editorial enunciation (SOUCHIER, 1998). We examine the graphic, typographic, authorial, and editorial choices involved in the design of the Petrobras Memory website as well as the organizational ethos it sought to construct by mobilizing life histories so as to build a meta-narrative of Petrobras's organizational history. Our editorial analysis is informed by the editorial enunciation approach (SOUCHIER, 1998) that analyzes the formatting, structuring and support strategies towards the visualization and reading of a text. 
As an initial presupposition, it assumes the existence of a material support upon which the text becomes available to the reader and proffers for analysis the modality of the writing, as well as the graphic and editorial § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 11 choices operative in the production of the text's image (SOUCHIER, 1998). The analysis of these elements within the website chosen as corpus for this research makes it possible to study how Petrobras organizes and arranges the collected life histories graphically, textually, and visually in order to build a logical and coherent narrative that serves its interests in communicating the different aspects and moments of its organizational history in order to legitimize its existence and its social vocation to the community. The notion of ethos is a term from Greek rhetoric that refers to the character of the speaker as conferring credibility to his speech. The analysis of discursive ethos (AMOSSY, 2010) allows us to foreground the self-image a speaker projects throughout their enunciation. Nevertheless, even if the subject tries to efface his textual self, he always leaves traces -through intonation, expression, or choices -that denote his identity, as well as by his rhetorical recourse to authority, scientific arguments, testimonials, etc. For this reason, by using developmentalist arguments that highlight the value added to society, organizations seek to connect with the most varied audiences by appealing to their receptiveness (a priori) towards an institution that helps promote national growth. In terms of the corporate websites studied, this would be a company's mode of presentation (as an "I" / subject that projects its image through its discourse) even if diluted through impersonal terms or by donning the guise of an indefinite subject (as in the "on" in French) as constitutive of a mediatized ethos (SODRÉ, 2002). As such, when Sodré's (2002) concept of a mediatized ethos is transported to the organizational context, we can understand it as the resultant of interactions between organizations and society permeated by communication from the most varied of media. So that within the scope of the present work, we can comprehend the media and digital communication mechanisms (websites) as means through which organizations crystallize an identity and a self-image that can be virtualized and mediatized through the World Wide Web. The editorial enunciation analysis shows that the Petrobras Memory Project website adopts a simple look that is in-sync with that of the parent oil company, all the while exalting the green, yellow, and blue colors alluding to the Brazilian flag. Apart from the name of the project revealing the identity of the institution, the website does not call attention to the company's brand. There is only a small Petrobras logo at the top right of the page that discretely institutionalizes the site. The logo, as a navigation button underlying a representative icon of the corporation -what Davallon and Jeanneret (2004) call a signe-passeur -links to the company's main page: it is a sign that mediates a user's transit to the Petrobras institutional website. Likewise, in the upper left, in a reduced font, the name "Petrobras" presents a similar logic. Hardly noticeable at first, it is necessary to mouseover the word to realize that it is a hyperlink linked to the company's main website. Such editorial strategies express sobriety and discretion as an allusion to the main intent of the site. 
The white background, with no margin or upper or lateral borders, can resemble a blank sheet of paper, ready and available for creation. The Petrobras Memorial website is presented visually as an informational site, a kind of repository, a virtual archive of the history of Petrobras. The purpose of the website or of the project in question is not foregrounded on the site's homepage. The site must be explored until one finds within the "Who we are" tab a brief presentation 7 of the website as well as a history of the Petrobras Memory Program that links to the company's Institutional Communications division. The content is organized in two ways: according to sections or item headings, that create a horizontally-divided strip serving as a thematic menu providing access to individual items through hyperlinks; or according to blocks, subdivisions of the main page, where the fore-mentioned item headings are briefly described and identified by their respective titles. One can readily glean that, essentially, the content of the site is organized around the timeline. It creates a graphic display of a linear temporality, which serves, in the case of Petrobras, to separate or divide the company's history into periods (decades). Each decade is illustrated or narrated through workers' testimonials relative to their experiences during that period. Communicating organizational history and memory through digital platforms (websites, social networks, etc.) allows the adaptation of content, information and organizational narrative to different communication media. In this context, we need to examine not only the narratives put forward and the organizational strategies that legitimize the history, but also the underlying materiality of the production and transmission of messages, and how these affect this process. The historical narrative strategies identified through the analysis of the site are summarized in Table 1. In other words, if on the Petrobras Memory page the company diversifies its historical legitimation strategies -i.e. the forms and resources used on its site -in order to grant visibility to its organizational history and memory, in terms of the sources used towards the recovery of the past, it relies in great part on the life histories of the subjects that integrate it (or integrated it) as constitutive elements. Gardère (2003) classifies this as an "experience collecting" technique when referring to the recovery of the workers' memory within a company. For her, the objective of this practice is to recover and record a person's knowledge, which in turn can be transferred to others or serve as knowledge, or as a source of knowledge, for the organization. The approach adopted by this author focuses on valorizing workers' § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 16 intellectual capital and how it can serve organizational learning, as well as on the creation of a knowledge management database within organizations. However, the memorial record based on life stories, or on so-called oral histories, is not always intended to transmit and preserve a specific savoir-faire [know-how]. There are situations where testimonials focus on the narrative of the lives of individuals, on their victories and mishaps, sometimes directly related to the company's history, but oftentimes only related to the private and external experiences of the organization. 
The interest, in this case, lies in the recognition of the individual and his path, in order to make him part of a larger story -that of the organization, and, for that matter, of the country as well. Resources resembling the personal testimonial are widely used and promulgated through the Petrobras Memory Program website. Thus, personal and organizational narratives are mixed: when telling a (worker's) life history, the organization's memory is also evoked. In other words, the narrative of the organizational history, as conveyed by the website, is composed of micro-narratives of personal histories from individual participants in Petrobras's trajectory. It is through memory recovery that these fragments of history can be recuperated, evincing the polyphony, the multiplicity of voices and visions that tell one same story, but from different positions of enunciation. Conversely, the life narrative of these worker-narrators is also affected by the organizational history, so that an account would be incomplete if the place and role occupied by the narrating subject (worker) is not sited within the organization's temporal trajectory. These are choices made by the organization regarding its organizational history narrative and the preservation of its memory. By applying the methodology of oral history to the collection of testimonials that constitutes one of the main reference sources of the Memory Program, Petrobras opts to reconstruct its history through the recovery of the memory of its workers, that is, it uses memory as substrate for the recomposition of its history. Based on the study of the strategies used by the Petrobras Memory Program towards the conservation and reporting of the oil company's trajectory, we come to identify the adoption of a specific narrative modality that we have named "testimonial narrative of organizational history" (SANTOS, 2016). We define testimonial narrative of organizational history as the reconstruction and narration of a company's history based on the accounts of those who are part of it § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 17 (or were part of it). Its main source are life histories, interviews, testimonials or accounts from organizational actors. With respect to the enunciators of the accounts, it is possible to discern two variants to the testimonial narrative: a) Autobiographical narrative of the organizational history -narrative conceived from the testimonial of an employee or administator of the company, usually a manager or (ex)president; b) Collective or plural narrative of the organizational history: an explicitly and deliberately polyphonic account, composed of the testimonials of various individuals. In the testimonial narrative, the characters are placed at the center of the story so that their perceptions and outlooks are the main element in the reconstitution of the organizational trajectory. This narrative form is predominant on the Petrobras Memory Program website where the testimonials of workers and former workers of the company constitute the main assets towards the recomposition of its history and organizational memory. This strategy can also be found, albeit sporadically, in the book Petrobras 50 anos [Petrobras -50 Years], through the use and promulgation of testimonials from different organizational actors. 
The adoption of the testimonial form can be part of an integrative strategy, one that strengthens the employees' sense of belonging, worth and recognition of their participation in the creation of corporate history. To that effect, testimonial narratives serve mainly the internal communicational interests. From the analyses undertaken, we can see how Bourdieu's (1986) postulate on the reconstruction of life history by using individual narratives, at times, resembles an organization's communicational approach when it decides to tell its "life story". The author points out that in the dynamics of recomposing individual histories a selectivity is involved relative to the choice of what are considered the most significant facts. By the same token, selected events are examined in relation to one another and logically organized in order to produce a coherent narrative. The autobiographical narrative is always inspired, at least in part, by the concern for imparting sense, of being rational, of revealing a logic both retrospective and prospective, of consistency and constancy, by establishing intelligible connections, such as those between the effect and its efficient or final cause, between successive states, thus constituted as phases of a necessary development 8 . (BOURDIEU, 1986, p. 69). § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 18 Now, when narrating its history, the organization also conducts a selection of facts and events considered relevant to the restoration of the organizational trajectory, which are normally ordered and systematized according to chronologies or timelines, beyond the various modes of promulgation that aim to legitimize the existence of the organization. Such an approach symbolizes for Bourdieu (1986) At the same time, one must acknowledge the selectivity intrinsic to the construction of historical-organizational narratives. It is not possible to recount everything, to totally restore and systematize a history, because, when narrativizing a history, choices are made based on the available information and records, favoring one concern or another, according to a specific narrative perspective (that of the CEO/president, employees, etc.). Likewise, we take into account the judgments and interpretations that can influence the production of organizational memory when we use testimonials and life histories as information sources, such as, for example, in the author's analysis and § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 19 interpretation of the data based on the available facts and elements towards the reconstitution of the organizational history. Accordingly, one can speak of a process of narrative construction of organizational history (SANTOS, 2018). Within the scope of communications, we question the influence of corporate publications and narratives (oral and written), such as life stories that reconstruct the trajectory of organizations, modulating a re-signification process through the circulation, reading, appropriation and reproduction of such "histories" (notably within the digital realm). § e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 20 They are a form of narrative-organizational communication that demonstrates how a narrative approach is inserted into the organizational quotidian by configuring the communicational. 
And so, in this sense, narratives should not be examined only pragmatically or instrumentally, as products of organizational communications (internal narratives, journalistic or advertising narratives) that are used to achieve certain ends (to raise awareness, convince, sell, etc.). However, they also need to be understood: a) as a process through which organizations communicate, transforming information into understandable accounts; b) as a means by which experience and organizational memory are organized coherently and systematically; c) as a mode of knowing, underlying organizational learning, through the lessons that emerge from its narration. As shown in this study, the role of narratives in communication and, specifically, of life stories, is evident in terms of reconstituting organizational history. These serve as sources for the updating and reframing of organizational memory and, by extension, of social memory -as product of the circulation of these narratives in society, that is, of the trivialization (JEANNERET, 2008) 9 of organizational narratives. It is in this way that public memory, which circulates in media and in political and governmental settings, reflects a dominant ideology and influences individual memory. But the implications of social narratives, whether individual or institutional, go further: they also affect the forming and writing of history. This is because the reports, the narratives that are sources of memory, serve to feed back and, often, question some versions of the so-called "official" history. The case study of Petrobras and its Memory Program is unique within the current context of the deactivation and reconfiguration of the website, where all the records and documents from years of interviews with workers were located. These are life stories, testimonials from persons who have been strategically silenced, even if they had been part of the organization and contributed to the creation of the polyphony of voices and versions of a given history of Petrobras. As a suggestion toward future research, the accounts of life history within organizations, conceived through different voices or organizational actors, could be studied. The experience of the Petrobras Memory Program, from which testimonials were collected and different products developed -such as books, websites and exhibitions -§ e-ISSN nº 2447-4266 Vol. 6, n. 5 (Special Edition 2), August 2020 21 constitutes an interesting object of research. How are these "plural", collective and polyphonic narratives produced? Do they represent the authentic version of history, or can they be considered alternative, unofficial, or even controversial versions of the organizational past? Do the writings and subsequent communications of the collective narratives of organizational history go through some sort of selection, censorship or negotiation with the organization that they portray? What are the consequences of erasing or silencing the plurality of organizational voices, traces and records, as was the case in the Memory Program? The perspective adopted in the research, whether in terms of methodological choices or in relation to the selection of the analytical corpus, is a response to a particular scientific line of questioning. It does not exhaust the possibilities of research on the production of narratives in organizations, nor, more specifically, on the use of life histories as elements in the constitution of organizational narratives. 
We hope that it contributes to the debate on the biographical approach (life history, oral history and biographical history) as methods within Communication Sciences pertinent to the communicational and organizational studies described herein.

The paper analyzes the treatment of history and memory at Petrobras, observed through the use of the life history approach as a permanent methodology in the Petrobras Memory Program, where workers' statements are fundamental historical sources. To this end, an exploratory investigation is carried out, but with a critical and reflective approach, supported by bibliographic and documentary research centered especially on research and official documents about the company. Open interviews (MINAYO, 1993) were also conducted with historians of the Petrobras Memory Program, along with an analysis of its website from the French perspective of editorial enunciation (SOUCHIER, 1998) and of the formation of the organizational ethos (AMOSSY, 2010) based on these historical-narrative resources. KEYWORDS: History; Communication; Petrobras; Memory.
2021-04-19T07:01:34.176Z
2020-08-31T00:00:00.000
{ "year": 2020, "sha1": "bec635e153576e082be2217924420b9a50b51aa9", "oa_license": "CCBYNC", "oa_url": "https://sistemas.uft.edu.br/periodicos/index.php/observatorio/article/download/11001/18094", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "64c7859c281d9a8070806131697802b9a846f3d2", "s2fieldsofstudy": [ "Sociology" ], "extfieldsofstudy": [ "History" ] }
255945015
pes2o/s2orc
v3-fos-license
Selective loss of glucocerebrosidase activity in sporadic Parkinson’s disease and dementia with Lewy bodies Lysosomal dysfunction is thought to be a prominent feature in the pathogenetic events leading to Parkinson’s disease (PD). This view is supported by the evidence that mutations in GBA gene, coding the lysosomal hydrolase β-glucocerebrosidase (GCase), are a common genetic risk factor for PD. Recently, GCase activity has been shown to be decreased in substantia nigra and in cerebrospinal fluid of patients diagnosed with PD or dementia with Lewy Bodies (DLB). Here we measured the activity of GCase and other endo-lysosomal enzymes in different brain regions (frontal cortex, caudate, hippocampus, substantia nigra, cerebellum) from PD (n = 26), DLB (n = 16) and age-matched control (n = 13) subjects, screened for GBA mutations. The relative changes in GCase gene expression in substantia nigra were also quantified by real-time PCR. The role of potential confounders (age, sex and post-mortem delay) was also determined. Substantia nigra showed a high activity level for almost all the lysosomal enzymes assessed. GCase activity was significantly decreased in the caudate (−23%) and substantia nigra (−12%) of the PD group; the same trend was observed in DLB. In both groups, a decrease in GCase mRNA was documented in substantia nigra. No other lysosomal hydrolase defects were determined. The high level of lysosomal enzymes activity observed in substantia nigra, together with the selective reduction of GCase in PD and DLB patients, further support the link between lysosomal dysfunction and PD pathogenesis, favoring the possible role of GCase as biomarker of synucleinopathy. Mapping the lysosomal enzyme activities across different brain areas can further contribute to the understanding of the role of lysosomal derangement in PD and other synucleinopathies. Introduction The identification of the underlying causes of Parkinson's disease (PD) is a major challenge, since in most instances, cases present with sporadic disease. The discovery of the genetic determinants of PD would be a major step forward in describing the underlying etiology of the disorder and also in identifying possible therapeutic strategies [1]. Mutations in the GBA gene, encoding for the lysosomal hydrolase β-glucocerebrosidase (GCase, EC = 3.2.1.45) cause Gaucher disease (GD) a rare lysosomal storage disorder, and represent a common risk factor for PD [2][3][4]. Patients carrying GBA loss of function mutations have a five-fold increased risk of developing PD with respect to noncarriers [3], and may show a similar phenotype to idiopathic PD. The GCase substrate, glucosylceramide, stabilizes soluble oligomeric α-syn species [5]. As a consequence, GCase deficiency may contribute to α-syn aggregation and accumulation. GCase deficiency has been identified in post mortem brain from patients diagnosed with PD either with or without GBA mutations, the decrease being most evident in the substantia nigra, cerebellum and cortex of PD patients [6,7]. This non-selective loss of GCase activity irrespective of mutation status may either represent a global defect in lysosomal enzymes in PD, or may simply represent the presence of other pathologies such as neuronal loss leading to deficiency. The activity of GCase and other lysosomal enzymes can also be reliably measured in cerebrospinal fluid (CSF) [8]. 
The reduction of GCase activity has been found also in the cerebrospinal fluid (CSF) of PD patients when compared to neurological controls [9][10][11] and in fibroblasts from patients with GD or PD carrying GBA mutations [12]. Interestingly, other lysosomal enzymes involved in different degradation pathways, were also found to be altered in PD patients diagnosed with PD. An increased β-hexosaminidase activity, a lysosomal enzyme able to hydrolyze the GM2 ganglioside, has been found in CSF and fibroblasts of PD patients [11,12]. Another study showed increased activity of cathepsin E and β-galactosidase in CSF of de-novo PD patients while αfucosidase activity was significantly decreased [13]. Together, these findings suggest that there may be a more widespread dysfunction of lysosomal enzymes in PD and related disorders. Here we determined the specific activity of several endo-lysosomal enzymes, namely β-hexosaminidase, αfucosidase, β-mannosidase, α-mannosidase, β-galactosidase, β-glucocerebrosidase and cathepsin E, in different brain areas of PD patients, age-matched controls and, for some brain areas, of patients diagnosed with dementia with Lewy bodies (DLB). Our aims were to map the activity of these lysosomal enzymes in different human brain regions and also to evaluate the occurrence of lysosomal dysfunction not only in PD but also in another synucleinopathy, DLB. This would provide the necessary information to determine if lysosomal dysfunction is widespread in PD, or if there is a relatively selective loss of GCase. Findings We utilized a large series of clinically and neuropathologically verified cases of PD and DLB and age matched control cases. In Table 1 the demographic characteristics of the patients included in the study are reported. Postmortem brain tissue was analyzed for lysosomal enzyme activity, GBA genotype and GBA mRNA expression. GBA sequencing was performed to verify the presence of pathological mutations on GBA gene. The results showed that our cohort was composed mainly of patients and controls without GBA mutations. Only two out of 26 PD patients were heterozygous for pathogenic GBA mutations. One patient carried the L444P mutation while the other one carried the IVS2 + 1G > A mutation. The remaining PD and DLB patients and control subjects were wild type for the GBA gene (CTRL =9, PD = 23, DLB = 15). GBA genotype was not available in 4 controls, 1 PD and 1 DLB. Using hierarchical clustering analysis, we compared the lysosomal enzyme activities across different brain areas ( Figure 1a). The substantia nigra and the hippocampus clustered together in the analysis, globally showing a high level of almost all the enzyme activities. GCase was the exception, showing low levels in substantia nigra but only in the pathological groups, while its levels were high in frontal cortex. The caudate clustered independently from the other areas and showed the lowest levels of the measured activities. Notably, the differences among the brain areas were greater than the differences between PD, DLB patients and control subjects. The comparison of the mean levels of lysosomal enzyme activities in the three groups is reported in Additional file 1: Table S1. We first evaluated the effect of post mortem delay as a confounder in the different comparisons. No significant association was found between postmortem delay and lysosomal enzymes activity when it was considered as a potential confounder (data not shown). 
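To make the clustering comparison above concrete, the following sketch shows how enzyme activity profiles per brain region could be clustered in Python. It is only an illustration: the activity values are placeholders rather than the study's measurements, and the z-scoring and Ward linkage are assumptions, since the paper does not state which standardization or linkage method was used.

```python
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster

# Placeholder mean specific activities (arbitrary units); NOT the study's data.
activities = pd.DataFrame(
    {
        "beta_hexosaminidase": [105, 60, 110, 70, 80],
        "alpha_fucosidase": [12, 7, 13, 9, 10],
        "beta_mannosidase": [18, 10, 19, 12, 14],
        "alpha_mannosidase": [22, 13, 23, 15, 16],
        "beta_galactosidase": [30, 17, 31, 20, 22],
        "GCase": [25, 14, 20, 28, 21],
        "cathepsin_E": [40, 24, 42, 27, 30],
    },
    index=["substantia_nigra", "caudate", "hippocampus", "frontal_cortex", "cerebellum"],
)

# Standardize each enzyme so regions are compared on profile shape, not absolute scale.
z = (activities - activities.mean()) / activities.std()

# Hierarchical clustering of brain regions and a two-cluster cut for inspection.
Z = linkage(z.values, method="ward")
for region, cluster_id in zip(z.index, fcluster(Z, t=2, criterion="maxclust")):
    print(f"{region}: cluster {cluster_id}")
```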
GCase specific activity in substantia nigra and caudate was significantly lower in PD patients when compared to control subjects (p < 0.01 and p < 0.05 respectively, Figure 1b). In DLB patients, GCase activity was also reduced, but the difference was not significant (Additional file 1: Table S1). GCase activity did not change significantly in the other brain regions in any of the diagnostic groups (Figure 1b), suggesting a selective loss in the nigrostriatal system. The two patients carrying GBA mutations showed detectable GCase activity, with values within the range of the PD and DLB patients without mutations. Exclusion of the two patients from the analysis did not change the significance of the final results (data not shown). To evaluate if the decrease in GCase activity was due to a change in gene expression we performed qPCR on GBA mRNA in the substantia nigra. mRNA levels of GBA gene, normalized against GAPDH, were significantly lower in PD and DLB compared to controls (p < 0.05). The same result was found with respect to the subunit A of the succinate dehydrogenase complex (SDHA) (Figure 1c). Other enzymatic activities were changed in different brain areas. In particular, α-fucosidase activity in frontal cortex was significantly lower in PD compared to controls (p < 0.01), while its activity in DLB patients did not change significantly (Additional file 1: Figure S1). Interestingly, α-mannosidase activity was increased in frontal cortex only in DLB patients (Additional file 1: Figure S1). We also performed correlation analysis between the enzyme activities and Alzheimer's neurofibrillary pathology expressed as Braak stage. No significant associations were found for any of the enzyme activities tested (data not shown). Discussion GCase is recognized as an important risk factor for PD and DLB. A recent multicenter study found a more significant association of GBA mutations with DLB compared to PD with dementia [14]. In this study we mapped the activity of a series of endo-lysosomal enzymes in post mortem brain tissue of PD and DLB patients, and compared these with elderly pathologically normal control brains from patients. We confirm that GCase specific activity is reduced in PD brains, especially in substantia nigra and that this reduction was also present in DLB patients although to a lesser extent. Interestingly, the expression of GCase at mRNA level was also reduced in the substantia nigra of both synucleinopathies. Our results confirm previous reports on the deregulation of GCase in PD [6,7]. In PD patients carrying GBA mutations, GCase activity has been found to be reduced in all the brain areas analyzed with exception of the cortex [6]. Recently, in sporadic PD, GCase activity and protein expression were found to be decreased in the anterior cingulate and occipital cortex, where accumulation of α-synuclein occurs during the early phases of the disease [7]. GCase activity decrease in these two regions was associated with a reduced lysosomal chaperone-mediated autophagy and decreased ceramide levels [7]. It is worth noting that in the study of Gegg et al. [6] mRNA levels of GCase were unchanged, while in Murphy et al. [7] GCase expression was reduced. In our study, measuring GCase mRNA expression in substantia nigra of PD and DLB patients, we also identified a significant reduction of GCase mRNA in both PD and DLB. This decrease was paralleled by a significant reduction of GCase activity (PD > DLB). 
Previous work on DLB brains also showed a trend toward reduced GCase activity in patients without GBA mutations, while in GBA mutation carriers the activity of the enzyme was drastically reduced with respect to control subjects [15]. In this study, the low number of patients carrying heterozygous GBA mutations prevented us from finding any significant relationship between GBA genotype and GCase activity. Nevertheless, we cannot exclude that heterozygous GBA mutations might contribute to the reduction of GCase activity in PD. With respect to the other lysosomal enzymes considered in this study, we found a significant reduction of α-fucosidase activity in the frontal cortex of PD patients compared to control subjects, while in DLB the reduction was not significant. α-Fucosidase is responsible for the hydrolysis of α-1,6-fucose residues from glycoproteins and has previously been found to be reduced in CSF from de-novo PD patients [13]. The possible involvement of α-fucosidase in PD pathogenesis is currently unknown; however, brain protein fucosylation is regarded as an important way of regulating different processes such as neurite outgrowth and synaptic plasticity [16]. Defective degradation of fucose residues in proteins may contribute to a deregulation of these processes and/or accumulation of undegraded substrates in lysosomes. In a study by McNeil and colleagues, other lysosomal enzymes were altered in fibroblasts from PD patients with GBA mutations [12]. Cathepsin D and β-hexosaminidase showed increased activity in PD patients' fibroblasts, possibly supporting a global deregulation of lysosome functioning. In the present study, the activity of β-hexosaminidase did not show significant change in any of the brain regions assessed in the three experimental groups. Overall, however, we identified minimal changes in other lysosomal enzymes in PD or DLB, suggesting that the defect in GCase may be selective in PD and DLB. Globally, we found that the activity of the lysosomal enzymes tested was higher in substantia nigra and hippocampus with respect to the other brain areas. This result is particularly interesting because it highlights the important role of the lysosomal system in the metabolism of the substantia nigra. It is possible that, in this region, lysosomal metabolism is more active due to the elevated oxidative stress taking place in this brain area, with the production of oxidized proteins and lipids [17,18]. A failure of the lysosomal system may trigger the accumulation of specific substrates of lysosomal hydrolases and proteases, including α-synuclein, as shown in animal and cell models of GCase inhibition [19,20]. In conclusion, we have mapped the lysosomal enzyme activity in two types of synucleinopathies. The GCase deficiency found in both diseases, and specifically present in PD at two different regulation levels (i.e. mRNA and activity), supports the hypothesis of a common mechanism of GCase reduction in the pathogenesis of synucleinopathies, with a strong link to lysosome dysfunction.

Methods
All aspects of this study were approved through the local Ethics Committee of the University of Perugia and the Newcastle upon Tyne Research Ethics Committee. Our study utilized 55 post-mortem brains: 26 diagnosed with PD, 16 with DLB and 13 controls without any neurodegenerative disease (age- and gender-matched).
Brain tissue was obtained from the Newcastle Brain Tissue Resource (NBTR) at Newcastle University, UK, from individuals with a clinical diagnosis of Parkinson's disease or dementia with Lewy bodies, following informed consent from donors and with assent from the next of kin. Tissue from individuals without a history of cognitive impairment or movement disorder served as controls. Neuropathological assessment was performed according to standardized neuropathological diagnostic procedures [21] and confirmed the presence of Parkinson's disease pathology and pathological changes of dementia with Lewy bodies, or an absence of significant neuropathology in control cases. For each patient, samples of snap-frozen hippocampus, substantia nigra, putamen, frontal cortex and caudate were used. GBA genotyping was carried out according to Kurzawa-Akanbi and colleagues [15].

Enzymatic assays
Brain tissues were thawed and lysed on ice in 50 mM phosphate buffer, pH 7.0, containing 150 mM NaCl, at 5% (w/v), using an Ultra-Turrax. NP-40 detergent (0.1%) was added and the homogenized tissues were sonicated for 30 seconds on ice at 20 W. The samples were kept on ice for 30 minutes and centrifuged for 10 minutes at 16,000 × g, and the supernatants were used for the assays. The activities of the lysosomal enzymes β-hexosaminidase, α-fucosidase, β-mannosidase, α-mannosidase, β-galactosidase, β-glucocerebrosidase and cathepsin E were determined as previously described [8,11]. All of the activities were measured in triplicate. One unit (U) of enzyme activity was defined as the amount of enzyme that hydrolyses 1 pmol of substrate/min at 37°C.

mRNA levels of GBA gene
Total RNA was extracted from substantia nigra tissue using standard methods, and the relative changes in GBA gene expression were determined by qPCR via relative quantification using GAPDH and SDHA as housekeeping genes (Applied Biosystems 7300, TaqMan gene expression assays; probe numbers GBA: Hs00986836_g1, SDHA: Hs00188166_m1, GAPDH: Hs02758991_g1). The 2^−ΔCT method [22] was used to calculate relative changes in gene expression determined from the real-time quantitative PCR analysis.

Data analysis
The R software [23] was used for statistical analyses. Continuous variables are presented as means (± standard deviations) and medians (ranges). Categorical variables are presented as counts and percentages. Hierarchical clustering analysis was used to compare the lysosomal enzyme activities across different brain areas. Differences between patients and controls were assessed using ANOVA, both for comparing lysosomal enzyme activities and for GBA mRNA expression. Dunnett's post-hoc test was used for multiple comparisons. ANCOVA was performed to test the confounding effect of postmortem delay on the relationships between lysosomal enzyme activities or GBA mRNA expression and disease. A p-value ≤ 0.05 was considered significant in all the analyses.

Additional file
Additional file 1: Figure S1. Boxplot of lysosomal enzyme activities significantly different among the experimental groups. Table S1. Lysosomal enzyme activities for brain areas and groups.
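The 2^−ΔCT quantification described above reduces to a short calculation once cycle-threshold (CT) values are available: ΔCT = CT(target) − CT(housekeeping) and relative expression = 2^−ΔCT. The sketch below shows this computation; the CT values are hypothetical and serve only to make the formula concrete, and normalization against SDHA would follow the same pattern.

```python
import numpy as np

def relative_expression(ct_target, ct_reference):
    """Relative quantification by the 2^-deltaCT method: deltaCT = CT(target) - CT(reference)."""
    delta_ct = np.asarray(ct_target, dtype=float) - np.asarray(ct_reference, dtype=float)
    return 2.0 ** (-delta_ct)

# Hypothetical CT values for illustration only (not the study's measurements).
gba_ct = [27.1, 27.8, 28.4]    # GBA in three substantia nigra samples
gapdh_ct = [19.5, 19.7, 19.6]  # GAPDH housekeeping gene in the same samples

print(relative_expression(gba_ct, gapdh_ct))
```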
2023-01-18T15:32:52.337Z
2015-03-27T00:00:00.000
{ "year": 2015, "sha1": "a525f9f974709c84a0224d70fea64f9d32325113", "oa_license": "CCBY", "oa_url": "https://molecularneurodegeneration.biomedcentral.com/counter/pdf/10.1186/s13024-015-0010-2", "oa_status": "GOLD", "pdf_src": "SpringerNature", "pdf_hash": "a525f9f974709c84a0224d70fea64f9d32325113", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
14208535
pes2o/s2orc
v3-fos-license
Systematic Variation in Reviewer Practice According to Country and Gender in the Field of Ecology and Evolution The characteristics of referees and the potential subsequent effects on the peer-review process are an important consideration for science since the integrity of the system depends on the appropriate evaluation of merit. In 2006, we conducted an online survey of 1334 ecologists and evolutionary biologists pertaining to the review process. Respondents were from Europe, North America and other regions of the world, with the majority from English first language countries. Women comprised a third of all respondents, consistent with their representation in the scientific academic community. Among respondents we found no correlation between the time typically taken over a review and the reported average rejection rate. On average, Europeans took longer over reviewing a manuscript than North Americans, and females took longer than males, but reviewed fewer manuscripts. Males recommended rejection of manuscripts more frequently than females, regardless of region. Hence, editors and potential authors should consider alternative sets of criteria, to what exists now, when selecting a panel of referees to potentially balance different tendencies by gender or region. Introduction The peer-review process is an evaluation tool used to assess the merit of scientific work [1,2].Referees, experts in a particular field, are crucial to the success of the review system by providing impartial judgment on emerging research of their peers and colleagues [3][4][5][6].They contribute many hours to the process, typically anonymously and with no remuneration [6][7][8].Referees have a powerful influence on decisions made relating to publication [6,9] and specific attributes associated with these individuals may relate to subjective manuscript evaluations. 
A number of studies from various scientific disciplines have focused on the integrity of referees in assessing manuscripts and whether evaluations are based solely on the intrinsic quality of the manuscript or on factors unrelated to the research [1,3,10-12]. For instance, gender [10,13], status [13] and an author's country of affiliation [11,12] have been demonstrated to affect the referee recommendation to publish or reject a given manuscript [1,3,14]. This has been described as reviewer bias, whereby the characteristics of an author are potentially used by referees and can influence manuscript acceptance [15]. However, few studies in ecology and evolution have looked explicitly at referee characteristics and how they relate to the review process. In disciplines such as medicine, it has been demonstrated that younger referees and those with more experience tend to score manuscripts lower [13]. Additionally, males have been shown to take longer to review, are more likely to 'accept as is', or are more likely to outright 'reject' relative to females in medicine [16]. Here, the importance of gender and scientific age of referee responses within ecology and evolution is similarly tested. Using an online survey, we assessed the importance of characteristics of ecological referees and their reported handling of manuscripts. We expected that ecology is similar to medicine in that gender, status, and region are important determinants of referee performance.

A total of 17 questions relating to the publication process were included. For the purposes of this paper, however, only those questions relevant to referee behaviour were tested and reported here (Text S1, Dataset S1, Dataset Notes S1). The questions were a combination of open-ended, multiple-choice, and Likert-scale questions. A group of high impact factor journals publishing ecology and evolutionary biology articles was listed. These were selected based on their 2004 impact factor. Nature, Science, PNAS and Current Biology were also included, as they are top biology journals even though not listed by ISI as ecology. We excluded those journals focusing on reviews (e.g. TREE, Annual Review of Ecology and Evolutionary Systematics) and specialty journals (e.g. Molecular Ecology, Global Change Biology). Despite only recent circulation, we included PLoS Biology, which began in 2003 but was already receiving high citations. The final list comprised Nature, Science, Current Biology, PNAS, Ecological Monographs, American Naturalist, Ecology, Ecology Letters, Evolution and PLoS Biology. The survey was distributed to the Ecological Society of America (ECOLOG) and EvolDir mailing lists, as well as promoted at international ecological and evolutionary conferences and posted on the working group website. These distribution lists were selected as a representative means to target ecologists and evolutionary biologists. The extent to which individual respondents subscribe to both list-serves was unknown; hence the minimum (assuming there was complete overlap in subscribers to both list-serves) and maximum (where there was no subscription overlap) population sizes ranged from 6000 to 12,200. We received 1334 responses to the questionnaire, representing between 11% and 22% of the total population solicited.
Design and Implementation of Survey
As an estimate for experience, a potentially important covariate, the number of years involved in the publication process was estimated by subtracting the survey date from the reported year of first publication [17]. All countries were categorized into the following regions: North America, Europe, or 'Other'. Official language designation was determined according to the country of the host institution and characterized as English first language (EFL) or non-English first language (NEFL) using the United Nations Educational, Scientific and Cultural Organization classifications [18].

Statistical Analyses
Chi-square analyses were used to describe the distribution of respondents according to their gender, region of affiliation, whether or not they published in or reviewed for the 'top' ecology journals, and referee language designation [19]. Generalized linear mixed models were used to test for an effect of gender and region on the number of manuscripts reviewed and reported review time. Due to the non-parametric distribution of some data, an ordinal logistic regression was used to test for effects of the above variables on individual reported rejection rates [19]. Tukey HSDs, which control for multiple pair-wise tests, were used for all post-hoc contrasts, with the exception of the latter analysis where a post-hoc contingency table was used to test for differences between levels [19]. A logistic regression was used to test for the relationship between review time and rejection rate [20]. Years since first publication was treated as a covariate in all statistical analyses involving gender. All analyses were conducted with JMP, Version 5.1 [21].

The number of individuals that reviewed for the ten listed journals and those that did not was similar (χ²₁ = 2.88, p = 0.090). However, fewer females (χ²₁ = 25.65, p < 0.001), and fewer respondents from NEFL-designated countries (χ²₁ = 23.46, p < 0.001), reported reviewing for the listed journals. The referees for the listed journals had spent significantly more years publishing than individuals who had not reviewed for these journals (12.97 ± 0.29 vs 6.91 ± 0.29 years ago; F(1,1278) = 216.85, p < 0.001) and spent significantly less time reviewing manuscripts on average (6.86 ± 0.31 vs 7.98 ± 0.33 hours; F(1,1165) = 6.05, p = 0.014). The responses showed that if a respondent refereed for one of the listed journals they were more likely to have also published within such journals (χ²₁ = 409.63, p < 0.001; Figure 1).

Handling of Manuscripts by Referees
Males reviewed significantly more manuscripts than women overall (9.13 ± 0.50 vs 5.56 ± 0.68 manuscripts per year; F(1,6) = 11.06, p < 0.001; Table 1). There was no difference between regions; however, there was a significant interaction between region and gender: European males reviewed significantly more manuscripts than European or North American females (F(2,6) = 4.06, p = 0.018; Table 1; Figure 2a). There was no difference in the review load of North American males and females (t(1,1074) = 1.96, p = 0.192), or between referees from Other countries (p = 0.206). Although there appeared to be a difference between females from Other countries and females of North America and Europe, this was not significant after controlling for multiple comparisons. As expected, respondents who had published over a longer period of time reviewed significantly more manuscripts (F(1,6) = 137.36, p < 0.001; Table 1).
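As an illustration of the chi-square analyses reported above, the snippet below runs a test of independence on a 2 × 2 table of the kind underlying the comparison between reviewing for and publishing in the listed journals. The counts are invented for the example and are not the survey's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = reviewed for listed journals (yes/no),
# columns = published in listed journals (yes/no). NOT the survey's data.
table = np.array([
    [420, 180],
    [150, 500],
])

chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi-square({dof}) = {chi2:.2f}, p = {p:.3g}")
```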
There was significant variation in the time spent reviewing according to region (F(2,6) = 3.07, p = 0.047; Table 1), and Europeans took longer to review than North Americans (7.96 ± 0.45 vs 7.00 ± 0.32 hours). Again, there was a significant interaction between gender and region (F(2,6) = 3.30, p = 0.037; Table 1), and European females spent more time reviewing than male or female North Americans (t(1,1078) = 1.96, p = 0.001; Figure 2b). After controlling for multiple comparisons, no significant difference appeared between European males and females. Similar to review load, males and females from Other countries (p = 0.665) and North American males and females (p = 0.840) did not differ in the time they invested in reviewing manuscripts. Respondents who had spent more years publishing spent significantly less time reviewing manuscripts (F(1,6) = 8.23, p = 0.004).

Self-reported rates of rejection were higher for males than for females, although only marginally significantly so (χ² = 3.73, p = 0.053; Table 1). Post-hoc comparison of gender according to rejection rate showed that females 'accepted' (a self-reported rejection rate of < 25%) significantly more manuscripts than males (χ²₃ = 9.97, p = 0.019; Figure 2c). There was no effect of average time spent reviewing on the typical recommended decision regarding a manuscript (χ²₁ = 2.11, p = 0.147).
The respondent population, a third of which was female, was representative of the general scientific community as documented by the National Science Foundation (NSF) and UNESCO, which independently reported that females comprised 30% of all academic science and engineering doctoral positions in the United States of America [23] and constituted 32% of scientific researchers in Europe [24]. Historically, men have been participating in science longer than women [25], and thus on average have more publishing experience. Our data showed that females had a lower average number of years since their first publication relative to males, and we used this variable as a surrogate for activity within the publishing and refereeing process. In doing so, we presume that individuals have been actively participating in the peer-review process since the time of their first publication, but we recognize that this may not always be the case, particularly for females who may take time off for childbearing. However, while the number of manuscripts reviewed and the time spent reviewing differed according to the time since first publication, there was no effect on the reported decision of a referee regarding manuscript rejection.

In addition to the appropriate representation of genders within our respondent population, there was diversity among regions, with respondents from countries outside of North America comprising over a third of the respondents, an unexpected response given that the survey was distributed through North American-based list-serves. The sampling population was potentially biased, as individuals who consult list-serves can have different characteristics than the bulk of the research community. We were unable to test for response bias, as non-respondents could not be tracked due to the use of list-serves for survey distribution [17,26]. Males reviewed significantly more papers than females. There are two possible explanations for this finding. First, it is possible that males are more likely to be asked to review by editors than females, either because there are more males in research or because males are preferred. Whether this is the case is uncertain and should be explored, as females may represent a currently underutilized cohort within the ecological community. Second, males may be more likely to agree to review a manuscript if asked. However, it is probable that rates of both solicitation and acceptance affect the results obtained.
A gender-by-region interaction for the number of manuscripts reviewed appears to be driven by a difference between European males and females that is not present in other regions. Although it may appear that European males are more efficient referees, reviewing more papers than their female counterparts, the review times show that there was no difference in the amount of time European males and females spend reviewing manuscripts. The only significant difference in review time was for European females, who take substantially longer than North American males and females. This was contrary to previous findings in medicine that male referees spend more time reviewing [16]. This difference might be explained by the size of each discipline. In medicine there may be a greater pool of available referees for manuscript review, resulting in fewer requests per individual. Hence it is possible that medical referees are able to allocate more time per review than ecology and evolution referees, who review more papers. Spending more or less time reviewing may reflect the degree of thoroughness but might also indicate referee efficiency. In this study, time spent reviewing did not correlate with typical rejection rate, which also suggests that efficiency or degree of thoroughness for both positive and negative suggested outcomes is important.

There was no difference in the number of manuscripts reviewed or review time between North American males and females. The absence of a gender gap is promising; a sign that referees in North America are participating in the peer-review process to an equal extent. The survey also showed that gender, but not region of the referee, affected the recommendation to accept. This was consistent with previous work in which males assess manuscripts more strictly [13]. Thus, having referees all of the same gender reviewing a manuscript can inappropriately increase or decrease the likelihood of a recommendation for publication.

The findings also have direct implications for referees who are in academia. Promotion in academia is often tied to the number of manuscripts a scholar has reviewed and the quality of the journals requesting reviews. Females, particularly European females, are at a disadvantage in their prospects for promotion, reviewing fewer manuscripts and reviewing less often for 'top' ecology journals than males. Whether the composition of the editorial board affects the number of manuscripts reviewed by females and by academics from NEFL countries needs to be considered.
Here, we demonstrate that referee gender and region can have implications for the way in which manuscripts are reviewed. The findings demonstrate that males and referees with more publishing years review and reject more manuscripts than females and referees who started publishing more recently, and that Europeans spend the greatest amount of time reviewing. We propose that gender and geographical affiliation be considered by editors when recommending referees for the evaluation of manuscripts. Although the main criterion for choosing referees should be the extent of their expertise in a particular study area, where appropriate, these traits should be balanced. We recognize that ensuring such a balance becomes restrictive; however, editors should be cognizant of referee tendencies according to gender and region when evaluating their recommendations and making a final decision for manuscript publication. In addition, the peer-review system would benefit from developing criteria for selecting referees and establishing more detailed standards for manuscript review [27]. Introduction of these measures will ensure that we control for the net effects of different referees on a given manuscript and generate more equitable reviews.

Table 1. Relationship between manuscript handling and respondent demographics. General linear mixed models were used to test responses for respondent gender, region and the interaction of gender and region on the number of manuscripts reviewed and the time spent reviewing. An ordinal logistic regression was used to examine variation in reported rejection rates. In all cases, years since first publication was included as a covariate (see text for details). doi:10.1371/journal.pone.0003202.t001

A web-based survey of ecologists and evolutionary biologists was designed by the National Centre for Ecological Analysis and Synthesis (NCEAS) Ecobias working group (www.ecobias.org) and was posted online from May 4th, 2006 to November 4th, 2006.

Figure 1. Respondent relationship between publishing and reviewing. Respondent publication and referee activity for the listed 'top' ten ecology journals (see Text S1 for details). doi:10.1371/journal.pone.0003202.g001

Figure 2. Relationships between referee gender and manuscript handling. Panel 2a shows the number of manuscripts reviewed per year, and 2b displays the time it takes to review a manuscript in hours. Data are presented as mean ± SE. Genders and regions not connected by the same letter are significantly different (Tukey HSD, p < 0.05). Panel 2c highlights the rejection frequency among females and males. doi:10.1371/journal.pone.0003202.g002
2016-05-12T22:15:10.714Z
2008-09-12T00:00:00.000
{ "year": 2008, "sha1": "b177251c531e88cb30d99278f530adf8bb9238bd", "oa_license": "CCBY", "oa_url": "https://journals.plos.org/plosone/article/file?id=10.1371/journal.pone.0003202&type=printable", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ad44cd610f290eea211c096c15ef2e19e6492106", "s2fieldsofstudy": [ "Biology", "Environmental Science" ], "extfieldsofstudy": [ "Medicine" ] }
247462479
pes2o/s2orc
v3-fos-license
Pricing Decision for a Closed-Loop Supply Chain with Technology Licensing under Collection and Remanufacturing Cost Disruptions : Closed-loop supply chain (CLSC) management faces collection and remanufacturing cost disruption challenges. This study explores a CLSC system wherein original equipment manufacturers (OEMs) license the third-party remanufacturer (TPR) to bear the remanufacturing activities and investigate pricing decisions in the CLSC, while considering collection and remanufacturing cost disruptions. To obtain the optimal pricing strategy, we develop game theory models under the disruptions of both centralized and decentralized CLSCs. Based on theoretical and numerical analyses, we obtain the following results: (1) Whether or not disruption events occur, the centralized supply chain can better encourage consumers to participate in the collection of used products than a decentralized supply chain; (2) when collection disruption in a large positive region or the remanufacturing cost disruption in a large negative region occurs, OEM and TPR profits will greatly increase, and the OEM will raise the licensing fee to extract more profit from the remanufacturing activity; (3) a certain robust region exists for the retail price and wholesale price when the supply chain faces disruption increase; (4) when the supply chain faces the disruptions, it has great influence on the OEM’s licensing fee but little on the TPR’s acquisition price. The main contributions of the study include: (1) We considered the impacts of both technology licensing and collection and remanufacturing cost disruption; (2) we developed game theory models to determine the optimal manufacturing and remanufacturing quantities, and pricing strategy under the disruptions; (3) based on theoretical and numerical analyses, we presented some interesting and important insights. The results of this paper could provide useful guidelines for supply chain members on how to effectively control costs to obtain more profit by adjusting prices and selecting a better operation mode for the closed-loop supply chain. Introduction In the past decade, the shortage of global resources and environmental pollution has become increasingly serious. Countries worldwide are attempting to achieve sustainable development and operate a circular economy (Subramanian and Subramanyam, 2012). In 2016, the State Council of China issued the implementation plan of the extended producer responsibility system. The system clearly defined that the extended producer responsibility system should be implemented in four categories of products: electronics, automobiles, batteries, and packaging products. The manufacturer should bear the environmental responsibility for the full life cycle of its products, including design, circulation, recycling, waste disposal, and so on. Under the policy's guidance, manufacturers achieve considerable economic and environmental benefits by implementing the operation and management of a closed-loop supply chain (CLSC). For example, Xerox's financial statements demonstrate that the recycling and remanufacturing of copiers saved about $200 million and reduced manufacturing costs of new products by 40-65% [1]. Three strategies for recycling the used products were thus proposed: (1) directly by the manufacturers, (2) by retailers, and (3) through third-party remanufacturers [2]. Choosing proper strategies for recycling used products is crucial [3]. Issuing technology licensing to a third-party remanufacturer (TRP) is one such strategy. 
Some original equipment manufacturers (OEMs) do not participate in remanufacturing activities as they instead focus on manufacturing new products. To adopt sustainable manufacturing practices and take social responsibility, OEMs can entrust a third-party remanufacturer to recycle and remanufacture by means of technology licensing [4]. Technology licensing has been widely adopted by companies to protect their intellectual assets and improve companies' profitability and efficiency [5]. For instance, Apple entrusted Foxconn Group to recycle and reprocess used Apple mobile phones [6]. Qi et al. [7] and Li et al. [8] found that compared with a non-licensing situation, the technology licensing policy can improve social welfare and increase patent holders' profits. The CLSC faces high uncertainty risk because of a series of factors such as the unpredicted availability of recycling products, consumer awareness of environmental protection, and product characteristics. This uncertainty can cause disruptions in the collection and remanufacturing processes. For remanufactured products, the supply of raw materials or parts originates from the collection of used products; hence, the disruption of collection quantities has a great influence on the remanufacturing cost. For example, the 2011 Tohoku earthquake in Japan caused a shortage of electronic components such as semiconductor chips. Because of the sharp rise in remanufacturing costs, downstream remanufacturers were forced to reduce production volume or to find alternative manufacturers. Thus, the problems caused by uncertainty about the quantity and quality of used products, remanufacturing cost, and demand have gained increasing attention. For the traditional supply chain, Wu, Chen, and Zhang [9] found that the dual-channel perishable product supply chain has a certain robustness region under market demand and production cost disruption. For the closed-loop supply chain, Han, Yang, and Hou [10] studied the influence of both the remanufacturing production and market demand disruptions on the motivation of manufacturers to license the third-party remanufacturer to conduct remanufacturing activities. A remanufacturer under technology licensing naturally faces collection uncertainty and disruptions. For example, Caterpillar, one of the world's largest remanufacturers, relies on mature technology and entrusts agent manufacturers in different regions to conduct remanufacturing activities all over the world. They suspended the operation of some plants in the regions affected by COVID-19 due to a shortage of parts in 2020. The negative impact on the operational performance and financial situation of the whole enterprise is substantial [11]. However, to the best of our knowledge, there is no research in CLSC literature considering the impacts of both technology licensing and collection and remanufacturing cost disruption. Hence, we consider the impacts of both technology licensing and collection and remanufacturing cost disruption, and develop game theory models to determine the optimal manufacturing and remanufacturing quantities, and optimal pricing strategy under the disruptions. This study examines the impacts of both collection and remanufacturing cost disruptions on the pricing decisions in a CLSC, where the OEM licenses a TPR for remanufacturing. Further, we analytically discuss the impact of different disruption cases on the optimal pricing strategies and chain members' profits. 
In this study, the following questions are addressed: (1) How do collection and remanufacturing cost disruptions affect the decisions of chain members? (2) How does the OEM adjust the licensing behavior to respond to different disruption cases? The remainder of this paper is organized as follows. Section 2 briefly discusses the relevant literature. We specify the basic assumptions and notation in Section 3. In Section 4, we develop and address centralized and decentralized game theory models with and without disruptions and obtain equilibrium strategies for each disruption case. We compare different disruption cases and analyze the managerial implications of collection and remanufacturing cost disruptions on the optimal pricing decisions in Section 5. In Section 6, we present numerical examples. Finally, the paper is concluded in Section 7, and future research directions are provided. Reverse Logistics Channel for Closed-Loop Supply Chains Many studies have explored the reverse logistics channel for collecting and remanufacturing in a closed-loop supply chain. For collecting processes, Savaskan et al. [12] proposed three collection modes, retailer collection, manufacturer collection, and thirdparty collection, and analyzed the optimal pricing strategies of chain members. In their study, retailer collection was the best choice for the manufacturer. Xiong and Liang [13], Wen and Dong [14], and Yan [15] discussed the advantages and disadvantages of the three recycling models and obtained the optimal recycling channel structure from the perspective of consumer awareness of environmental protection, corporate social responsibility, and remanufacturing the RL process. Liu and Chen [16] constructed profit decision models under three different recovery strategies: manufacturer independent recovery, manufacturer joint retailer recovery and manufacturer integrated third-party recovery under the assumption of corporate social responsibility investment. Zhao et al. [17] structured a dual-channel recycling model. Through the analysis of the recycling competition model, the optimal selection of recycling channels and the optimal pricing of products are obtained. If the manufacturer aims for maximum recycling quantities of the used products, a dual-channel recycling model should be selected. Wei et al. [18] proposed that the product cycle should be considered in the dual-channel recycling model. Thus, he studied the two-stage closed-loop supply chain model with two recycling channels in a dynamic environment and found the optimal strategy for profit and recycling rate maximization. Huang et al. [19] considered the closed-loop supply chain pricing decision model under a mixed recycling model of retailers and third-parties, compared it with a previous single-channel recycling model, and finally proved that the dual-channel recycling model is better than the single-channel recycling model. Bulmus et al. [20] considered the optimal pricing model of recycling and remanufacturing when the OEM and the third-party independent remanufacturer compete and found that the optimal recycling price of the OEM is only related to its own recycling structure and not to the third-party independent remanufacturer. In addition, through the comparison of different recycling models, some studies explored the impact on the recycling price of waste products. Abbey et al. 
[21] found that the price of the remanufactured products under the recycling mode of third-party enterprises is higher than that of the products under the recycling mode of the retailers. In addition to the single recycling model, some scholars study the collection strategies under the mixed recycling model, especially cooperation or competition between manufacturers and third parties. Örsdemir et al. [22] considered the problem in which an OEM and an independent remanufacturer (IR) decide the production quantity under the competitive mode and found that the remanufacturing activities undertaken by the remanufacturer are more conducive to improving the ecological environment and social welfare. Closed-Loop Supply Chains with Technology Licensing Some cases wherein manufacturers and remanufacturers simultaneously participate in the remanufacturing process through a cooperative way, such as technology licensing, have also been referred to in some studies. Arora and Ceccagnoli [23] explored the influ-encing factors and internal operation mechanism of technology licensing through empirical research. Hong et al. [24] considered a two-stage CLSC model and analyzed the impact of technology licensing on production and recycling decisions and found that franchise contracts are more conducive to increasing consumer surplus. Ali Sabbaghnia and Ata Allah Taleizadeh [25] studied the three most common technology licenses in CLSCs and found that technology licenses affect welfare and financial concepts. The above studies investigate cases where OEMs and remanufacturers undertake remanufacturing activities in a competitive or cooperative manner. Zou et al. [6] studied the strategy choice of the third-party remanufacturing mode adopted by manufacturers, and found that the strategy is related to the degree of consumers' acceptance of remanufactured products. When the degree of consumers' acceptance of remanufactured products is high, the outsourcing mode is better than the licensing mode. In contrast, the licensing mode is the best strategy. Huang and Wang [26] considered the influence of different recycling channels on the pricing and recycling strategy of CLSCs with technology licensing. Huang and Wang [27] studied the impact of retailer information sharing on the decisions of manufacturers and third-party remanufacturers in the supply chain and found that the retailer's sharing of demand forecast information is beneficial to the manufacturer, but it has a negative impact on the retailer itself. However, a limitation of these studies is the assumption that the market environment is static, and scarce attention has been focused on licensing activity under a disruption scenario (See Table 1). In real-life cases, some changes might may in the collection and remanufacturing processes because of unexpected economic and environmental crises; in addition, demand disruptions commonly occur in a CLSC. Therefore, we investigate the optimal pricing and production decisions in a CLSC with technology licensing under disruptions. Reference Scope Limitations [23] Operation mechanism Assume that the market environment is static and licensing activity is undisturbed [24,25] Consumer surplus or welfare and financial concepts [6,26,27] Strategy choice Closed-Loop Supply Chains with Disruptions Disruption risk is a significant topic in supply chain management. Many researchers have examined its impact on channel coordination. Qi et al. 
[28] proposed demand disruptions in a two-period supply chain and derived that wholesale quantity discount policies can coordinate both centralized and decentralized supply chains. Zhao and Zhu [29] discussed the coordination of a recycler and a remanufacturer in a closed-loop supply chain with uncertain demand. Gao and You [30] constructed an uncertain model of sustainable supply chain optimization to solve the multiple uncertainties of product demand. Han and Kang [31] explored the influence of the uncertainty of recycling quantity on the acquisition price of recyclers and remanufacturers. However, the above studies only considered one type of disruption factor. In addition, the supply of products is often uncertain. Uncertainty regarding the quantity or quality of raw materials can lead to the disruption of manufacturing cost. Han et al. [32] found that the retailer's recycle model is the best with a positive disruption, and the manufacturer's recycle model is the best in a stable environment or with a negative disruption in a CLSC with remanufacturing cost disruption. Wu et al. [33] also established a supply chain model with two competitive retailers, considering the impact of remanufacturing cost disruption on pricing decisions. An improved revenue-sharing contract is designed to coordinate supply chain profits effectively. The above studies discussed the impact of cost disruption on the supply chain due to the uncertainty of the recycling process. However, disruptions are known to occur at the same time. Teunter and Flapper [34] considered the relationship between product demand and remanufacturing cost and examined the optimal strategy under static and uncertaint environments. Besides the case of manufacturer recycling or retailer recycling, some OEMs actually license TPRs to recycle and remanufacture. Therefore, our work considers a CLSC mode with technology licensing where the OEM licenses the TPR to remanufacture and explore the impact of simultaneous collection and remanufacturing cost disruptions. In contrast, prior work on remanufacturing under technology licensing assumed that the production and collection process is in a stable environment. In fact, as we know, the effect of occurrence of emergencies varies. Hence, our model captures both aspects, where the OEM licenses the TPR to remanufacture in the presence of collection and remanufacturing cost disruptions. Our work explores the optimal pricing decisions in a CLSC with technology licensing under both collection and remanufacturing cost disruptions. To sum up, the main contribution of this study lies in the following aspects: First, we investigated the optimal pricing and production decisions in a CLSC with technology licensing under disruptions. We considered the impacts of both technology licensing and collection and remanufacturing cost disruption. Second, we developed game theory models to determine the optimal manufacturing and remanufacturing quantities, and pricing strategy under the disruptions, i.e., the OEM authorized the TPR for remanufacturing in the presence of collection and remanufacturing cost disruptions. We addressed centralized and decentralized game theory models with/without disruptions, and obtained equilibrium strategies for each disruption case. Furthermore, based on theoretical and numerical analyses, we presented some interesting and important insights. 
The findings gained through this work could provide useful guidelines for supply chain members on how to effectively control costs to get more profit by adjusting price and to choose a better operation mode for closed-loop supply chain. Problem Description and Model Assumptions Based on the previous analysis, in this section, we consider a CLSC where the OEM licenses TPR to collect used products and produce remanufactured products. The OEM is responsible for the fabrication of new products. The OEM and TPR sell new and remanufactured products, respectively, to retailers at the same wholesale price. For instance, Eastman Kodak Company receives single-use cameras from large retailers that also develop film for customers. On average, 76% of the weight of a disposed camera is reused in the production of a new one [12]. The OEM gains the remanufacturing profits from the TPR by charging a licensing fee (Figure 1). In Figure 1, the three partners in the CLSC, the OEM, the TPR, and a retailer, are shown. In forward logistics, the OEM and TPR will produce the same products at the same price. The retailer then sells the new products and remanufactured products to the consumer at the same retail price [12,[35][36][37]. In reverse logistics, the TPR needs to pay a certain amount of technology licensing fee to obtain the OEM's technology licensing and then recycle the waste products from consumers at a certain acquisition price. For the sake of clarity, the relevant notations are provided in Table 2. Table 2. Parameters and decision variables. Decision variables w The unit wholesale price p The unit retail price δ The unit acquisition price for the collected product f The licensing fee q n The quantity of new products q r The quantity of remanufactured products Parameter c n The unit cost of manufacturing new products c r The unit cost of remanufacturing returned products α The market size β Sensitivity of consumers to the retail price ∆ Cost savings per unit of product in the remanufacturing, Remanufacturing cost disruption λ 1 The unit inventory cost λ 2 The unit shortage cost Note New products and remanufactured products are indicated by subscripts n and r, respectively. The following modelling assumptions were used in constructing the model. 2. The acquisition quantity of used products is G(δ) = u + vδ, where u and v represent the acquisition quantity when the acquisition price δ= 0 and the sensitivity of consumers to the acquisition price, respectively. This assumption is similar to the research of Bakal and Akcali [38]. 3. All collected products can be remanufactured successfully and resold. Additionally, we can determine that the quantity of remanufactured products is equivalent to the acquisition quantity of used products (i.e., q r = G(δ)), and the quantity of new products can be calculated as q n = D(p) − G(δ). 4. No difference exists between new and remanufactured products in terms of quality, feature, packaging, and price [39]. With the enhancement of consumers' awareness of environmental protection, an increasing number of consumers are willing to accept the same price of new products, considering that remanufactured products are more conducive to energy conservation and environmental protection [40], especially in the remanufacturing practice of papermaking and some electronic products [41]. 5. 
The unit cost of the remanufactured product is lower than the unit cost of the new product, for c r < c n , and c n − c r = ∆, where ∆ represents cost savings per unit of product in the remanufacturing process and on the assumption of ∆ > r. This assumption is analogous to research into making the remanufacturing process profitable [42]. 6. The manufacturer is the leader, and the market information is completely symmetrical. Each member of the supply chain is risk neutral and expects the maximum profit as the decision-making goal. In the centralized model, when no disruptions occur, the CLSC system is considered as a whole. It aims to maximize the overall benefits of the CLSC, and the total profit function of the centralized CLSC is determined by the following: Model Development In a centralized CLSC with no disruptions, the total profit function is π c jointly concave in p and δ, taking the first-order partial derivatives of π c with respect to p and δ, and letting the derivative be zero, we have By solving the equation system of Equations (2) and (3), we obtain the optimal retail price and acquisition price and p c * = α+c n β 2β , δ c * = v∆−u 2v . Substituting p c * = α+c n β 2β and δ c * = v∆−u 2v into q n and q r , we derive q n c * = α−u−c n β−v∆ 2 and q r c * = u+v∆ 2 . Substituting the values of the parameters back into Equation (1), the supply chain's total profit is π c * = u 2 β+v 2 β∆ 2 +v(α 2 −2c n αβ+β(c n 2 β+2u∆)) 4vβ . Decentralized CLSC Model without Disruptions In a decentralized CLSC, it is considered that the OEM, who acts as the channel leader, first determines the optimal wholesale price and technology licensing fee to maximize their own profits and then the retailer and the TPR decide the retail price and acquisition, respectively. We can obtain the optimal solutions by backward induction. Based on the previous assumptions and analysis, the profit function of the retailer, the OEM, and the TPR are formulated as follows: And: Taking the second-order derivatives of Equations (4) and (6) with respect to and pδ, respectively. We can obtain ∂ 2 π R ∂p 2 = −2β < 0 and ∂ 2 π T ∂δ 2 = −2v < 0. Thus, π R and π T are concave in p and δ, respectively. Taking the first-order partial derivatives of π R and π T with respect to p and δ, respectively, and letting the derivative be zero, we have the following: and Taking the second-order partial derivatives of π M with respect to f and w, we have the Hessian matrix as follows: π M is jointly concave in f and w. Taking the first-order partial derivatives of with π M respect to f and w, respectively, and letting the derivatives be zero, by solving the following , we obtain the optimal price strategies as follows: Substituting Equations (9) and (10) into Equations (7) and (8), we can derive the optimal retail price and acquisition price as follows: Furthermore, the optimal quantities of new and remanufactured products can be obtained and simplified as follows: The CLSC Model with Disruptions When disruption events occur, they will cause collection and remanufacturing cost disruptions in a CLSC. The quantity of remanufactured products will be disrupted to G(δ) = u + ∆ u + vδ (Huang and Wang, 2018 [4]), and the remanufacturing cost will be disrupted to c r = c r + ∆ r (Wu and Han, 2016 [42]). We use the notation with a tilde (or ∼) to represent the disruption. To be specific, assume that u + ∆ u > 0 and ∆ − ∆ r > 0. 
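Before turning to the disrupted cases, the centralized no-disruption benchmark above can be checked symbolically. The sketch below is not from the original paper: it assumes the linear demand function D(p) = α − βp implied by the profit expressions quoted later, and uses SymPy to recover the optimal retail price, acquisition price, and production quantities stated in the text.

```python
import sympy as sp

# Decision variables: retail price p and acquisition price delta; parameters as in Table 2.
p, delta = sp.symbols("p delta", positive=True)
alpha, beta, u, v, c_n, Delta = sp.symbols("alpha beta u v c_n Delta", positive=True)

# Centralized CLSC profit without disruptions: sales margin on total demand plus
# the per-unit remanufacturing saving (net of the acquisition price) on collected units.
demand = alpha - beta * p          # D(p), assumed linear demand
collected = u + v * delta          # G(delta), acquisition quantity of used products
pi_c = (p - c_n) * demand + (Delta - delta) * collected

# First-order conditions and the joint optimum.
sol = sp.solve([sp.diff(pi_c, p), sp.diff(pi_c, delta)], [p, delta], dict=True)[0]
print(sp.simplify(sol[p]))        # (alpha + c_n*beta) / (2*beta)
print(sp.simplify(sol[delta]))    # (v*Delta - u) / (2*v)

# Corresponding quantities of remanufactured and new products.
q_r = sp.simplify(collected.subs(sol))            # (u + v*Delta) / 2
q_n = sp.simplify((demand - collected).subs(sol)) # (alpha - u - c_n*beta - v*Delta) / 2
print(q_n, q_r)
```

The printed expressions reproduce the optimal values quoted in the model-development discussion, which is a quick consistency check on the centralized benchmark.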
In addition, the production plan will be changed by the occurrence of disruption, which will cause the increased inventory cost and shortage cost. We define λ 1 as the unit inventory cost of an increased product and λ 2 as the unit shortage cost of a decreased product, where λ 1 < c n and λ 2 < c n . Moreover, we let λ n1 and λ n2 denote the unit inventory cost of an increased new product and the unit shortage cost of a decreased new product, respectively. Then, let λ r1 and λ r2 denote the unit inventory cost of an increased remanufactured product and the unit shortage cost of a decreased remanufactured product, respectively. Centralized CLSC Model with Disruptions In the centralized model, the entire CLSC system is regarded as a whole, with only one decision maker. The total profit function of the centralized CLSC is determined in three cases under disruption as follows: when q n > q n * and q r < q r * (p − c n )(α − βp) + (∆ − ∆ r − δ)(u + ∆ u + vδ) when q n = q n * and q r = q r * (p − c n )(α − βp) + (∆ − ∆ r − δ)(u + ∆ u + vδ) − λ n2 (q n * − q n ) + − λ r1 ( q r − q r * ) + , when q n q n * and q r q r * , q n and q r represent the demand quantity of new products and remanufactured products under disruptions, respectively. Because shortage and inventory cannot occur at the same time, and the total market demand is constant, when q r > q r * then q n < q n * , or when q n > q n * then q r < q r * . To facilitate the analysis, according to the different ranges of disruption, the occurrence of disruptions can be divided into three situations for discussion. The disruption ranges in different situations are shown in Table 2. The range of values in Table 3 is obtained by comparing the demand of new products and remanufactured products in a disrupted environment with demand in a stable environment. The demand for new products and remanufactured products in a disrupted environment is obtained by substituting the optimal price under the disruptions into the demand function. Table 3. The disruption cases. The results of other cases in Table 2 are analogous. Substituting Equations (16)- (19) into Equation (15), we can obtain the optimal profit π c * : Decentralized CLSC Model with Disruptions When disruption occurs, it will causecollection and remanufacturing cost perturbations in the decentralized CLSC. The assumptions in this model are consistent with the centralized decision-making model under the disruption events and will not be discussed in detail. When the disruption occurs, the profit function of the retailer, OEM, and TPR are formulated as follows: Thus, the profit function of TPR can be determined: In addition, the profit function of OEM can be determined as follows: Assuming that q r > q r * , using the second-order derivatives of Equations (21) and (24) with respect to p and δ, respectively, we obtain ∂ 2 π R ∂p 2 = −2β < 0 and ∂ 2 π T ∂δ 2 = −2v < 0. Thus, π R and π T are concave in p and δ. The optimal retail price and p * the optimal acquisition price δ * are solved as: Taking the second-order partial derivatives of π M with respect to f and w, we have the Hessian matrix as follows: and |H M | = βv > 0, π M is jointly concave in f and w. Considering the first-order partial derivatives of π M with respect to f and w, respectively, and letting the derivatives be zero, by solving the following equation system , we can obtain the optimal price strategies as follows: w * = α+c n β−βλ n2 2β and f * = β(u+∆ u )+v(α−β(c r +∆ r +λ r1 )) 2vβ . 
Substituting w * and f * into Equations (26) and (27), we can obtain the optimal retail price and acquisition price p * = 3α+c n β−βλ n2 4β and δ * = ∆v−3u−v∆ r −3∆ u −vλ n2 −vλ r1 4v . The other cases of q n and q r are similar. Proposition 1. When disruption causes the collection and remanufacturing cost perturbation, the optimal price strategies of supply chain members are as follows: Comparisons with Managerial Implications In this section, the optimal results derived in Section 4 are compared and some preliminary corollaries are obtained. Corollary 1. In a centralized CLSC, both the optimal retail price and optimal quantity are given as follows: The proof of Corollary 1 is shown in Appendix A. (The other proofs of Corollary can also be found in Appendix A) Corollary 1 indicates that when the collection disruption occurs in a relatively small region, a certain robust region for the retail price exists, regardless of the value of ∆ u . The occurrence of disruptions does not strictly affect the chain member's decision. A counterbalance and restriction between the deviation of acquisition quantity and remanufacturing cost exists. Basically, the negative effects of one disruption are offset by the positive effects of another disruption. In the case of ∆ u > βλ n2 , with an increase in ∆ u , more used products being collected means more supply quantities for remanufacturing, remanufacturing cost decreases owing to the scale effect, and demand quantity will increase because of the lower retail price. While, if ∆ u < −βλ n1 , the remanufacturing cost will increase with the decrease in ∆ u , so the demand quantity of remanufactured products will decrease owing to the higher retail price. Moreover, the adjustment of the optimal price policies is only related to the increased stand λ 1 λ 2 . Corollary 2. In a centralized CLSC, acquisition price under the large positive disruption region of ∆ u is lower than that under small disruption region of ∆ u . The acquisition price under a large negative disruption region of ∆ u is higher than that under the small disruption region of ∆ u . price in centralized CLSC. Thus, the retail price in centralized CLSC is less than that in decentralized supply chain. Corollary 5. In the same disruption region, we have p * > p c * and δ * < δ c * . Corollary 5 indicates that regardless of the retail price of the centralized CLSC being lower than that of the decentralized supply chain, the acquisition price of centralized CLSC remains higher than that of the decentralized supply chain. From Corollaries 4 and 5, whether or not a disruption event occurs, the retail price of centralized CLSC is lower than that of the decentralized supply chain, and the acquisition price of centralized CLSC is higher than that of the decentralized supply chain. The centralized supply chain is considered more conducive to encouraging consumers to participate in the collection of used products and remanufacturing. Corollary 6. In the decentralized mode, the acquisition price is negatively correlated with the collection and remanufacturing disruption costs. The technology licensing fee is positively correlated with the collection disruption and negatively correlated with remanufacturing disruption cost. Corollary 6 indicates that, with a positive increase in collection disruption, the acquisition price will decrease and the technology licensing fee will increase. The OEM will raise the licensing fee. 
According to Corollary 2, under this circumstance, the TPR can collect more used products with a lower acquisition price for their own profit. Similarly, when the remanufacturing cost increases due to disruption events, the TPR will reduce the acquisition price of used products to ensure cost savings for remanufacturing. Based on the above discussion, we obtain the following managerial implications: (1) From the perspective of decentralization, when the remanufacturing cost is reduced due to the sudden increase in the quantities of used products, the OEM should set a lower wholesale price, a higher technology licensing price, and extract more profits from the TPR. To ensure their profits, the TPR will set a lower acquisition price to control costs. The retailer's profit depends on the wholesale and retail prices. Because the manufacturer sets a lower wholesale price, the retailer will gain more profit when the acquisition volume is positively disrupted. (2) From the perspective of a centralized CLSC, when recycling quantity is reduced to a certain extent due to the occurrence of disruption events, the profits of a centralized CLSC will be reduced. In other cases, if the supply chain wants to obtain higher profits, it needs to consider the relationships between various factors. Finally, regardless of whether disruption occurs, the centralized CLSC will decide a lower retail price and higher acquisition price than the decentralized CLSC. Thus, the centralized CLSC can more easily stimulate consumers to participate in remanufacturing activities. Numerical Examples To further examine the results of the above discussion and find potential patterns of change, we first focus on how the change in the disruption ∆ u when ∆ r = 0 holds (and ∆ r when ∆ u = 0 holds) affects the profit of OEM and TPR, respectively. Second, we compare and analyze the impacts of the change of the disruption ∆ u when ∆ r = 0 holds (or ∆ r when ∆ u = 0 holds) on the acquisition price and licensing fee, respectively. The relevant parameters are set according to the conditions as follows: α = 300, c n = 30, c r = 10, β = 5, σ = 2, λ n1 = 0.9, λ n2 = 1.2, λ r1 = 1, λ r2 = 1.5, u = 15, v = 15, u 1 = −0.53, and u 2 = 1.75. 6.1. The Profits of OEM and TPR with Different ∆ u and ∆ r As clearly shown in Figure 2, the profits of the OEM and TPR increase with the increase in ∆ u . In ∆ u a large negative region, the profits of the OEM and TPR are lower than those in the small range of ∆ u . Owing to the relatively small profit of the TPR, the OEM cannot extract much profit from charging the licensing fee, and the OEM will reduce the licensing fee. When ∆ u is in a large positive region, the profits of the OEM and TPR will be increasing greatly. OEM will raise the licensing fee to extract more profit from the TPR from remanufacturing. As clearly shown in Figure 3, when ∆ u = 0 is satisfied, the profit of the OEM decreases with an increase in ∆ r . When ∆ r is in a large negative region, the TPR will obtain more profit from remanufacturing, which means that the OEM can extract more profit from remanufacturing. When ∆ r is in a large positive region, the OEM and TPR will lose much profit because of the higher remanufacturing cost. From Figures 2 and 3, the impact of ∆ u and ∆ r on the profit of the TPR is similar to that on the profit of the OEM. In the same disruption range, the impact of the change in collection disruption on the profit is more stable than that of remanufacturing cost disruption. 6.2. 
The Acquisition Price and the Licensing Fee with Different ∆ u and ∆ r As described in Figures 4 and 5, in the same disruption range, the acquisition price will decrease with the increase in ∆ u and ∆ r . Overall, the acquisition price under different disruption regions remains flat. From Figures 6 and 7, the licensing fee increases with an increase in ∆ u . This will decrease with an increase in ∆ r . Overall, the licensing fee under different disruption regions changes in a large range. Conclusions and Further Research In this study, we take the CLSC in which the OEM licenses the TPR to remanufacture as the research object and study the factors affect the optimal pricing decision and the overall profit of the supply chain when the collection and remanufacturing cost disruptions occur simultaneously. Our research provides the following findings. 1. Whether or not disruption events occur, the centralized supply chain could better encourage consumers to participate in the collection of used products than a decentralized supply chain. 2. When collection disruption in a large positive region or the remanufacturing cost disruption in a large negative region occurs, the OEM' and TPR profits will greatly increase, and the OEM will raise the licensing fee to extract more profit from the remanufacturing activity. 3. A certain robust region exists for the retail price and wholesale price when the supply chain faces disruption increase. The OEM should raise the licensing fee to extract more profit from the remanufacturing activity and the TPR should keep the acquisition price basically unchanged when the collection disruption and remanufacturing cost disruption in a large region. 4. When the disruptions occur, they have a great influence on the OEM's licensing fee but little on the TPR's acquisition price. The findings gained through this work can provide useful guidelines for supply chain members on how to effectively control costs to get more profit by adjusting price and selecting a better operation mode for a closed-loop supply chain. This paper may be extended in the following aspects. It is assumed that no difference between new and remanufactured products exists. Remanufactured products not reaching the same specifications as the new products is thus a more realistic case. Further, this paper does not consider the dual-channel supply chain structure and the case of licensing multiple TPRs to participate in remanufacturing. For future study, we can consider more practical cases in which new and remanufactured products are treated at different prices. In addition, blockchain technology, as a distributed digital ledger technology that ensures transparency, traceability, and security, has shown promise for easing some global supply chain management problems, and has attracted practitioner attention in the supply chain domain [43][44][45][46]. Specifically, blockchain technology also aids in environmental supply chain sustainability, such as tracking and identifying further transaction information both of new products and used products, improving the recycling, etc. [46]. We also plan to integrate circular blockchain platforms for TPR and OEM, as suggested by [44].
2022-03-16T15:19:27.386Z
2022-03-12T00:00:00.000
{ "year": 2022, "sha1": "4e71ffbdea8300feded3c11a70eb28d1d918f8d0", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2071-1050/14/6/3354/pdf?version=1647591620", "oa_status": "GOLD", "pdf_src": "ScienceParsePlus", "pdf_hash": "c722ea975f21a4983d9002af1827790d41fd9871", "s2fieldsofstudy": [ "Business", "Engineering" ], "extfieldsofstudy": [] }
264530760
pes2o/s2orc
v3-fos-license
A Machine Learning Algorithm Predicting Acute Kidney Injury in Intensive Care Unit Patients (NAVOY Acute Kidney Injury): Proof-of-Concept Study Background Acute kidney injury (AKI) represents a significant global health challenge, leading to increased patient distress and financial health care burdens. The development of AKI in intensive care unit (ICU) settings is linked to prolonged ICU stays, a heightened risk of long-term renal dysfunction, and elevated short- and long-term mortality rates. The current diagnostic approach for AKI is based on late indicators, such as elevated serum creatinine and decreased urine output, which can only detect AKI after renal injury has transpired. There are no treatments to reverse or restore renal function once AKI has developed, other than supportive care. Early prediction of AKI enables proactive management and may improve patient outcomes. Objective The primary aim was to develop a machine learning algorithm, NAVOY Acute Kidney Injury, capable of predicting the onset of AKI in ICU patients using data routinely collected in ICU electronic health records. The ultimate goal was to create a clinical decision support tool that empowers ICU clinicians to proactively manage AKI and, consequently, enhance patient outcomes. Methods We developed the NAVOY Acute Kidney Injury algorithm using a hybrid ensemble model, which combines the strengths of both a Random Forest (Leo Breiman and Adele Cutler) and an XGBoost model (Tianqi Chen). To ensure the accuracy of predictions, the algorithm used 22 clinical variables for hourly predictions of AKI as defined by the Kidney Disease: Improving Global Outcomes guidelines. Data for algorithm development were sourced from the Massachusetts Institute of Technology Lab for Computational Physiology Medical Information Mart for Intensive Care IV clinical database, focusing on ICU patients aged 18 years or older. Results The developed algorithm, NAVOY Acute Kidney Injury, uses 4 hours of input and can, with high accuracy, predict patients with a high risk of developing AKI 12 hours before onset. The prediction performance compares well with previously published prediction algorithms designed to predict AKI onset in accordance with Kidney Disease: Improving Global Outcomes diagnosis criteria, with an impressive area under the receiver operating characteristics curve (AUROC) of 0.91 and an area under the precision-recall curve (AUPRC) of 0.75. The algorithm’s predictive performance was externally validated on an independent hold-out test data set, confirming its ability to predict AKI with exceptional accuracy. Conclusions NAVOY Acute Kidney Injury is an important development in the field of critical care medicine. It offers the ability to predict the onset of AKI with high accuracy using only 4 hours of data routinely collected in ICU electronic health records. This early detection capability has the potential to strengthen patient monitoring and management, ultimately leading to improved patient outcomes. Furthermore, NAVOY Acute Kidney Injury has been granted Conformite Europeenne (CE)–marking, marking a significant milestone as the first CE-marked AKI prediction algorithm for commercial use in European ICUs. 
Introduction Acute kidney injury (AKI) is recognized as a major global public health concern, leading to increased morbidity and mortality, with associated high financial health care costs and a major social impact [1,2].The incidence of AKI in the intensive care unit (ICU) has increased over the past decade due to increased acuity as well as improved recognition.A multinational epidemiological study has shown that the incidence of AKI in the ICU exceeds 50% (18% in stage 1, 9% in stage 2, and 30% in stage 3) [3].The development of AKI in ICUs is independently associated with increased ICU length of stay, risk of long-term renal dysfunction (chronic kidney disease and end-stage renal disease), and short-and long-term mortality [4,5]. The definition of AKI has evolved from the risk, injury, failure, loss, and end-stage criteria and the AKI network classification to the Kidney Disease: Improving Global Outcomes (KDIGO) classification [6,7].These definitions are based exclusively on serum creatinine and urine output.Timely recognition of AKI has been challenged by limitations associated with the traditional parameters used for diagnosis.Renal impairment typically precedes changes in serum creatinine and urine output.Thus, the current AKI diagnostic and staging strategy only detects AKI after renal injury or impairment has already occurred. Late AKI diagnosis and its heterogeneous nature have been identified as contributing factors to the limited efficacy observed in drug trials targeting this condition.Studies have indicated that early diagnosis and treatment of reversible AKI reduces mortality [5].Therefore, an AKI diagnosis based solely on creatinine level and urine volume does not meet the clinical demand.Once AKI has developed, there are no treatments available to reverse or restore renal function other than supportive care, emphasizing the importance of early identification and prevention [8][9][10][11][12][13][14][15]. Extensive research has been carried out to try developing new biomarkers, AKI prediction models, and scoring systems based on risk factors.In recent years, the use of electronic health records (EHRs) has become widespread, and the introduction of artificial intelligence has provided new methods for mining massive medical data and training models based on machine learning algorithms.AKI is well-suited for prediction and risk forecasting based on routinely collected data contained within ICU EHRs, as the KDIGO consensus definition for AKI allows for temporal anchoring of events. The Acute Dialysis Quality Initiative convened a group of key opinion leaders and stakeholders to discuss how best to approach AKI research and care in the "big data" era [16].Acute Dialysis Quality Initiative recommends developing tools for predicting AKI, defined as KDIGO stage 2 or 3, rather than targeting all AKI stages.KDIGO stage 1 can be viewed more as a "risk of AKI." Traditionally, AKI predictors or risk factors have been more strongly associated with higher-severity AKI [17,18].This stronger association will likely result in more powerful and robust predictive machine learning algorithms. 
Previously published machine learning AKI prediction algorithms have, at least in recent years, shown robust prediction accuracy.However, the absolute majority of the studies are retrospective, single-database studies.Many studies have focused on subspecialized conditions such as cardiac surgery, trauma, and burns.Very few models have been externally or prospectively validated, which limits the generalizability of the models. To the best of our knowledge, no model has yet taken the final step in the validation process, testing the impact on patient outcomes in randomized clinical trials when used as a clinical decision support tool for making bedside real-time predictions. In this proof-of-concept study, we have developed, using machine learning methods, an algorithm for early continuous predictions of AKI at KDIGO stage 2 or 3 in a broad critical care setting.This algorithm uses only clinical data routinely collected from the time of admission to the ICU and is designed to be integrated as a clinical decision support tool in EHR systems. Data Set and Study Population The algorithm for predicting AKI was developed based on the Massachusetts Institute of Technology Lab for Computational Physiology Medical Information Mart for Intensive Care IV (MIMIC-IV) clinical database [19,20].This database contains demographics, vital signs, laboratory tests, medications, and more for 53,150 adult ICU patients (76,540 ICU stays) admitted to an ICU or emergency department between 2008 and 2019.AKI onset was defined as the time of the first onset of KDIGO stage 2 or stage 3 [6,7]. Patients included in the analysis (Figure 1 and Table 1) had at least 1 measurement of each of the variables included in the algorithm and were aged 18 years or older at the time of admission.Differences between the AKI and non-AKI cohorts were assessed by appropriate tests of statistical significance (Welch t test for numerical variables, Fisher exact test, or chi-square test for categorical variables).No adjustment was made for multiple comparisons. To ensure that spurious variables were excluded and the most important variables were included, a preselection of the variables was done in cooperation with medical professionals.Hence, the algorithm was based on the following 22 variables: age, sex, heart rate, respiratory rate, body temperature, systolic blood pressure, diastolic blood pressure, vasopressor use, pH, glucose, lactate, serum creatinine, bilirubin, blood urea nitrogen, leukocytes, thrombocytes, oxygen saturation pulse oximetry, fraction of inspired oxygen, partial pressure of oxygen, International Normalized Ratio, Glasgow Coma Scale, and urine output.Hourly values were used, and a last observation carried forward approach was used for any hours with missing information.For any hours with more than one measurement, hourly averages were used.Feature engineering was performed to obtain 2 additional variables: the creatinine ratio (ratio of the current value of creatinine to the minimum creatinine value during the last 7 days) and the creatinine difference (difference between the current value of creatinine and the minimum creatinine value during the last 2 days).All the variables were then standardized by the mean and SD of the training population.No additional feature engineering was deemed necessary. 
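The preprocessing steps described above (hourly averages, last observation carried forward, the two creatinine features, and standardization by the training population) can be sketched with pandas. This is an illustrative reconstruction rather than the study's code; the input file, column names (stay_id, charttime, variable, value, creatinine), and the placeholder train split are assumptions.

```python
import pandas as pd

# Hypothetical long-format table of ICU measurements: one row per stay, variable, and timestamp.
raw = pd.read_csv("icu_measurements.csv", parse_dates=["charttime"])

def make_hourly_features(stay: pd.DataFrame) -> pd.DataFrame:
    """Build the hourly feature matrix for a single ICU stay."""
    hourly = (
        stay.pivot_table(index=pd.Grouper(key="charttime", freq="1h"),
                         columns="variable", values="value", aggfunc="mean")  # hourly averages
            .ffill()                                                          # last observation carried forward
    )
    # Feature engineering described in the paper: creatinine relative to recent minima.
    creat = hourly["creatinine"]
    hourly["creatinine_ratio"] = creat / creat.rolling("7D", min_periods=1).min()
    hourly["creatinine_diff"] = creat - creat.rolling("2D", min_periods=1).min()
    return hourly

features = pd.concat(
    {sid: make_hourly_features(g) for sid, g in raw.groupby("stay_id")},
    names=["stay_id", "charttime"],
)

# Standardize by the mean and SD of the training population.
train_stay_ids = set(raw["stay_id"].unique()[:1000])  # placeholder split; the study used separate train/validation/test sets
train_idx = features.index.get_level_values("stay_id").isin(train_stay_ids)
mu, sigma = features[train_idx].mean(), features[train_idx].std()
features_z = (features - mu) / sigma
```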
Machine Learning Algorithm Development The algorithm was developed using a hybrid ensemble model [21,22] consisting of a Random Forest (Leo Breiman and Adele Cutler) and an XGBoost model (Tianqi Chen) [23].This method effectively combines both models, and the final risk score is a weighted combination of the predictions from both models.This method was chosen based on its strong performance with tabular data.Each of the 2 models could face difficulties predicting in specific situations, and their combination acts as a safety net to mitigate the mistakes of each other, reducing the impact of their potential individual errors.Data were preprocessed using R (The R Project), and the models were executed using XGBoost [23] and Sci-Kit Learn (David Cournapeau) [24] backends in Python (version 3.8; Python Software Foundation).The model's hyperparameters were selected using a sparse grid search, exploring a reasonable number of hyperparameter combinations while excluding combinations that would obviously underperform or not substantially enhance performance.The XGBoost model used the following nondefault hyperparameters: "max_depth" = 8, "learning_rate" = 0.2, "reg_lambda" = 1.2, and "min_child_weight" = 4. Training stopped if the validation error had not decreased for the last 10 training rounds.Area under the receiver operating characteristic curve (AUROC) was used as the evaluation metric.The Random Forest, executed with Sci-Kit Learn, used the following hyperparameters: "max_features" = 0.5, "min_samples_leaf" = 10, and "n_estimators" = 300.The models were then combined with weights of 0.25 for the Random Forest model and 0.75 for the XGBoost model. The data were split into 3 separate data sets: 1 training set to train the model, 1 validation set to continuously evaluate performance for different hyperparameter combinations, and 1 test set, which was held out to test the final model's performance.Random onset matching [25] was used, randomly selecting 4-hour sequences with the last time point 12 hours before AKI onset for patients with AKI or at any point during the entire ICU stay for patients without AKI.The time points were sampled to maintain a similar distribution of time since admission to the ICU in both populations.Since the algorithm was initially planned for implementation in the Nordic countries, data were sampled to maintain a prevalence of AKI of 22% in all 3 data sets, resembling the prevalence of AKI stages 2 and 3 in Nordic ICU patients [26].This also facilitated comparisons between the data sets, as the AUROC, area under the precision-recall curve (AUPRC), and accuracy are influenced by prevalence.A prediction horizon of 12 hours was chosen to predict AKI as early as possible as well as to minimize performance degradation observed in longer prediction horizons. 
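A minimal sketch of the hybrid ensemble as described: a Random Forest and an XGBoost model with the reported hyperparameters, early stopping after 10 rounds without improvement on the validation set with AUC as the evaluation metric, and a weighted 0.25/0.75 combination of the two predicted probabilities. The feature matrices X_train, X_val, X_test and labels y_train, y_val are assumed to be the 4-hour sequences prepared upstream and flattened to fixed-length vectors; this is not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from xgboost import XGBClassifier

# XGBoost model with the nondefault hyperparameters reported in the paper.
xgb_model = XGBClassifier(
    max_depth=8,
    learning_rate=0.2,
    reg_lambda=1.2,
    min_child_weight=4,
    eval_metric="auc",
    early_stopping_rounds=10,
)
# Random Forest with the reported hyperparameters.
rf_model = RandomForestClassifier(max_features=0.5, min_samples_leaf=10, n_estimators=300)

xgb_model.fit(X_train, y_train, eval_set=[(X_val, y_val)], verbose=False)
rf_model.fit(X_train, y_train)

def hybrid_risk_score(X) -> np.ndarray:
    """Weighted combination of the two models' predicted AKI probabilities (0.25 RF + 0.75 XGBoost)."""
    return 0.25 * rf_model.predict_proba(X)[:, 1] + 0.75 * xgb_model.predict_proba(X)[:, 1]

risk_scores = hybrid_risk_score(X_test)  # risk of AKI onset within the 12-hour prediction horizon
```

Combining the two tree ensembles in this way acts as the safety net described above: each model's occasional errors are dampened by the other's prediction.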
Performance To assess performance, receiver operating characteristic (ROC) were calculated, that is, the proportion of true positives (sensitivity) in relation to the proportion of false positives (1specificity).Based on the ROC curve, an operating point (threshold) was chosen for classifying patients with a high risk of developing AKI.True positives were patients with AKI that were accurately predicted by the algorithm 12 hours before AKI onset, and false positives were patients without AKI that were wrongly predicted by the algorithm to be at risk of developing AKI.The operating point for the algorithm was chosen to keep sensitivity (the proportion of true positives) around 0.80 while maximizing specificity (the proportion of true negatives) to minimize the false alert rate while ensuring high sensitivity.The algorithm should ideally provide a high proportion of true positives and a low proportion of false positives, corresponding to a large AUROC.The AUPRC is also important, where a large area represents both high recall (low false negative rate) and high precision (low false positive rate).High scores for both recall and precision show that the algorithm yields accurate results (high precision) and captures the majority of all positive results (high recall).Accuracy is the proportion of correct predictions, and positive predictive value is the proportion of predicted AKI cases that are true AKI cases. Variable Importance For the sake of model interpretability, variable importance was calculated using the kernel SHAP (Shapley Additive exPlanations) method [27].The SHAP method calculates Shapley values for each prediction, and the Kernel SHAP uses a weighted linear regression to compute these values.Figure 2 presents an example of a graphic obtained with the SHAP values calculated on the hold-out data.Figure 2A illustrates the distribution of the SHAP values for each variable.To evaluate the global contribution of each variable independently of time, the SHAP value was summed over time for each variable, yielding Figure 2B.According to Figure 2, urine output, creatinine ratio, and Glasgow Coma Scale are the most contributing variables to the model for the hold-out data set.Example of interpretation: urine output at t (12 hours before acute kidney injury onset) is the most important parameter, as it has the largest absolute SHAP value.The blue color indicates that a low urine output value will increase the predicted risk.Creatinine ratio 12 hours before onset is the second most important parameter, as it has the second largest absolute SHAP value.The red color indicates that a high creatinine ratio value will increase the predicted risk. Ethical Considerations As this study is based on a publicly available database, an ethics review was not sought.The MIMIC-IV contains deidentified data, where patient identifiers have been carefully eliminated in compliance with the HIPAA (Health Insurance Portability and Accountability Act) safe harbor provision.The process of gathering patient data and establishing the research database underwent evaluation by the institutional review board at the Beth Israel Deaconess Medical Center.They granted an exemption from the requirement for informed consent and gave their approval for the data sharing endeavor [19,20]. Results The AUROC for the developed algorithm was as high as 0.91 (Figure 3 and Table 2) when predicting 12 hours before onset. 
The AUPRC was 0.75 on training data and 0.71 on test data when predicting 12 hours before onset (Table 2). The sensitivity, specificity, and accuracy of the algorithm were all high (sensitivity 0.84-0.85, specificity 0.85-0.87, and accuracy 0.84-0.85; Table 2). The chosen operating point yielded a positive predictive value of 0.61 on training data and 0.63 on test data when predicting 12 hours before onset (Table 2). This metric was expected to be lower than the sensitivity, specificity, and accuracy due to the class imbalance. At the chosen operating point (sensitivity 80%; Table 2), a fraction of the much larger non-AKI group is still flagged as false positives, so the algorithm predicts more AKI cases than actually occur. Comparing the distribution of AKI predictions made by the algorithm with the distribution of actual AKI cases (prevalence), the algorithm classified 29% of patients in the training data and 28% in the test data as high risk of AKI (Table 2), which is somewhat larger than the prevalence of 22%.

Principal Results In this study, we developed a machine learning algorithm, NAVOY Acute Kidney Injury, for early continuous predictions of stage 2 and 3 AKI in ICU patients. The algorithm was trained on data from a broad critical care setting (the MIMIC-IV clinical database) and was designed for integration as a clinical decision support tool within EHR systems in ICUs. To optimize its use as a prospective clinical decision support tool, it was designed to make fully automated continuous predictions based on real-time data routinely collected in ICU EHR systems, using variables collected from the time of admission and 4 hours of input. This allows high-performance risk assessments for AKI in adult patients to be provided to clinical staff within only a few hours after ICU admission. Specificity (proportion of true negatives) was prioritized to reduce false alarms, which is especially relevant in clinical decision support tools since interventions might carry some risk. This also decreases the risk of alarm fatigue, which is a well-known phenomenon in critical care settings.

The AUROC of NAVOY Acute Kidney Injury was 0.91 for predictions 12 hours before AKI onset, and this result was consistent between training and test data, indicating that the algorithm yields a high proportion of true positives and a low proportion of false positives. NAVOY Acute Kidney Injury has been externally validated at Skåne University Hospital in Sweden (ClinicalTrials.gov NCT05424874, data on file) and obtained Conformite Europeenne (CE)-marking, making it the first CE-marked AKI prediction algorithm for commercial use in European ICUs.

Limitations NAVOY Acute Kidney Injury was trained on a US adult population (MIMIC-IV), and the evaluation was performed on a hold-out data set from the same population, which may limit its generalizability and suggests a need for additional external validation (ongoing research). Additionally, the evaluation was based on retrospective data, which could lead to inconsistencies in data recording and necessitates prospective validation before putting the algorithm to use in clinical practice. Furthermore, the calculation of the creatinine ratio used the first creatinine value following ICU admission as the baseline, not the first value in the patient's hospital stay, potentially missing some cases on the first day of their ICU stay.
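As a concrete illustration of the evaluation procedure described in the Performance section (AUROC, AUPRC, and an operating point chosen to hold sensitivity near 0.80 while maximizing specificity), a minimal sketch follows. The arrays y_true and risk are assumed outcome labels and algorithm risk scores at the matched prediction time points; this is not the study's evaluation code.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score, roc_curve

# y_true: 1 if the patient developed AKI stage 2/3, 0 otherwise; risk: hybrid model scores.
auroc = roc_auc_score(y_true, risk)
auprc = average_precision_score(y_true, risk)  # area under the precision-recall curve

# Operating point: keep sensitivity at or above 0.80 while maximizing specificity.
fpr, tpr, thresholds = roc_curve(y_true, risk)
candidates = np.where(tpr >= 0.80)[0]
best = candidates[np.argmin(fpr[candidates])]  # lowest false-positive rate among qualifying points
threshold = thresholds[best]

pred = (risk >= threshold).astype(int)
sensitivity = tpr[best]
specificity = 1 - fpr[best]
ppv = pred[y_true == 1].sum() / max(pred.sum(), 1)  # positive predictive value
print(f"AUROC={auroc:.2f} AUPRC={auprc:.2f} sens={sensitivity:.2f} spec={specificity:.2f} PPV={ppv:.2f}")
```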
Comparison With Previous Work Most previously published machine learning AKI prediction algorithm studies are retrospective and single-database studies, often focusing on specific conditions such as cardiac surgery, trauma, and burns.Few models have been externally or prospectively validated, limiting their generalizability. In a review of 19 published machine learning AKI prediction algorithms by Gameiro et al [28], one model was prospectively validated in an ICU setting [29].This model was developed to predict AKI based solely on creatinine.Baseline creatinine values were defined as the lowest creatinine value identified in the 3 months before, not including admission.Predictions were made upon ICU admission (AUROC 0.80), on the first morning in the ICU (AUROC 0.94), and after 24 hours of ICU stay (AUROC 0.95). Yu et al [30] recently published a review of machine learning models for AKI.A total of 13 algorithms were studied in a critical care setting comparable to our patient cohort.Performance was reported as AUROC, ranging from 0.69 to 0.926.The model with the highest reported AUROC was designed to predict whether patients with AKI stages 1 or 2 will progress to AKI stage 3 [31].One model, designed to make daily predictions, was externally and prospectively validated, with an AUROC of 0.86 [32]. As pointed out by Moor et al [25], it can be difficult to compare studies based on measures such as AUROC or accuracy, as these measures are directly affected by the prevalence of the studied condition.Even studies from the same database can be difficult to compare due to differences in data extraction and data preprocessing methods.In situations where there is an imbalance, such as in AKI prediction, where the number of patients without AKI is substantially greater than those with AKI, the AUPRC should be reported.While AUROC is primarily affected by specificity and sensitivity, AUPRC is more dependent on the balance between precision and recall.An algorithm can have a very high AUROC, but a much lower AUPRC if the prevalence is very low.However, the NAVOY Acute Kidney Injury algorithm has a high AUROC as well as a high AUPRC, indicating that the algorithm provides accurate results (high precision) and returns a majority of all positive results (high recall).Direct comparisons with previously published AKI algorithms are, however, challenging since none of them have presented AUPRC values. To the best of our knowledge, no machine learning AKI prediction algorithm has yet taken the final step in the validation process, testing the impact on patient outcomes in randomized clinical trials when used as a clinical decision support tool for making real-time bedside predictions.Dascena Inc had planned a clinical trial for the Previse AKI prediction algorithm [33] but this study has been withdrawn (ClinicalTrials.govNCT04200950).A clinical trial has been conducted with the Mayo Clinic AKI Sniffer [34,35], but results have not yet been published (ClinicalTrials.govNCT01621152). 
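The dependence of AUPRC on prevalence noted above can be demonstrated numerically. The following sketch draws synthetic scores from fixed class-conditional distributions, so discrimination (AUROC) stays essentially constant while the prevalence is varied; it is a generic illustration, not a re-analysis of any of the cited models.

```python
import numpy as np
from sklearn.metrics import roc_auc_score, average_precision_score

rng = np.random.default_rng(1)

def simulate(prevalence, n=100_000):
    """Class-conditional score distributions are fixed, so AUROC is ~constant."""
    y = rng.binomial(1, prevalence, size=n)
    scores = np.where(y == 1, rng.normal(1.5, 1.0, n), rng.normal(0.0, 1.0, n))
    return roc_auc_score(y, scores), average_precision_score(y, scores)

for p in (0.50, 0.22, 0.05, 0.01):
    auroc, auprc = simulate(p)
    print(f"prevalence={p:.2f}  AUROC={auroc:.3f}  AUPRC={auprc:.3f}")
```

As the prevalence drops, AUROC barely moves while AUPRC falls sharply, which is exactly why both metrics are reported for NAVOY Acute Kidney Injury.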
Future Work
While NAVOY Acute Kidney Injury shows promise, further research is needed to assess its generalizability and clinical utility. External validation in diverse patient cohorts and prospective clinical trials are essential steps toward establishing the algorithm as a reliable clinical decision-support tool. In future implementations of the algorithm at different institutions, an initial "silent" period is planned, during which the predictions will not be presented. This period will facilitate a prospective comparison between the predictions and the actual onset of AKI and will thereby enable calibration of the model to ensure that the algorithm functions as expected at each institution before going live. We have developed a technical platform for real-time predictions, which is currently being tested with our sepsis prediction algorithm, NAVOY Sepsis [36], in the ICU at the Southern General Hospital in Sweden (ClinicalTrials.gov NCT05095220). In future research, we intend to clinically validate NAVOY Acute Kidney Injury in a similar fashion. The integration of NAVOY Acute Kidney Injury into ICU settings holds potential for improving real-time patient care and outcomes.

Conclusions
AKI affects a large proportion of ICU patients and is associated with significant morbidity and mortality. Currently, AKI is diagnosed using the KDIGO classification based on serum creatinine and urine output, parameters that typically lag behind renal injury. We have developed a machine learning AKI prediction algorithm, NAVOY Acute Kidney Injury, that predicts the risk of AKI (KDIGO stage 2 or stage 3) with high accuracy up to 12 hours before onset. The algorithm uses variables routinely collected and contained in ICU EHRs and could serve as a valuable tool for strengthened patient monitoring, earlier detection, and intervention, potentially improving patient outcomes. NAVOY Acute Kidney Injury is the first CE-marked AKI prediction algorithm for European ICUs, but further validation and prospective studies are necessary to confirm its generalizability and clinical utility.

Figure 1. Intensive care unit (ICU) stays included in the analyses. AKI: acute kidney injury; MIMIC-IV: Medical Information Mart for Intensive Care IV. The training data consisted of 9996 sequences (n=9996 ICU stays) of 4-hour data (AKI, n=2199 sequences and non-AKI, n=7797 sequences). The validation data consisted of 2128 sequences (n=2128 ICU stays) of 4-hour data (AKI, n=468 sequences and non-AKI, n=1660 sequences). The test data consisted of 2105 sequences (n=2105 ICU stays) of 4-hour data (AKI, n=463 sequences and non-AKI, n=1642 sequences) and were only used in the final evaluation of the chosen model.

Figure 2.
Shapley Additive exPlanations (SHAP) values for the hold-out data. Each point on the graph corresponds to the SHAP value for a specific variable and prediction. The red color indicates a high variable value, while blue indicates a low value. A high absolute SHAP value signifies a variable's high importance. A positive SHAP value increases the predicted risk, while a negative SHAP value decreases it. (A) SHAP values produced with input values of each variable from all 4 time points (with t being the last hour of the 4-hour period). (B) SHAP values averaged over the 4-hour period. Example of interpretation: urine output at t (12 hours before acute kidney injury onset) is the most important parameter, as it has the largest absolute SHAP value. The blue color indicates that a low urine output value will increase the predicted risk. Creatinine ratio 12 hours before onset is the second most important parameter, as it has the second largest absolute SHAP value. The red color indicates that a high creatinine ratio value will increase the predicted risk.

Figure 3. Receiver operating characteristic (ROC) curve for the algorithm on hold-out test data, predicting acute kidney injury (AKI) 12 hours before onset. AUROC: area under the ROC curve; FPR: false positive rate; TPR: true positive rate.

Table 2. AKI: acute kidney injury.

Table 1. Patient characteristics of the population for algorithm development and validation (patients with data 12 hours before onset, n=11,484 intensive care unit [ICU] stays). Time from ICU admission to AKI onset (hours). Differences between the AKI and non-AKI cohorts were assessed by Welch t test for numerical variables and by Fisher exact test or chi-square test for categorical variables. Comorbidities are defined by International Statistical Classification of Diseases, ninth revision codes registered during the ICU stay.
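A minimal sketch of the kernel SHAP computation behind Figure 2 follows, assuming a scikit-learn style classifier and a flattened layout of variables by time points. The model, feature names, and data are placeholders rather than the NAVOY pipeline; aggregating each variable's SHAP values across the 4 time points mirrors the per-variable summary in Figure 2B.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
variables = ["urine_output", "creatinine_ratio", "gcs"]      # illustrative subset of predictors
times = ["t-3h", "t-2h", "t-1h", "t"]                        # the 4 hours of input
cols = [f"{v}@{t}" for v in variables for t in times]

X = rng.normal(size=(400, len(cols)))                        # stand-in feature matrix
y = (X[:, cols.index("urine_output@t")] < -0.3).astype(int)  # toy outcome
model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

predict_risk = lambda data: model.predict_proba(data)[:, 1]
explainer = shap.KernelExplainer(predict_risk, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:20], nsamples=200)    # shape: (20, n_features)

# Aggregate each variable's SHAP values over its 4 time points (cf. Figure 2B).
for v in variables:
    idx = [i for i, c in enumerate(cols) if c.startswith(v)]
    print(v, round(float(np.abs(shap_values[:, idx].sum(axis=1)).mean()), 4))
```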
“Domestic Drama,” “Love Killing,” or “Murder”: Does the Framing of Femicides Affect Readers’ Emotional and Cognitive Responses to the Crime? We conducted two framing experiments to test how downplaying femicide frames affect readers’ reactions. Results of Study 1 (Germany, N = 158) indicate that emotional reactions were increased when a femicide was labeled as “murder” compared to “domestic drama.” This effect was strongest among individuals with high hostile sexism. Study 2 (U.S., N = 207), revealed that male compared to female readers perceived a male perpetrator more as a loving person when the crime was labeled as “love killing” compared to “murder.” This tendency was linked to higher victim blaming. We recommend reporting guidelines to overcome the trivialization of femicides. Introduction According to the World Health Organization, about one-third of women worldwide experience physical and/or sexual violence at least once in their lifetime (2017).The majority of murders of women are committed by their intimate partners or ex-partners (Small Arms Survey, 2012).In the newspapers, killings of women by intimate partners are often labeled as "domestic crimes," "family tragedies," "crimes of passion," or even "love killings" (Exner & Thurston, 2009).Such terms represent crimes against women as individual cases and shed light on the personal relationship between perpetrator and victim.As a consequence, media coverage often ignores that misogynist violence in heterosexual partnerships is a global rather than an individual problem (Gillespie et al., 2013;Isaacs, 2016).Over the past decades, research has established a specific term for the structural phenomenon of murders of women: femicide (Russell & Van de Ven, 1976;Russell, 2011).After women's rights activists strongly criticized the media's trivialization of structural sexual and partnership violence against women, some daily newspapers have changed their reporting policies with regard to the labeling of misogynist crimes (e.g., dpa in Germany, see Borgers, 2019).However, this has not yet become common practice, neither in media coverage nor in everyday language (see Bouzerdan & Whitten-Woodring, 2018 for newspaper coverage). It is not yet clear how these different frames of deadly male violence against women affect information processing, emotional reactions, and crime evaluations of media recipients.To test whether conceptual framing affects these variables, we conducted two newspaper framing experiments in Germany and the U.S. in which the same crime was either labeled with a downplaying femicide frame (e.g., "domestic drama") or an adequate criminalistic label (e.g., "murder"). Theoretical Perspectives on Femicide Framing Media framing plays a pivotal role in the social construction of reality (Scheufele, 2004).By using a very particular concept or metaphor to describe an event, situation, or recommended policy, journalists have the power to influence the way media recipients represent the information to which they have been exposed.The criminality of a city, for example, can be described in terms of a "virus" or a "beast" (Thibodeau & Boroditsky, 2011, 2013), fighting cancer can be portrayed as a "journey" or a "battle" (Hendricks et al., 2018), and an interpersonal relationship may be framed as a "war" or a "two-way street" (Robins & Mayer, 2000, experiment 2).All these examples illustrate how different labels for one and the same subject matter can work as a frame and lead to different perceptions and interpretations. 
Following Entman (1993, p. 52), framing means to "select some aspects of a perceived reality and make them more salient in a communicating context, in such a way to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation."According to psychological models of information processing (Anderson, 1996;Fiske, 1982), linguistic frames are able to activate or transform existing schemata, which contain general information about the object of interest as well as on the relations of its subordinate attributes (Brewer & Nakamura, 1984;Crocker et al., 1984;Fiske, 2018).Such attributes are described as slots, which are assumed to be typically filled with default values (Scheufele, 2004). Previous research has shown that even minor linguistic frames such as single concepts or metaphors have a remarkable impact on cognitive (Elmore & Luna-Lucero, 2017;Hendricks et al., 2018;Thibodeau & Boroditsky, 2011, 2013), emotional (Cho & Boster, 2008;Kalmoe, 2014;Kim & Cameron, 2011;Lee & Schwarz, 2014), and behavioral (Barry et al., 2009;Flusberg et al., 2017;Iyengar & Simon, 1993) responses of readers.For instance, participants who read a short vignette in which an interpersonal relationship was framed as a "war," reported higher support for guarded communication within romantic relationships than did participants who read the same information labeled as a "two-way street" (Robins & Mayer, 2000, Experiment 2).In a similar vein, recalling conflicts with one's intimate partner decreased relationship satisfaction when the relationship was previously framed as a "unity" compared to "journey" (Lee & Schwarz, 2014, Experiment 1). Framing is a presentation tool that is often used in media coverage (Entman, 1993;Pan & Kosicki, 1993).Various dominant frames have been established for the reporting of misogynist crimes and femicides during the past decades or even centuries (Meyers, 1996).A qualitative analysis of community and daily newspapers reporting deaths related to domestic violence in Washington State in 1998 by Bullock and Cubert (2002) revealed that domestic murders are often reported with a stronger focus on the male perpetrator's motives than on the female victim's experience and suffering.Especially articles on smaller cases were identified to contain large information gaps about the circumstances of the crime but at the same time often report exculpatory attributions of the offender.In some articles, the perpetrator was even portrayed as a victim in the same case (Bullock & Cubert, 2002, p. 485).A mixed methods study by Anastasio and Costa (2004, Study 1) confirms the finding that female victims of violence are less precisely personalized than males. In addition, experimental data indicate that adding personal information about the victim increases empathy for female victims and decreases victim blaming (Anastasio & Costa, 2004, Study 1).In a similar vein, a content analysis of femicide portrayal of 292 domestic homicide reports by a Florida metropolitan newspaper between 1995 and 2000 showed that female victims are often blamed by the use of negative language, accentuating their relations to other men and highlighting their choices of not reporting former incidents (Lloyd & Ramon, 2017;Taylor, 2009).More recent framing analyses of newspaper reports on deadly domestic violence against women have shown that a high proportion of newspaper articles in the U.S. 
normalizes misogynist crimes as commonplace, isolated incidents or as individual loss of control by the perpetrator (Gillespie et al., 2013;Richards et al., 2011).Such evaluations of misogynist crimes and murders are corroborated by the use of headlines like "domestic drama," "crime of passion," or "love killing" (Exner & Thurston, 2009). Following cognitive models of text comprehension and recall, headlines serve as initial cues, which help to organize and structure incoming information (Bransford & Johnson, 2004;see Lorch, 1989 for an overview).Titles and headlines thus help readers to focus their attention selectively to frame consistent information (Lorch et al., 1993).Accordingly, framing deadly domestic violence initially as a "domestic drama" or "crime of passion" animates readers to focus more strongly on the shared guilt between perpetrator and victim compared to a neutral or criminological term of the crime, such as "murder" or "homicide."Such framing is a very important aspect with regard to the social manifestation and downplaying of violence against women. Blaming victims for being responsible for the crimes they experience has been found to notably decrease empathy for them (Sinclair & Bourne, 1998;Sprankle et al., 2018).We, therefore, expect that the use of downplaying frames of deadly domestic violence against women will affect readers' perceptions of and emotional reactions toward the crime. Moderating Factors Media recipients are no "blank slates."Usually, we have certain political attitudes and personal preferences that influence the way we process information (Boyer et al., 2022;Leeper & Slothuus, 2014).In the context of misogynist violence, especially participants' sexism has been repeatedly confirmed to play an important role (Abrams et al., 2003;Chapleau et al., 2007;Masser et al., 2006;Valor-Segura et al., 2011).According to Glick and Fiske's (1996) model of ambivalent sexism, sexist attitudes can be classified into two different but complementary forms of sexism: Benevolent and hostile.While support for traditional gender roles and emphasis on women's warmth and worthiness of protection are expressions of benevolent sexism, hostility, and antipathy-especially toward norm-violating women-are described as hostile sexism (HS).Both forms of sexism have been found to be positively correlated (Glick & Fiske, 1996;Glick et al., 2000).Former research in the field of violence against women has demonstrated that hostile-but not benevolent-sexism is strongly related to an increased acceptance of rape myths (Chapleau et al., 2007;Masser et al., 2006) and victim blaming (Valor-Segura et al., 2011) as well as higher proclivity of acquaintance rape (Abrams et al., 2003).Likewise, social dominance orientation (SDO, i.e., a personal preference for unequal, hierarchical power relations between social groups) has been found to be similarly strongly related to the endorsement of violence against women (Berke & Zeichner, 2016;Rollero et al., 2021).More precisely, Berke and Zeichner (2016) found that men's SDO was significantly associated with a higher number and intensity of electric shocks given to a female competitor in a fictitious reaction time task.We therefore aim to explore the moderating role of HS and SDO on the framing misogynist crimes within both male and female participants in Study 1. 
Study 1 Study 1 was embedded in the German media context.In autumn 2019, a heated debate arose in Germany about stopping the trivialization of femicides through the use of misleading labels (Borgers, 2019).After women's rights organizations and popular feminists in Germany had strongly criticized the use of mitigating labels of femicides, such as "domestic drama," the German Press Agency's chair, Froben Homburger, agreed with this movement as follows: "Drama and tragedy bring murder and homicide close to a fateful event in which the roles of victim and perpetrator seem to blur: Isn't the perpetrator a victim (of a broken relationship, for example)-and therefore: Does the victim not also have a share in the crime?"(Homburger on November 14, 2019).This tweet was followed by his decision that the German Press Agency (dpa) will in future refrain from using terms such as "family tragedy" or "domestic drama." However, both police authorities and smaller-to-larger daily newspapers continued to make use of these terms.A LexisNexis search for the term "domestic drama" in German newspaper outlets and press releases yielded more than 100 hits in about a half year between November 14, 2019 and July 1, 2020.Furthermore, they are (still) part of the everyday language.In accordance with linguistic framing theory (Entman, 1993;Scheufele, 2004), we expect that framing a typical case of male-perpetrated deadly domestic violence against a female victim as a "domestic drama" compared to a "murder" decreases emotional reactions to the crime (Hypothesis 1a), leads to exculpatory attributions of the perpetrator (Hypothesis 1b) and to lower levels of suggested penalty for the perpetrator (Hypothesis 1c).In order to test our three-part hypothesis, we conducted a newspaper framing experiment in which a typical case of homicide was either labeled as "domestic drama" (German: "Beziehungsdrama") or as "murder" ("Mord") more than one month after Froben Homburger's tweet.In addition to the directional main effect hypotheses, we also explored the moderating effects of HS and SDO in Study 1. 
Method Participants.Based on an a priori sample prediction for medium framing effects (d = .50,1−β = .80,α < .05,n predicted = 128) using G*Power (Version 3.1 by Faul et al., 2007), we conducted an online experiment with a planned sample size of 150 participants to account for potential dropout.The design of the study, all measurements, as well as the envisaged sample size have been preregistered at the open science forum (OSF, see https://osf.io/ef7u8).Data were collected from 26 to 27 December 2019 using a German crowd-sourcing platform (www.clickworker.de)and has been published on the OSF for reasons of data transparency (https://osf.io/4dkrh/).Overall, 241 persons visited the survey link.Seventy-six participants (42 male; 29 female; 5 diverse; M age = 31.60,SD = 11.78,range = 18-72 years) did not correctly answer the attention check on the first survey page and were then dismissed from further interviewing.Seven participants (3 male, 4 female; M age = 31.14,SD = 13.80,range = 19-59) stopped the interview before the end.The remaining sample includes 158 participants (86 male; 64 female; 8 diverse; M age = 34.06,SD = 12.02, range = 18-74 years).On average, participants had higher school certificates than the German standard population (29.75% secondary school certification; 13.3% vocational baccalaureate diploma; 26.6% high school certificate; 25.3% university degree).Study participation was rewarded with USD 0.60 for a maximum duration of 5 min. Procedure, materials, and measures.The study was designated as a media perception survey in which participants read and evaluated a short newspaper clipping.Data collection was conducted in accordance with the ethical standards of the German Association of Psychology (DGPS) and with the 1964 Helsinki Declaration, and the study was approved by the local ethics committee (application number LEK-233).Participants first agreed with the privacy statement and the informed consent of the study.Then, age, gender, and educational attainment were assessed.Thereafter, participants were randomly assigned to one of the two experimental conditions differentiated by the framing of the crime as "Domestic drama" [German: "Beziehungsdrama," hereinafter referred to as drama condition] or "Murder of wife" [German: "Mord an Ehefrau," hereinafter referred to as murder condition].The full newspaper clipping read: Domestic drama [Murder of wife] Women stabbed to death by her husband Now there is certainty.Karin S. from Lohhausen, missing since Saturday, November 23, is dead.The 48-year-old husband of Karin S. was arrested on Monday.He is accused of having killed [murdered] his wife in the night from Saturday to Sunday.For technical reasons, the police are not giving any details about the exact course of events.However, it is clear that the wife was so severely injured by her husband with a knife that she died at the crime scene immediately after the crime.A friend of the killed woman stated that she had already told of assaults by her husband in the past.The investigators are trying to clarify the closer circumstances of this crime.The husband is now in custody for clarification of the exact circumstances.In the meantime, the examining judge has issued an arrest warrant. 
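The a priori sample-size calculation at the start of this Method section (d = .50, 1 − β = .80, α < .05, n predicted = 128) was carried out in G*Power; the same figure can be reproduced with statsmodels as a cross-check, assuming the parameters refer to a two-sided, two-sample comparison with equal group sizes.

```python
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(effect_size=0.50, power=0.80,
                                          alpha=0.05, ratio=1.0,
                                          alternative='two-sided')
n_per_group = math.ceil(n_per_group)
print(n_per_group, "per group,", 2 * n_per_group, "in total")   # 64 per group, 128 in total
```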
Each article was accompanied by the same photo of a flagged, empty crime scene.The image was royalty free and in the public domain on pixaby (www.pixabay.com/de/).(Both original "news" reports are available in Figure S1 in the online supplement materials.)The content and design of the fictitious articles were based on existing reports of average homicides of women.With the information given, the crime could not be clearly classified as an act of passion or planned action.After the treatment, participants were asked about their emotional reactions and evaluated the circumstances of the crime as well as their recommendation of imprisonment.Subsequently, scales of HS and SDO were completed.At the end of the study, participants were debriefed and thanked for their participation. As dependent variables, emotional reactions, evaluations of the crime circumstances as well as penalty suggestions for the perpetrator were assessed.Emotional reactions to the crime were measured immediately after the treatment.Participants were asked how they feel about the crime, which was described in the newspaper article.Answers were given on four 9-point Likert-type scales with the stem: "The described crime…" 1 (does not go under my skin) to 9 (goes very much under my skin); 1 (does not make me angry at all) to 9 (makes me very angry); 1 (does not touch me at all) to 9 (touches me very deeply); and 1 (does not concern me at all) to 9 (makes me very concerned) (α = .92). Then, participants evaluated the circumstances of the crime on four separate items.The items represent independent aspects of the German Criminal Code that are relevant for the assessment of a crime.Following the German principles of sentencing, the severity of a crime is assessed according to the circumstances in which the crime took place (see Section 46, Paragraph 2, German Criminal Code).These include the questions of whether an act was planned or not ("The crime was probably planned"), whether the act was committed under the influence of emotions ("The crime was probably an emotional act"), the presence of mitigating circumstances such as alcohol or drug abuse ("There are probably mitigating circumstances which exonerate the husband"), and aspects of the perpetrator-victim interactions such as deliberate provocations of the perpetrator ("The responsibility for the crime probably lies not only on the side of the husband").Answers were given on a 7-point Likert scale ranging from 1 (completely disagree) to 7 (completely agree).In addition, participants were able to indicate the motive for the crime as an open answer.Answers were coded into 7 categories (1 = jealousy, 2 = infidelity of the wife, 3 = dispute and loss of control, 4 = mental disorder, 5 = misogyny, 6 = alcohol/drug abuse, 7 = other). The suggested penalty level for the perpetrator was measured by asking participants to indicate how many years they would sentence the perpetrator into prison.Responses ranged from 0 to 100 years (M = 23.09,SD = 21.41,mdn = 15).If someone did not approve of jail time or wanted to make additional recommendations, he/she had the possibility to enter another measure (n = 21, 13.29%). Finally, a German 11-item version of the HS scale (Eckes & Six-Materna, 1999) was assessed as a potential moderator (e.g., "What feminists really want is for women to have more power than men") using ratings from 1 (completely disagree) to 7 = (completely agree).Scores were averaged across items such that a higher score indicated a stronger endorsement of HS (α = .93). 
In addition, SDO was measured with a German translation of the 16-item version of the SDO scale (Pratto et al., 1994, translated by Six et al., 2001, e.g., "To get ahead in life, it is sometimes necessary to stand on others").Responses were given on a 5-point Likert scale from 1 (completely disagree) to 5 (completely agree).Scores were averaged across items so that a higher overall score indicates a greater endorsement of social dominance (α = .88). Results Framing effects were tested by a one-way MANOVA (using SPSS 25) with framing condition (drama vs. murder) as the independent factor and emotional reactions, the four evaluations of the circumstances of the crime, and suggested penalty level as the dependent variables.The one-way MANOVA found no significant differences between the framing conditions on the combined dependent variables, F(6, 146) = 1.11 p = .359,η p 2 = .04,Wilk's λ = .956,indicating the absence of a global treatment effect on the majority of dependent variables.However, looking at the univariate tests and in line with Hypothesis 1a, readers' emotional reactions toward the crime were stronger when the crime was labeled as murder compared to domestic drama (see Table 1).But, the results of the univariate tests do not confirm Hypothesis 1b.There were no significant framing effects on participants' evaluations of the circumstances of the crime.Nor was there confirmation of Hypothesis 1c.Results indicate no meaningful difference between the framing conditions on suggested penalty levels. We conducted additional exploratory mediation models to test whether penalty levels were mediated by participants' emotional reactions, but this was not the case (see Figure S2 in the online supplement materials).The statements on possible motives for the crime were also unaffected by treatment, χ 2 (6, n = 151) = 10.62,p = .101.In both conditions, jealousy (53.6% of the time) was mentioned as the most common motive for the crime. We separately tested both moderators using Model 1, respectively.The experimental framing was included as an effect-coded categorical variable (−1 = drama, +1 = murder).Participants' age, gender (−1 = male, +1 = female), and education level (as ordinal variable) were included as covariates.HS and SDO were added as meancentered moderators, respectively.Table 2 presents the moderation model for HS.As there was no moderating effect of SDO on the effect of framing on participants' emotional reactions, results of this moderation model are presented in Table S1 in the online supplement materials. As shown in Table 2 and Figure 1, the framing effect was especially pronounced among participants with high compared to low HS.Participants with high HS reported stronger emotional reactions toward the crime when it was labeled as "murder" compared to "domestic drama."In this vein, results of an analysis of the conditional effects reported in Table 3 revealed that the treatment effect was strongest in this group, whereas there was no significant treatment effect among participants with low or medium HS levels.In addition, results of the regression model indicate a significant effect of participants' gender on the emotional reactions toward the crime with women reporting stronger emotional reactions than men. 
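The Model 1 moderation analysis described above, with an effect-coded frame, a mean-centered hostile sexism score, their product term, and age, gender, and education as covariates, corresponds to an ordinary least-squares regression with an interaction. A sketch with simulated data follows; the variable names, coefficients, and sample are illustrative, not the study's data file.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 158
df = pd.DataFrame({
    "frame": rng.choice([-1, 1], n),            # -1 = drama, +1 = murder (effect coded)
    "gender": rng.choice([-1, 1], n),           # -1 = male, +1 = female (effect coded)
    "age": rng.integers(18, 75, n),
    "education": rng.integers(1, 5, n),
    "hs": rng.uniform(1, 7, n),
})
df["hs_c"] = df["hs"] - df["hs"].mean()         # mean-centered moderator
# Toy outcome built so the framing effect grows with hostile sexism (the reported pattern).
df["emotion"] = (6 + 0.3 * df["frame"] + 0.4 * df["gender"]
                 + 0.35 * df["frame"] * df["hs_c"] + rng.normal(0, 1, n))

fit = smf.ols("emotion ~ frame * hs_c + gender + age + education", data=df).fit()
print(fit.summary().tables[1])                  # 'frame:hs_c' is the moderation term
```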
Discussion In Study 1, we aimed to test whether using a downplaying femicide frame (i.e., domestic drama) compared to an adequate crime label (i.e., murder) when reporting a typical case of deadly domestic violence negatively influences recipients' emotional reactions toward women, perceived circumstances of the crime, and support for strict penalties for the perpetrator.In addition, we exploratively tested the moderating influence of participants' HS and SDO on the effect of framing.In general, the results of Study 1 indicate limited evidence for an effect of framing on the variables under research.We found that participants' emotional reactions toward the crime could be increased by the use of the adequate crime label compared to the downplaying frame.But no effect of framing on the perception of the crime circumstances or the suggested penalty levels was found.Nevertheless, the exploratory moderation analyses indicated that framing effects may be conditioned by recipient characteristics, as we found that especially participants with high HS levels were positively affected in their emotional reactions by the use of the adequate crime label compared to the downplaying frame.This was not the case for participants with high SDO levels.The study provides some first insights into how downplaying frames of misogynist crimes can affect readers' reactions to the crime.However, the study is accompanied by some limitations.First, the expected effect size which was chosen for the a priori power analysis might have been too large (see Amsalem & Zoizner, 2022 for a meta-analytic view on framing effects).Following studies should therefore rather calculate with small effect sizes.In addition, Study 1 aimed to address too many questions at once by focusing on too many variables.A central variable in the field of research on violence against women is victim blaming.This variable is particularly associated with a positive evaluation of misogynistic violence (Sinclair & Bourne, 1998;Sprankle et al., 2018;Valor-Segura et al., 2011).Therefore, it would be interesting for further research to examine the effects of framing on victim blaming in more detail.Another weakness is, that due to the unequal representation of men and women in the sample, it was not possible to test for systematic gender effects on framing.As HS has been identified as a relevant moderator for the framing effect, it is reasonable that participant gender, as a prior variable, also has an impact on framing.A last concern is that we only tested our expectations in one country context.Femicides and the media framing of femicides is a global problem (Gillespie et al., 2013;Isaacs, 2016), so embedding a similar study design in another country context would help to generalize the findings.This is why we aimed for a second study in which we reduced Note.Frame coding: −1 = drama; 1 = murder, low = −1 SD, high = +1SD. our set of dependent variables, systematically tested the moderating effect of participant gender on the impact of different femicide frames, and moved to another country and language context. 
Testing Participants' Gender as A Moderating Factor Previous studies have shown a consistent gender effect in the evaluation of violence with men evaluating violence more positively than women (Anastasio & Costa, 2004, Study 1;Chapleau et al., 2007;Romero-Sánchez et al., 2017).In the context of gender-based violence, this begins with the enjoyment of sexist humor (Greenwood & Gautam, 2020;Romero-Sánchez et al., 2017;Ryan & Kanjorski, 1998), a higher victim-blaming tendency (Black & Gold, 2008;Furnham & Boston, 1996;Sims et al., 2007;Ståhl et al., 2010), and ends with the endorsement of actual violence against women (e.g., Furnham & Boston, 1996;Rickert & Wiemann, 1998;World Health Organization, 2017).This is largely due to the different socialization of boys and girls, which typically goes hand in hand with a more positive appraisal of violence among boys (e.g., Simons et al., 1998).In addition, gender differences in the evaluation of violence against women have been also found to be grounded in a lower identification with female victims among men (see van der Bruggen & Grubb, 2014 for a discussion).Women, in contrast, identify more strongly with victims of misogynistic violence (Davies et al., 2009;Donovan, 2007), thus they are expected to evaluate violence against women, in general, more negatively than men and should be less susceptible to subtle cues such as the linguistic framing.Men, on the other hand, have a higher psychological distance from female victims.Therefore, it can be assumed that they are more likely to be stimulated to change their reactions toward misogynist violence via subtle cues such as framing.To systematically test the effect of participants' gender on femicide framing, a second study was conducted, in which participants' gender was varied as a quasi-experimental factor. Study 2 The aim of Study 2 was to conduct a similar framing experiment as in Study 1 but in another country context and to examine participants' gender as a potential moderating factor of the framing effect.In addition, this study focused on only two central variables of interest: individuals' perception of the perpetrator and victim blaming.The design of Study 2 was mainly adapted from Study 1.One big deviation from Study 1 was that we presented participants with different conceptual frames, as we moved to another language context.This time, participants read the same information about a typical case of femicide either labeled as "love killing" or "murder."In line with Hypothesis 1c of Study 1, we expected that participants' perceptions of the perpetrator are increased and that their victim blaming becomes stronger if the crime is labeled as "love killing" compared to "murder."In addition, regarding the moderating effect of participants' gender, we expect this effect of framing to be pronounced among men compared to women (Hypothesis 2). 
Method Participants.Based on the results of Study 1, we calculated with a smaller effect size in our a priori power analysis (d = .30,1−β = .80,α < .05,using G*Power, Version 3.1 by Faul et al., 2007).Results recommended sample size of N predicted = 202 participants to identify rather small framing effects and interactions in 2 × 2 MANOVA with two response variables.To allow for potential dropout, we aimed to collect data from 250 individuals.Participants took part via Amazon MTurk (https://www.mturk.com).Invitation mails were weighted by participants' gender to guarantee a balanced representation of male and female participants across both framing conditions.Overall, 264 participants began the study (138 male; 122 female; 4 diverse; M age = 39.80,SD = 13.27,range = 19-77 years).However, 57 participants (34 male; 22 female; 1 diverse; M age = 36.84,SD = 11.41,range = 23-67 years) were not forwarded to the treatment page as they failed the previous attention check.The final sample consisted of 207 participants (104 male; 100 female; 3 diverse; M age = 40.61,SD = 13.65,range = 19-77 years).On average, participants had higher school certificates than the German standard population (14.9% high school certificate; 12.6% vocational secondary certification; 4.3% other secondary school-leaving certificate; 60.41% university degree).Study participation was rewarded with USD 0.50 for a maximum duration of 4 min.The data from Study 2 has been made publicly available on the OSF (https://osf.io/g6yht). Procedure, materials, and measures.Participants first agreed with the informed consent of the study and were then asked to answer on questions about sociodemographic information.In this context, we used the same attention check variable as in Study 1.Then, participants were asked to carefully read a small text on the next survey page.The treatment text read as follows: Love killing [Murder]: Woman killed [murdered] by husband A woman was allegedly killed [murdered] by her husband in her house after both had a violent argument.Before the incident she told him that she wanted to separate.Susan Watson, 42, was found on her kitchen floor in Suffiled after the police were called by a concerned neighbor at 12:10 p.m. on Saturday and she was having difficulty breathing.She died on the spot. This time, we added no photo to the text.On the following two pages, participants were asked two questions about the crime.The first question was directly related to the mental representation of the perpetrator and measured participants' perception of "how much the perpetrator has loved his wife" (loving perpetrator).Answers were given on a 7-point Likert scale from 1 (not at all) to 7 (very much).The next item was adapted from Anastasio and Costa (2004, Study 2) and assessed participants' victim blaming by asking "how much (they) think the victim was responsible for the incident," measured on a 7-point Likert scale reaching from "not at all responsible" to "mainly responsible." Results A 2 × 2 factorial MANOVA with frame (love killing vs. murder) and participant gender (male vs. 
female) as independent factors was conducted.Contrary to Hypothesis 1c, we found no significant differences between the framing conditions on the combined dependent variable score, F(2, 198) = 0.23, p = .790,η p 2 = .002,Wilk's λ = .998,indicating the absence of a global framing effect on participants' perceptions of the perpetrator and victim blaming.However, there was a significant main effect of participant gender, F(2, 198) = 6.80, p = .001,η p 2 = .064,Wilk's λ = .936,and an interaction effect between participants' gender and framing, F(2, 198) = 5.54, p < .01,η p 2 = .053,Wilk's λ = .947.As reported in Table 4, male participants reported to a higher degree that the perpetrator loved the victim and showed a generally stronger victim-blaming tendency than female participants.Supporting Hypothesis 2, the interaction between participant gender and framing revealed that male participants were more likely to indicate that they perceived the perpetrator has loved the victim when they were presented with the "love killing" compared to the "murder" frame, whereas this was not the case among female participants.However, the interaction effect between participant gender and framing was non-significant with regard to participants' degree of victim blaming. An explorative test of a moderated mediation.As we found that the framing significantly affected male participants' perceptions of the perpetrator, we exploratively tested whether the framing of the crime had an indirect effect on participants' victim-blaming tendency via perceiving the perpetrator as a loving person conditioned by participants' gender.Therefore, we exploratively conducted a moderated mediation model using SPSS PROCESS (Version 3.2.01 by Hayes, 2018, Model 7).The experimental framing was included as an effect-coded categorical predictor (−1 = love killing, +1 = murder).Participant gender was added as effect-coded moderator (−1 = male, +1 = female), and similar to Study 1, participants' age and education level (as ordinal variable) were included as covariates.Perceiving the perpetrator as a loving person was treated as a mediator, and victim blaming served as a dependent variable.Results of the moderated mediation model are reported in Tables 5 and 6.There was indeed a significant mediational link between the framing of the crime, participants' perpetrator perceptions, and victim blaming, which was moderated by participants' gender.In line with the findings of the MANOVA, there was no direct effect of the treatment on victim blaming c' = −0.001,SE = .09,|t| = 0.01, p = .991,95% CI [−0.19, 0.19].But with regard to the first path of the mediational link (see Table 5), we found a significant interaction between participant gender and framing on their perceptions of the perpetrator, which was in the second path model positively associated with victim blaming (see Table 6), b = 0.22, SE = .06,|t| = 3.92, p < .001,95% CI [0.11,0.34].A test of conditional indirect effects supported the assumption that the effect of framing on victim blaming was mediated via perceiving the perpetrator as a loving person among Discussion The results of Study 2 were to some extent similar to those of Study 1, as we found limited evidence for a global effect of framing on the dependent variables.However, in Study 2, we also aimed to systematically test the moderating effect of participant gender on the impact of framing.In doing so, we found support for our assumption that especially male participants-similarly to those with high HS-were more strongly 
affected by the framing.More precisely, we found that male participants indicated to a higher extent that the perpetrator must have loved the victim when the crime was labeled as "love killing" compared to "murder."However, no direct effect of the treatment or the interaction between treatment and participant gender was found on victim blaming.But, results of the moderated mediation analysis showed that the downplaying femicide frame increased male participants' perception that the perpetrator had loved the victim (compared to the adequate crime label) and thereby also amplified their likelihood of victim blaming, whereas women were unaffected by the framing.Taken together, the results of Study 2 echo those of Study 1 and indicate rather limited main effects of framing on media recipients' perceptions of and reactions toward deadly domestic violence.However, a closer look shows that certain target groups that are of great importance for overcoming violence against women (i.e., men and people with hostile sexist attitudes) can be positively influenced by an adequate media framing of femicides.Therefore, the effects of framing should not be underestimated. General Discussion We conducted two media framing experiments to test whether framing a typical case of deadly domestic violence against a female victim with either a downplaying frame (e.g., "domestic drama") or an adequate crime label (e.g., "murder") affects readers' emotional reactions toward the crime, perceptions of the perpetrator and the circumstances, suggested penalty levels, and victim blaming.Supporting former speculations on the use of downplaying femicide frames (Gillespie et al., 2013;Richards et al., 2011;Taylor, 2009), emotional reactions to the crime were increased (Study 1) and male participants' perceptions of the perpetrator as a "loving person" could be decreased (Study 2) when the crime was labeled with an adequate crime label compared to a downplaying frame.However, we did not find support for our hypothesis that framing influences individual perceptions of the crime circumstances, the perpetrator's motives, or the suggested quantum of penalty.These results are contradictory to our expectations, but they are in line with previous research indicating that media framing and headlining predominantly affect emotion-based routes of information processing (Berry et al., 2007;Cho et al., 2003).In addition, framing effects are in general lower with regard to behavior-or policy-related variables compared to emotional and attitudinal variables (Amsalem & Zoizner, 2022;Reijnierse et al., 2015).An important finding of both studies is that especially target groups which are rather known for increased support of gender-based violence, that is, participants with high HS (Chapleau et al., 2007;Masser et al., 2006;Valor-Segura et al., 2011) and/or men (Abrams et al., 2003;Anastasio & Costa, 2004, Study 1;Chapleau et al., 2007;Masser et al., 2006;Romero-Sánchez et al., 2017), were positively affected by the use of adequate crime labels instead of downplaying frames.We think that this finding is supported by evidence on the heuristic-systematic model of information processing (HSM, e.g., Chaiken & Ledgerwood, 2011).According to the HSM, a central variable that is relevant to the persuasive power of a message is personal involvement (Axsom et al., 1987;Chaiken, 1980;Ryu & Kim, 2015).Chaiken (1980, Study 1) found that recipients with high personal involvement in a topic were more strongly convinced by strong than by weak messages and 
had a lower susceptibility to heuristic cues such as communicator likeability.In a similar vein, Axsom et al. (1987) showed that heuristic acoustic cues (e.g., enthusiastic audience) affected message persuasion of participants with low personal involvement but not of those with high involvement.With regard to the topic of misogynist violence, it is likely that especially non-sexist egalitarian women and men have a higher personal involvement in the issue than sexist people (see also Anastasio & Costa, 2004, Study 2 for general gender effects) and would be thus less affected by heuristic cues such as headline frames.The fact that men and hostile sexists can be positively affected by headline framing makes clear how important adequate crime labeling is in everyday life-especially when a culture of "Machismo" is present (Castillo Rojas, 2019;Yodanis, 2004). Limitations and Future Research Directions Even though our studies provide first insights into framing effects of different crime labels in the field of misogynist violence, there are some limitations of the present research.First, both studies should be seen as initial attempts to investigate conceptual framing effects on the evaluation of femicides.In Study 1, we used a broad set of dependent variables and in Study 2 a very reduced one.Further research should therefore aim for a stepwise test of different femicide frames on different variables with a previously precisely planned test of moderating variables.A second concern of the present study is that we used a rather weak experimental manipulation in which the variation of the frame was mainly manipulated by the variation of a single word.From a strict methodological position this can be seen as a proper experimental manipulation, but with regard to the everyday confrontation with newspaper articles, the question arises whether this practice is ecologically valid.As downplaying frames of misogynist violence in real life may be associated with other text features such as information gaps (Anastasio & Costa, 2004), it would be also interesting to test the interaction effect between frame and the degree of context information given in the newspaper article. Figure 1 . Figure 1.Moderating Effect of Hostile Sexism on the Relationship Between Framing and Emotional Reactions. Table 1 . One-Way MANOVA Comparing Framing Conditions Across the Dependent Variables (Study 1). Table 2 . Participants' Hostile Sexism as a Moderator of the Relationship Between Framing and Emotional Reactions. Table 3 . Conditional Effects of Article Framing at Different Moderator Levels. Table 4 . One-Way MANOVA Comparing Framing Conditions Across the Dependent Variables (Study 2). Table 5 . First Stage Path of the Moderated Mediation Model of Study 2 Predicting Participants' Perceptions of the Perpetrator as a Loving Person.
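The first-stage moderated mediation reported for Study 2 (PROCESS Model 7) can be approximated outside SPSS with two regressions (a moderated a-path model and a b-path model) plus a bootstrap of the conditional indirect effect. The sketch below uses simulated data whose coefficients merely echo the reported pattern; it is not the study's data or the exact PROCESS implementation.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n = 207
df = pd.DataFrame({
    "frame": rng.choice([-1, 1], n),     # -1 = love killing, +1 = murder (effect coded)
    "gender": rng.choice([-1, 1], n),    # -1 = male, +1 = female (effect coded)
})
# Toy data echoing the reported pattern: the frame shifts "loving perpetrator" perceptions
# for men only, and those perceptions in turn predict victim blaming.
df["loving"] = 4 - 0.4 * df["frame"] * (df["gender"] == -1) + rng.normal(0, 1, n)
df["blame"] = 2 + 0.25 * df["loving"] + rng.normal(0, 1, n)

def conditional_indirect(data, w):
    a_fit = smf.ols("loving ~ frame * gender", data=data).fit()      # moderated a-path
    b_fit = smf.ols("blame ~ loving + frame", data=data).fit()       # b-path and direct effect
    a_w = a_fit.params["frame"] + a_fit.params["frame:gender"] * w   # conditional a at gender = w
    return a_w * b_fit.params["loving"]

boot = [conditional_indirect(df.sample(frac=1, replace=True, random_state=i), w=-1)
        for i in range(1000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"conditional indirect effect (men): {conditional_indirect(df, -1):.3f}, "
      f"95% bootstrap CI [{lo:.3f}, {hi:.3f}]")
```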
Thermodynamic properties of finite binary strings Thermodynamic properties such as temperature, pressure, and internal energy have been defined for finite binary strings from equilibrium distribution of a chosen computable measure. It is demonstrated a binary string can be associated with one-dimensional gas of quasi-particles of certain mass, momentum, and energy. , is a string C n of M bits. The value C n is the sum of set bits in string C n . We take C n values as the observable measure of the ensemble C, and N as the total number of observations. Following are the associated measures and properties: , where k is the number of set bits in string B 2) , where N i is the number of occurrences of i th distinct value C i in the ensemble; k is the number of set bits in string B. From central limit theorem we expect the distribution N i (C i ) for random strings to converge to normal distribution as length M increases: Note, that factor 2 in front of N × 2 above and in property 4) appears because the same C i value can be realized with k and M-k set bits in string B. 2. The same text compressed with 'gzip -9', 3kB, green color dots. 4. Random string produced as output from /dev/urandom on UNIX, 16kB, purple color dots. The N i (C i ) distributions are shown as dots. The corresponding normal distributions calculated using (2) are plotted as lines. The normal distribution on the graph is virtually identical to the adjusted binomial (3) for all four cases. The graphs demonstrate N i (C i ) distribution for random string and compressed string is well approximated by normal distribution. We consider such distribution as equilibrium state of ensemble C. The N i (C i ) distribution for non-random strings, such as English text and UNIX executable file differ significantly from corresponding normal distribution. We consider such ensemble C to be in non-equilibrium state. As follows from (2) the equilibrium distribution is provided by The equilibrium distribution (4) gives the definition of energy levels can be considered particle momentum, and M as the mass of the particle. The "particles" here are the strings C n in ensemble defined by (1). The equilibrium temperature is thus provided by . The average per particle internal energy can then be calculated as , and average in bits per particle entropy as function defined, for example lgamma(x) in C. Therefore the calculation of entropy from (8) and internal energy from (7) is straightforward given known distribution N i (C i ) obtained from (1), for arbitrary binary string B of finite length. Calculation of thermodynamic quantities for equilibrium state can be done analytically, based on partition function, known from (4) The average equilibrium internal energy per particle is calculated using (9) as ( ) This result is consistent with average energy per particle in one-dimensional Maxwell-Boltzmann gas. The average equilibrium entropy in bits per particle is Other thermodynamic relations follow; S, U, and F are the entropy in nats, internal energy, and Helmholtz free energy for the whole ensemble: Table 1 presents results of computation of average internal energy and entropy in bits per particle for four ensembles C built from four strings B used for graphs on Figure 1. Computations were performed using (7) and (8). The corresponding equilibrium average internal energy and entropy have been calculated using (10) Table 1 The results of calculations show match of values eq U and eq micro S for compressed file and random file with (13) to within 0.01%. 
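The construction of the ensemble C and of the distribution N_i(C_i) can be sketched in a few lines. The sketch below assumes consecutive non-overlapping M-bit blocks of B (the precise ensemble definition in Eq. (1) may differ), and it reports the empirical Shannon entropy of the block-sum distribution next to the differential entropy of its normal approximation N(M/2, M/4); it illustrates the procedure rather than reproducing the paper's Eqs. (7)-(13).

```python
import math
import os
from collections import Counter

def ensemble_distribution(bits: str, M: int):
    """Sum of set bits in each consecutive M-bit block of B (assumed ensemble construction)."""
    values = [bits[i:i + M].count("1") for i in range(0, len(bits) - M + 1, M)]
    return Counter(values), len(values)

def empirical_entropy_bits(counts, n):
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def normal_entropy_bits(M):
    """Differential entropy of the approximating N(M/2, M/4), in bits."""
    return 0.5 * math.log2(2 * math.pi * math.e * (M / 4))

B = "".join(f"{byte:08b}" for byte in os.urandom(16384))   # random test string, as in case 4
counts, n = ensemble_distribution(B, M=64)
print("observations:", n)
print("empirical entropy:", round(empirical_entropy_bits(counts, n), 3), "bits")
print("normal benchmark :", round(normal_entropy_bits(64), 3), "bits")
```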
The presented model can also be applied to a pair of different source strings (A, B), with indices m = (i − 1) mod M_A and n = (i − 1) mod M_B.
NQO1 and NQO2 Regulation of Humoral Immunity and Autoimmunity* NAD(P)H:quinone oxidoreductase 1 (NQO1) and NRH:quinone oxidoreductase 2 (NQO2) are cytosolic enzymes that catalyze metabolic reduction of quinones and derivatives. NQO1-null and NQO2-null mice were generated that showed decreased lymphocytes in peripheral blood, myeloid hyperplasia, and increased sensitivity to skin carcinogenesis. In this report, we investigated the in vivo role of NQO1 and NQO2 in immune response and autoimmunity. Both NQO1-null and NQO2-null mice showed decreased B-cells in blood, lower germinal center response, altered B cell homing, and impaired primary and secondary immune responses. NQO1-null and NQO2-null mice also showed susceptibility to autoimmune disease as revealed by decreased apoptosis in thymocytes and pre-disposition to collagen-induced arthritis. Further experiments showed accumulation of NADH and NRH, cofactors for NQO1 and NQO2, indicating altered intracellular redox status. The studies also demonstrated decreased expression and lack of activation of immune-related factor NF-κB. Microarray analysis showed altered chemokines and chemokine receptors. These results suggest that the loss of NQO1 and NQO2 leads to altered intracellular redox status, decreased expression and activation of NF-κB, and altered chemokines. The results led to the conclusion that NQO1 and NQO2 are endogenous factors in the regulation of immune response and autoimmunity. The roles of genetic factors in immune deficiency and autoimmune diseases have been recognized for decades (17). B cells play a key role in regulation of immune system. B cells produce antibodies, provide support to other mononuclear cells, and contribute directly to inflammatory pathways (18). Impaired B cell production, maturation, homing, and activation are known to lead to defective immune response (19). Dysfunctional immune response and impaired apoptosis in T cells have been implicated in many immunological abnormalities including autoimmune lymphoproliferative syndrome (18,20,21). In this report, we investigated the in vivo role of NQO1 and NQO2 in immune response and autoimmunity. Both NQO1null mice as well as NQO2-null mice demonstrated lower B cells in peripheral blood, decreased germinal center response, altered B cell homing, and impaired antibody responses. NQO1-null and NQO2-null mice also showed increased susceptibility to autoimmune disease as revealed by decreased apoptosis in thymocyte and predisposition to collagen-induced arthritis. The loss of NQO1 and NQO2 led to accumulation of NADH and NRH that altered intracellular redox status. The studies also demonstrated decreased expression and lack of activation of NF-B and altered chemokines and chemokine receptors. These results suggest that the loss of NQO1 and NQO2 leads to altered intracellular redox status that results in decreased expression and lack of activation of NF-B. This leads to B-cell deficiency and alterations in the homing of B cells and impaired humoral immune response and autoimmunity. labeled anti-CD19, 2.5 l of PE-labeled anti-CD4, and 2.5 of l of CyChrom-labeled anti-CD8 antibodies gently vortexed and incubated on ice in the dark for 30 min. Red blood corpuscles were hemolyzed and fixed using Coulter Q-prep and analyzed using a Coulter EPICS XL-MCL flow cytometer. Femurs were cut at both ends. Bone marrow was flushed with sterile cold PBS. After two PBS washes, the cells were suspended in annexin binding buffer to a concentration of 10 ϫ 10 6 cells/ml. 
Spleen cells were suspended in cold PBS using the rough surface of glass slides. Red blood cells were lysed using red blood cell lysis buffer containing 15.5 mM NH4Cl, 1 mM KHCO3, and 0.001 mM EDTA. Cells were suspended in annexin binding buffer to a concentration of 10 × 10⁶ cells/ml. Thymocytes were obtained from the thymus and suspended in cold PBS using the rough surface of glass slides. Cells were then suspended in annexin binding buffer to a concentration of 10 × 10⁶ cells/ml. 100 µl of cell suspension (bone marrow, spleen, or thymus) was added to the appropriate antibodies (1 µl of annexin V-FITC, 2.5 µl of PE-labeled anti-CD19, 2.5 µl of FITC-labeled anti-CD43, 2.5 µl of FITC-labeled anti-CD25, 2.5 µl of FITC-labeled anti-IgD, 2.5 µl of FITC-labeled anti-CD19, 2.5 µl of PE-labeled anti-CD4, and 2.5 µl of CyChrom-labeled anti-CD8 antibodies). Samples were gently vortexed and incubated on ice, in the dark, for 30 min. Samples were then fixed in 4% paraformaldehyde and analyzed using a Coulter EPICS XL-MCL flow cytometer. Thymidine Incorporation Assay of Proliferation in Bone Marrow and Spleen Cells-The wild-type, NQO1-null, and NQO2-null mice were sacrificed, and their femurs and spleens were obtained. The bones were cut, and marrow was flushed out gently with RPMI containing 10% fetal bovine serum with antibiotics. Cell samples were cultured in triplicate in 96-well plates for 48 h. 1 µCi of [3H]thymidine was added to each well. Twenty-four hours later, cells were harvested on a glass fiber filter mat using a Tomtec Harvester 96. Incorporated thymidine was measured by scintillation counter. Cell suspensions were prepared from the spleens in RPMI containing 10% fetal bovine serum with antibiotics. Cell samples were cultured in triplicate in 96-well plates. 0.5 µg of Con A was added to each well. Forty-eight hours later, 1 µCi of [3H]thymidine was added to each well. Twenty-four hours later, cells were harvested, and incorporated thymidine was measured. Evaluation of Germinal Center Response-Eight-week-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with 50 µg of alum-precipitated 4-hydroxy-3-nitrophenyl acetate (NP) conjugated to chicken γ-globulin (CGG). Twelve days after immunization, spleens were obtained. Each spleen was split in half. One half was frozen for histology (22). Sections were cut and then probed by immunohistochemistry using antibodies against the germinal center B cell marker GL-7 by procedures as described (22). The other half was suspended in cold PBS using the rough surface of glass slides. Red blood cells were lysed using red blood cell lysis buffer containing 15.5 mM NH4Cl, 1 mM KHCO3, and 0.001 mM EDTA. Cells were suspended in cold PBS at 10 × 10⁶ cells/ml. 100 µl of the spleen cell suspension was then added to 2.5 µl of FITC-labeled anti-B220 and PE-labeled anti-GL-7. Samples were gently vortexed and incubated on ice, in the dark, for 30 min. Samples were then fixed in 4% paraformaldehyde and analyzed using a Coulter EPICS XL-MCL flow cytometer. Primary and Secondary Immune Response Assessment-Eight-week-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with 50 µg of NP conjugated to CGG as described (22). Twelve days after immunization, spleen and bone marrow were obtained for primary immune response analysis. For another set of mice, 8 weeks after primary immunization, mice were injected intraperitoneally with 20 µg of NP-CGG.
Assays for serum immunoglobulin levels were performed at 0, 5, and 10 days after secondary immunization. On day 12, mice were sacrificed and analyzed for antibody-forming cells (AFC). NP-specific antibody-forming cells were quantitated by ELISPOT assay using nitrocellulose filters coated with NP-BSA 5:1 and 25:1. Labeled anti-IgM Ab and anti-IgG Ab were used to visualize NP-specific AFC. Collagen-induced Arthritis Models-Eight-week-old wild-type, NQO1-null, and NQO2-null mice were immunized with chicken collagen II to induce arthritis as described (23). Incidences of arthritis were recorded. Clinical arthritis scores were calculated using a scale from 0 to 3 for each paw, with a maximum score of 12 per mouse. Anti-collagen antibody titers were measured on days 21 and 42. On day 42, mice were euthanized. Sections of the paws were examined microscopically. NAD(P)H:NAD(P) and NRH:NR Ratio in Bone Marrow, Spleen, and Thymus-The following procedure was used to collect and process the tissue for determination of the NAD(P)H:NAD(P) and NRH:NR ratios because of the sensitivity of these molecules. The bone marrow, spleen, and thymus were surgically removed while the mice were under anesthesia to avoid changes in the levels of the pyridine nucleotides. The bone marrow was then instantly placed in liquid nitrogen (24). While frozen, the ends of the bone were cut using a surgical blade, and while the marrow was thawing, it was flushed using a solution containing 200 mM KCN, 1 mM bathophenanthroline, and 60 mM KOH. The pyridines were extracted with chloroform and analyzed from these tissues by HPLC by procedures as described previously (4,25). Electrophoretic Mobility Shift Assay-One million bone marrow cells were obtained from wild-type and NQO1-null mice and treated with 10 µg/ml lipopolysaccharide (LPS). The nuclear extract was prepared, and an electrophoretic mobility shift assay was performed by procedures as described previously (26). The NF-κB binding oligonucleotide sequence used was 5′-TTGTTACAAGGGACTTTCCGCTGGGGACTTTCCAGGCAGGCGTGG-3′. Microarray Analysis-We used the Affymetrix GeneChip mouse expression set 430 and RNA from untreated mouse bone marrow for microarray analysis. Three samples for each genotype (wild type, NQO1-null, and NQO2-null) were analyzed. Each sample included a pool of bone marrow cells from five mice. RNA samples were prepared from bone marrow cells using a Qiagen RNeasy kit. The quality of the RNA samples was confirmed with an Agilent 2100 Bioanalyzer. Our core facility analyzed the samples and provided the data to us. We used dChip 1.3 and GeneSpring software to analyze the data. We categorized alterations in gene expression in several groups according to gene function using dChip 1.3. These categories included apoptotic genes such as p53, Bax, Bcl-2, caspase 2, caspase 3, caspase 8, apoptosis inhibitor 6, and C/EBP; interleukins, chemokines, and their receptors such as CXCR4, CCL9, CCR1, CXCL12, interferon γ receptor, interferon γ-induced GTPase, interleukin 7, interleukin 10, and interleukin 10 receptor; and transcription regulation genes, DNA damage response genes, and DNA replication and metabolism genes. The results for chemokines and chemokine receptors are presented as fold increase or decrease. Immunological Phenotype of NQO1-null and NQO2-null Mice-Previously, we generated NQO1-null mice deficient in NQO1 and NQO2-null mice deficient in NQO2 (6, 27). An analysis of blood, bone marrow, and spleen from wild-type, NQO1-null, and NQO2-null mice was performed.
Flow cytometry analysis of blood lymphocytes showed a decrease in the number of CD19+ B cells and an increase in CD4+ T cells (Fig. 1, A and B). However, NQO1-null and NQO2-null mice showed higher numbers of CD19+ cells in the bone marrow (Fig. 1C; compare with A and B). To trace different stages of B cell development in the bone marrow, we analyzed bone marrow cells for CD19 and CD43 (for pro-B cells), CD19 and CD25 (for pre-B cells), and CD19 and IgD (for mature B cells). There was no difference between wild-type, NQO1-null, and NQO2-null mice in the number of B cell progenitors, pro-B and pre-B cells (Fig. 1C). However, there was a significant increase in the number of mature IgD+ B cells in the bone marrow of NQO1-null and NQO2-null mice (Fig. 1C). [Figure 1 legend, condensed: A and B, flow cytometry of blood lymphocytes; C and D, B cell development (PE-labeled anti-CD19 with FITC-labeled anti-CD43, anti-CD25, or anti-IgD) and annexin V-FITC apoptosis in bone marrow cells suspended in annexin binding buffer at 10 × 10⁶ cells/ml; E and F, flow cytometry of spleen cells for CD19, CD4, and CD8 (E) and annexin V with anti-CD19 (F); G, proliferation of bone marrow and spleen cells by [3H]thymidine incorporation, harvested 24 h after labeling. All assays were performed essentially as described by the manufacturer and measured on a Coulter EPICS XL-MCL flow cytometer.] Apoptosis in bone marrow B cells was lower in NQO1-null and NQO2-null mice as compared with wild-type mice (Fig. 1D). Flow cytometry analysis of spleen lymphocytes showed no difference between wild-type, NQO1-null, and NQO2-null mice (Fig. 1E). In contrast to the lower apoptosis in bone marrow B cells, the knock-out mice showed no difference in apoptosis of spleen B cells as compared with wild type (Fig. 1F). We then measured proliferation of bone marrow cells and spleen T cells using the thymidine incorporation assay. The results are shown in Fig. 1G. NQO1-null bone marrow cells showed a significantly higher proliferation rate than that of wild type (p < 0.01). NQO2-null bone marrow cells did not proliferate significantly differently from wild type. NQO1-null spleen cells showed a significantly higher proliferation rate than that of wild type (p < 0.001). NQO2-null spleen T cells proliferated slightly faster than those of wild type (p < 0.05).
We observed that NQO1-null mice have bigger spleens than wild-type mice. We weighed the spleens of 10 wild-type, NQO1-null, and NQO2-null mice. The NQO1-null mouse spleen is slightly but significantly bigger than that of wild-type mice (wild type, 105 ± 8, and NQO1-null, 134 ± 10 mg; p < 0.05). No significant difference between NQO2-null and wild type in spleen weight was observed. We analyzed the spleen cells from wild-type, NQO1-null, and NQO2-null mice for marginal zone B cells by flow cytometry after staining for the marginal zone B cell markers CD23 and CD21 (Fig. 2A). CD23+CD21+ marginal zone B cells showed a significant decrease in the spleen of NQO1-null mice (Fig. 2A). CD23+CD21+ marginal zone B cells were also lower in NQO2-null mice as compared with wild type (Fig. 2A). In the same experiment, CD21+ cells were found increased in NQO1-null mice as compared with wild-type mice. NQO2-null mice also demonstrated an increase in CD21+ cells. However, the increase was significantly lower than in NQO1-null mice. The significance of the increase in CD21+ cells remains unknown. [Figure 2 legend, condensed: A, flow cytometry of spleen cells suspended in cold PBS at 10 × 10⁶ cells/ml and stained with FITC-labeled anti-CD21 and PE-labeled anti-CD23 for marginal zone B cells; B and C, germinal center (GC) response 12 days after intraperitoneal injection of 50 µg of NP-CGG, evaluated by flow cytometry for the B cell marker B220 and germinal center marker GL-7 and by immunohistochemistry with anti-GL-7.] Humoral Immune Response in NQO1-null and NQO2-null Mice-To assess the humoral immune response in NQO1-null mice and NQO2-null mice, we started by evaluating the germinal center response. Eight-week-old wild-type, NQO1-null, and NQO2-null mice were injected intraperitoneally with 50 µg of alum-precipitated NP conjugated to CGG (NP-CGG). Twelve days after immunization, spleen cells and tissue sections were analyzed for germinal center response. Both flow cytometry and immunohistochemistry analysis showed a significantly lower germinal center response in NQO1-null mice (p < 0.01) (Fig. 2, B and C). Some decrease, but not statistically significant, was observed in NQO2-null mice (Fig. 2, B and C). For a more thorough assessment of humoral immune response, we measured primary and secondary antigen-specific antibody responses in wild-type, NQO1-null, and NQO2-null mice. Eight-week-old wild-type mice were immunized with NP-CGG as described earlier. Twelve days after immunization, spleen and bone marrow were obtained for primary immune response analysis. For another set of mice, 8 weeks after primary immunization, mice were injected intraperitoneally with 20 µg of soluble NP-CGG. Twelve days later, mice were sacrificed and analyzed for secondary immune response. Assessment of immune response was done by measuring the number of AFC. Labeled anti-IgM antibody and anti-IgG1 antibody were used to visualize NP-specific AFC. Both NQO1-null and NQO2-null mice (especially NQO1-null) showed weaker primary and secondary immune responses (Fig. 3). Serum levels of NP-specific IgG were measured on days 0, 5, and 10 after the secondary immunization. NP-specific IgG levels were lower in NQO1-null and NQO2-null mice (Fig. 4). [Figure 3 legend, condensed: A and C, primary humoral immune response 12 days after intraperitoneal immunization with 50 µg of NP-CGG; B and D, secondary humoral immune response 12 days after boosting with 20 µg of NP-CGG given 8 weeks after primary immunization; NP-specific AFC were quantitated by ELISPOT assay on nitrocellulose filters coated with NP-BSA 5:1 and 25:1 and visualized with labeled anti-IgM and anti-IgG antibodies.] [Figure 4 legend, condensed: serum analysis for antigen-specific IgG at 0, 5, and 10 days after secondary immunization with NP-CGG.] Autoimmunity in NQO1-null and NQO2-null Mice-Decreased apoptosis in the thymus and bone marrow cells of NQO1-null and NQO2-null mice and increased spleen T cell proliferation in NQO1-null mice pointed to the possibility of higher susceptibility to autoimmune disease. We used the collagen-induced arthritis model to assess the susceptibility of NQO1-null and NQO2-null mice to autoimmunity. We found that NQO1-null mice developed arthritis earlier and for a longer duration than wild-type mice (Fig. 5A). Arthritis in NQO1-null mice was more severe than that in wild-type mice. This is reflected as a higher arthritis clinical score (Fig. 5B). There was no significant difference between NQO2-null and wild-type mice in the onset and severity of arthritis (Fig. 5, A and B). However, arthritis lasted longer in NQO2-null mice as compared with wild type (Fig. 5, A and B). Alteration in Intracellular Redox Status, Decreased Expression and Lack of Activation of NF-κB, and Altered Chemokines and Chemokine Receptors-The analysis of bone marrow NADH, NAD, NRH, and NR showed a significant increase in the NADH:NAD ratio in NQO1-null and the NRH:NR ratio in NQO2-null mice (Fig. 6A). Similar analysis also showed a significant increase in the NADPH:NADP ratio in NQO1-null mice as compared with wild-type mice (data not shown). We used the electrophoretic mobility shift assay to investigate the expression and LPS activation of NF-κB in wild-type, NQO1-null, and NQO2-null mice. The results are presented in Fig. 6B. Decreased NF-κB binding to DNA was observed with bone marrow nuclear extract from NQO1-null mice as compared with wild-type mice (compare lanes 3 and 5). LPS treatment demonstrated a significant increase in NF-κB binding to DNA in wild-type mice (compare lanes 3 and 4).
The LPS-mediated activation of NF-κB binding was largely not observed in NQO1-null mice (compare lanes 5 and 6). The decreased binding of NF-κB and lack of LPS activation of NF-κB binding to DNA were also observed in NQO2-null mice (data not shown). However, the magnitude of the difference was lower in NQO2-null than in NQO1-null mice. The results on the increase in NADH:NAD and NRH:NR ratios and lower NF-κB binding and lack of LPS activation of NF-κB binding to DNA were also observed in the spleen and thymus of NQO1-null and NQO2-null mice (data not shown). Microarray analysis of bone marrow from untreated wild-type, NQO1-null, and NQO2-null mice was performed. The microarray analysis revealed alterations in chemokines and chemokine receptors associated with the loss of NQO1 and NQO2 in the respective null mice (Fig. 6C). Specially noted were chemokine (CXC motif) ligand 12, receptor 4, and receptor 1. Interestingly, the alterations noted were of lower magnitude in NQO2-null mice as compared with NQO1-null mice. [Figure 5 legend, condensed: A and B, collagen-induced arthritis: mice were injected intradermally at the base of the tail with 200 µg of chicken collagen II emulsified in complete Freund's adjuvant and injected 21 days later with 100 µg of collagen II in complete Freund's adjuvant; incidence of arthritis (A) and clinical arthritis scores on a scale of 0 to 3 per paw, maximum 12 per mouse (B), were recorded. C and D, flow cytometry of thymocytes suspended in annexin binding buffer at 10 × 10⁶ cells/ml and stained with 2.5 µl of PE-labeled anti-CD4 and 2.5 µl of CyChrom-labeled anti-CD8 (C) or 1 µl of annexin V-FITC (D); CD4+CD8−, CD8+CD4−, and CD4+CD8+ T cells and apoptosis in thymocytes were measured on a Coulter EPICS XL-MCL flow cytometer.] DISCUSSION The experiments in this study, for the first time, establish a physiological role of NQO1 and NQO2 in the control of immune response and autoimmunity. The NQO1-null and NQO2-null mice showed impaired humoral immune response. The impairment in humoral immune response was of higher magnitude in NQO1-null mice than in NQO2-null mice. Phenotypic analysis of NQO1-null and NQO2-null mice showed a decrease in the number of B cells in the blood and an increase in the bone marrow, whereas no change in B cell number was observed in the spleen. Further investigations revealed that mature B cells, and not pro- and pre-B cells, increased in the bone marrow of NQO1-null and NQO2-null mice. Since the maturation of B cells to IgD-positive mature B cells takes place in the peripheral lymphoid organs, spleen, and lymph nodes (not in the bone marrow), the increase in IgD-positive mature B cells in the bone marrow most likely resulted from increased homing of mature B cells into the bone marrow. Microarray analysis of the bone marrow showed increased chemokine (CXC motif) receptor 4 (CXCR4) and chemokine (CXC motif) ligand 12 (CXCL12), 20% in NQO1-null and 5% in NQO2-null mice. CXCR4 and CXCL12 are the chemokine receptor and ligand involved in the homing of B cells to the bone marrow (28).
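As a rough illustration of how the fold increase or decrease values reported for the microarray data can be derived, the short sketch below computes genotype means and signed fold changes relative to wild type. The expression values, gene subset, and the signed-ratio convention are illustrative assumptions only; the actual study analyzed Affymetrix 430 data with dChip 1.3 and GeneSpring.

```python
import pandas as pd

# Hypothetical normalized expression values, one pooled sample per gene and genotype,
# standing in for the Affymetrix GeneChip data analyzed with dChip/GeneSpring.
data = pd.DataFrame({
    "gene": ["CXCL12", "CXCR4", "CCR1"] * 3,
    "genotype": ["WT"] * 3 + ["NQO1-null"] * 3 + ["NQO2-null"] * 3,
    "expression": [100, 80, 50, 120, 96, 65, 105, 84, 55],
})

# Mean expression per gene and genotype.
means = data.pivot_table(index="gene", columns="genotype", values="expression")

def fold_change(null_mean, wt_mean):
    """Express a change as fold increase (>0) or fold decrease (<0) vs wild type."""
    ratio = null_mean / wt_mean
    return ratio if ratio >= 1 else -1.0 / ratio

for genotype in ["NQO1-null", "NQO2-null"]:
    fc = means.apply(lambda row: fold_change(row[genotype], row["WT"]), axis=1)
    print(genotype, fc.round(2).to_dict())
```

With these placeholder numbers, both null genotypes show modest fold increases in CXCL12, CXCR4, and CCR1, smaller in NQO2-null than NQO1-null, mirroring the qualitative pattern reported in Fig. 6C.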
NQO1-null and NQO2-null mice (especially NQO1-null) showed weaker primary and secondary antibody responses. This was demonstrated by the lower number of NP-specific AFC in the spleen and bone marrow and lower serum NP-specific IgG after primary and secondary immunization with NP-CGG. The germinal center response was also weaker, especially in NQO1-null mice. All these observations led to the conclusion that the loss of NQO1 or NQO2 (NQO1 more than NQO2) resulted in impaired humoral immune response, suggesting that NQO1 and NQO2 are significant endogenous factors in the regulation and proper functioning of the immune response. T helper cells are major perpetrators in autoimmunity (29). Decreased apoptosis has been linked to autoimmunity (30). Mutations in genes that regulate apoptosis, such as Fas, FasL, caspase 10, and caspase 8, result in higher susceptibility to autoimmune diseases (20,21,31,33,34). Flow cytometric analysis of the thymus showed lower apoptosis in NQO1-null and NQO2-null thymocytes as compared with wild-type mice. The thymus is a main site of T cell tolerance. T cell tolerance includes elimination of autoreactive T cells (negative selection), mostly by apoptosis (29). Lower apoptosis in the thymus of NQO1-null and NQO2-null mice might compromise T cell tolerance. This might have allowed autoreactive T cells to escape negative selection and thus increased susceptibility to develop autoimmune disease. These findings together pointed toward the possibility of increased susceptibility of NQO1-null or NQO2-null mice to develop autoimmunity. Indeed, NQO1-null mice developed arthritis earlier and for a longer duration than wild-type mice in the collagen-induced arthritis model. NQO2-null mice did not show a significant difference in autoimmunity from wild-type mice in the same experiment. The higher sensitivity of NQO1-null mice to induced arthritis could be due to impaired T cell tolerance in the lymphoid organs as a result of decreased apoptosis and increased proliferation of T cells. This is supported by the lower apoptosis seen in the thymus and bone marrow cells of NQO1-null mice and the increased proliferation in bone marrow cells and splenic T cells. The decreased apoptosis and increased proliferation in the lymphoid organs of NQO2-null mice were milder than those in NQO1-null mice. The above observations raised an interesting question regarding the mechanism of NQO1 and NQO2 regulation of immune response and autoimmunity. The loss of NQO1 and NQO2 led to alterations in intracellular redox status. This was due to accumulation of reduced NADH in NQO1-null mice and NRH in NQO2-null mice. Alterations in the redox status of the cells presumably changed transcription and/or modification of factors including the loss of expression and lack of LPS activation of NF-κB and alterations in chemokines (including CXCR4 and CXCL12). [Figure 6 legend, condensed: NQO1 relationship to immune response and autoimmunity. A, NAD(P)H:NAD(P) ratio in bone marrow of wild-type, NQO1-null, and NQO2-null mice: femurs were surgically removed, and pyridines were extracted with chloroform and analyzed by HPLC as described under "Materials and Methods"; data are shown only for NADH/NAD. B, electrophoretic mobility shift assay: one million bone marrow cells were untreated or treated with 10 µg/ml LPS for 30 min, and NF-κB binding in nuclear extracts was analyzed; only the shifted bands are shown. C, microarray analysis of bone marrow RNA; differences in selected chemokines and receptors are listed relative to wild-type mice.] The redox modulation of NF-κB and chemokine/receptor expression has been reported earlier (35). LPS has been shown to cause apoptosis by activating NF-κB in B cells (36), CD4+CD8+ thymocytes, and lymphoid organs (32). Failure of activation of NF-κB might have contributed to the reduced apoptosis in thymocytes in NQO1-null and NQO2-null mice. The alterations in intracellular redox status combined with lower expression and lack of activation of NF-κB might have altered the homing of B cells and reduced antibody responses. The changes in B cells were translated to decreased primary and secondary immune responses. Decreased apoptosis and increased proliferation of thymocytes contributed to autoimmunity in NQO1-null mice. The results on impaired immune response and autoimmunity in NQO1-null and NQO2-null mice are extremely significant and have a major impact on human health. This is because 2-4% of human individuals are homozygous for the C→T mutation in the NQO1 gene, leading to a proline-to-serine substitution, and totally lack the NQO1 protein (10,11). In addition, greater than 20% of individuals are heterozygous and carry one mutated NQO1 allele. These individuals lack 50% of the NQO1 protein. The NQO1 homozygous and heterozygous mutant individuals are expected to have impaired immune response and to be at risk for autoimmune diseases. This is a first step toward genotyping human individuals for lack of NQO1 and the problems associated with impaired immune response and autoimmunity. In conclusion, NQO1 and NQO2 are important endogenous factors in the regulation of immune response and autoimmunity. The loss of NQO1 or NQO2 (especially NQO1) results in impaired humoral immune response and higher susceptibility to autoimmune diseases. The alterations in intracellular redox status due to the loss of NQO1 and NQO2 presumably led to changes in expression and induction of factors including NF-κB, chemokines, and chemokine receptors. These changes resulted in altered B cell homing, reduced B cell response, and decreased apoptosis of thymocytes, leading to compromised immune response and autoimmunity. The detailed mechanisms of the role of NQO1 and NQO2 in the regulation of immune response and autoimmunity await future investigations.
2018-04-03T02:24:51.213Z
2006-10-13T00:00:00.000
{ "year": 2006, "sha1": "9f47b64dcc53a471fe6c551639a25a88d3947c8d", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/281/41/30917.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "e80cf0cc85ff0d70305994146f6af4c462399bba", "s2fieldsofstudy": [ "Biology", "Medicine" ], "extfieldsofstudy": [ "Biology", "Medicine" ] }
209417892
pes2o/s2orc
v3-fos-license
Effectiveness of 23-valent pneumococcal polysaccharide vaccine on elderly patients with colorectal cancer Abstract The commonly used vaccine for adults with a high risk of pneumonia is the 23-valent pneumococcal polysaccharide vaccine (PPSV23). However, its effectiveness in patients with colorectal cancer has not been investigated. This study aimed to investigate the effectiveness of PPSV23 in reducing the risk of pneumonia among elderly patients with colorectal cancer. A total of 120,605 newly diagnosed patients with colorectal cancer were identified from the Taiwan National Health Insurance Research Database between 1996 and 2010. Of these patients, 18,468 were 75 years or older in 2007 to 2010, and 3515 received PPSV23. People aged 75 years or older have been considered eligible for receiving PPSV23 vaccination in Taiwan since 2007. The specific “vaccination period” of October 2008 to December 2008 was used to minimize the potential immortal time bias. Therefore, 893 patients who received PPSV23 outside this vaccination period or died before 2009 and 2960 unvaccinated patients who died before 2009 were excluded. After propensity score matching at a 1:3 ratio, 2622 vaccinated patients and 7866 unvaccinated patients were recruited. A multivariate log-linear Poisson regression model was fitted and adjusted for potential confounders, including influenza vaccination, vaccination period, cancer treatment modalities, comorbidities, and sociodemographic variables. After 2 years of follow-up, the incidence rate of pneumonia hospitalization in the vaccinated patients was significantly lower than that in the unvaccinated patients, at 85.53 per 1000 person-years (PYs) versus 92.38 per 1000 PYs. The proportions of patients who had 2, 3, and >3 pneumonia hospitalizations per year were consistently lower in the vaccinated group than in the unvaccinated group (1.9% vs 2.0%, 0.5% vs 0.9%, and 0.7% vs 1.1%, respectively). After adjustment for covariates was made, PPSV23 vaccination was significantly associated with a reduced risk of pneumonia hospitalization, with an adjusted incidence rate ratio of 0.88 (P = .040). The overall pneumonia-free survival rate was also significantly higher in the vaccinated patients than in the unvaccinated patients (P = .001). PPSV23 vaccination was associated with a significantly reduced rate of pneumonia hospitalization in elderly patients with colorectal cancer. Introduction Colorectal cancer (CRC) is the third most commonly diagnosed malignancy and the fourth leading cause of cancer deaths in the world; it accounted for about 1.4 million new cases and almost 700,000 deaths in 2012. [1] It is also one of the leading causes of cancer-related deaths in the USA, Europe, and Asia. [2] Taiwan has a high human development index, 0.907 in 2018. Similar to many other developed countries, CRC is a major public health problem in Taiwan. According to a report of the Bureau of Health Promotion, Taiwan, CRC has been considered the most common malignancy in Taiwan since 2006, and its crude incidence rate was 65.84 per 100,000 people in 2015. [3] The standardized incidence rates of colon cancer and rectal cancer were 26.96 and 15.74 per 100,000 people in 2015 in Taiwan, respectively. [3] Cancer treatment modalities, such as surgery, radiotherapy, chemotherapy, and targeted therapies, can impair the immune system and increase susceptibility to pneumonia.
[4][5][6] Pneumonia is the most frequent type of infection in patients with cancer, and it is associated with high mortality rates. [7] In a German cohort of 89,007 patients with cancer, the standardized incidence rate of pneumonia increases by 21-fold (lung cancer), 4.3-fold (hematological malignancies), 1.8-fold (gastrointestinal tract malignancies), and 1.7-fold (breast cancer) compared with that of the matched control cohort. [8] Schmedt et al [8] also reported that 30-day mortality in community-acquired pneumonia (CAP) cases is highest in patients with lung cancer (20.0%) and ranges from 7.2% to 18.5% in CAP cases with other cancer subtypes. Pneumonia can increase mortality, the number and severity of complications, length of hospitalization, and hospital-related costs in patients with cancer. [9] Among the different pathogens causing pneumonia, Streptococcus pneumoniae is an important pathogen and still a major cause of morbidity and mortality worldwide. [10] Invasive pneumococcal disease among healthy adults is effectively prevented by the 23-valent pneumococcal polysaccharide vaccine (PPSV23; 50%-85%), which was licensed in 1983. [11,12] The effectiveness of PPSV23 has, however, never been studied in patients with CRC. Anticancer therapies may affect immune responses to vaccination, and whether they prevent the development of an adequate immune response to influenza or pneumococcal pneumonia vaccines remains controversial. A previous study showed that the serum antibody response to influenza virus vaccine in patients receiving cancer chemotherapy is weak. Some studies have, however, demonstrated that pneumococcal vaccine can stimulate an adequate immune antibody response in patients with various cancers. [13][14][15] Another study also showed that the seroconversion rate of patients with CRC receiving chemotherapy (36%) is lower than that of healthy volunteers without CRC (85%; P = .027). [16] For clinical effectiveness, no clinical follow-up studies on patients with CRC have been performed. Aging is another factor affecting the immune system. Age-dependent changes are referred to as immunosenescence, and they are partially responsible for poor immune responses to infections and the low efficacy of vaccination in elderly persons. [17] In this study, we investigated the effectiveness of PPSV23 in elderly patients with CRC aged 75 years or older. Materials and methods Our retrospective cohort study involved a specific “vaccination period” from October 2008 to December 2008. Data, including comorbidities, were obtained from 1996 to 2010. Sources of data and ethics statement Data were obtained from the National Health Insurance Research Database (NHIRD) and released for research purposes by the National Health Research Institutes, Taiwan. The NHIRD contains medical claims data for approximately 99% of Taiwanese people. [18] This study was done in accordance with the Helsinki Declaration and approved by the institutional review board (IRB) of our institution, that is, Dalin Tzu Chi Hospital of Buddhist Tzu Chi Medical Foundation (approval number, B10404001). The IRB waived the requirement for written informed consent from the patients involved because the researchers could not directly contact individual patients in this de-identified database. To ensure the accuracy of the claims, the National Health Insurance Administration (NHIA) performs quarterly expert reviews on every 50 to 100 ambulatory and inpatient claims filed by each medical institution.
[19] False diagnostic reports are liable to severe penalties from the NHIA. [20] All claims data of patients with cancer between 1996 and 2010 were used. The databases contained ambulatory care claims, inpatient hospitalization claims, the national cancer registration database, the registry of catastrophic illness, and the registry of beneficiaries, which recorded an individual's monthly income data. In Taiwan, the NHIA issues catastrophic illness certificates to all patients with pathologically confirmed malignant tumors. Patients and study groups A total of 120,605 patients with CRC were identified from the national cancer registration database and validated by the information from the catastrophic illness registry. In Taiwan, the policy of administering PPSV23 free of charge for people aged 75 years or older started in 2007. In this study, the effectiveness of the vaccine for patients with CRC receiving it after cancer diagnosis was explored. Therefore, only patients who had CRC diagnosed before 2007 were included. Our study subjects were limited to those aged 75 years or older. The flowchart of the study subjects' enrollment is presented in Figure 1. A total of 18,468 elderly patients with CRC diagnosed before 2007 were included. Among them, 3515 received PPSV23, but 14,953 did not. The number of patients receiving PPSV23 vaccination during specific periods is shown in Table 1. Most patients (2622 patients, 74.6%) received PPSV23 from October 2008 to December 2008. The “vaccination period” was defined as October 2008 to December 2008 to reduce the potential immortal time bias associated with the competing risk of death, that is, patients who survived longer tended to be healthier than those who died early and had a greater chance of receiving vaccination. Only patients who survived to the end of the vaccination period, that is, January 1, 2009, were included. As such, 893 patients in the vaccinated group and 2960 in the unvaccinated group were further excluded from the analyses, that is, patients who died before 2009 or received PPSV23 outside the defined vaccination period (Fig. 1). The follow-up period of the vaccinated and unvaccinated groups started on January 1, 2009, and ended on the date of withdrawal from the National Health Insurance (NHI) program, death, or study termination (December 31, 2010). Self-selection for vaccination might exist, considering the relatively low vaccination rate. Each vaccinated patient was subjected to propensity score matching to 3 unvaccinated patients to reduce potential confounding by indication, that is, elderly people who suffered from frequent pneumonia in the past tended to have a greater willingness to receive vaccination than the general elderly population. The propensity score was calculated from the patients' age on January 1, 2009, sex, and number of pneumonia hospitalizations over the past 3 years. A total of 2622 vaccinated patients and 7866 unvaccinated patients were recruited (Fig. 1). Measurements of endpoints and potential confounders The primary outcome in the study was all-cause bacterial pneumonia hospitalization (International Classification of Diseases). [Table 1, condensed: month distribution of PPSV23 vaccination in elderly patients with colorectal cancer, reported as numbers and percentages by month.] In this study, all-cause bacterial pneumonia included invasive and noninvasive pneumonia and excluded viral pneumonia and influenza.
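Before the endpoint definition is refined below, the 1:3 propensity-score matching just described can be sketched as follows: a logistic model of vaccination on age, sex, and prior pneumonia hospitalizations, followed by greedy nearest-neighbor selection of three controls per vaccinated patient without replacement. The simulated cohort values and the greedy, caliper-free matching rule are illustrative assumptions; the paper does not state the exact matching algorithm it used.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical cohort: age on Jan 1, 2009, sex, and pneumonia admissions in the prior 3 years.
n = 2000
df = pd.DataFrame({
    "age": rng.integers(75, 95, n),
    "male": rng.integers(0, 2, n),
    "prior_pneumonia": rng.poisson(0.3, n),
})
df["vaccinated"] = rng.random(n) < 0.2  # roughly a fifth vaccinated, as in the study

# Propensity score: modeled probability of vaccination given the matching covariates.
covars = ["age", "male", "prior_pneumonia"]
model = LogisticRegression().fit(df[covars], df["vaccinated"])
df["ps"] = model.predict_proba(df[covars])[:, 1]

# Greedy 1:3 nearest-neighbor matching without replacement on the propensity score.
treated = df[df["vaccinated"]].sort_values("ps")
pool = df[~df["vaccinated"]].copy()
matches = {}
for idx, row in treated.iterrows():
    nearest = (pool["ps"] - row["ps"]).abs().nsmallest(3).index
    matches[idx] = list(nearest)
    pool = pool.drop(nearest)  # matched controls leave the pool

print(f"{len(matches)} vaccinated patients matched to {3 * len(matches)} controls")
```

Matching on the score rather than on the raw covariates is what allows three reasonably comparable controls to be found for each vaccinated patient even when exact age/sex/history combinations are sparse.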
The primary outcome was all-cause bacterial pneumonia rather than specific pneumococcal pneumonia because a definite pathogen culture result is unnecessary during pneumonia treatment. Therefore, the frequency of pneumococcal pneumonia is highly underestimated in clinical practice and in the National Health Insurance Research Database (NHIRD), which could lead to wrong conclusions. The potential confounders considered in this study were age, sex, influenza vaccination, vaccination period, cancer treatment modalities, comorbidity, and sociodemographic variables (Table 2). Cancer treatment modalities, including surgery, radiotherapy, chemotherapy, and targeted therapy, were also adjusted. [4][5][6] The influenza vaccination status was also considered a potential confounder and adjusted in the analysis because most patients received both PPSV23 and influenza vaccines. A number of major illnesses, such as coronary heart disease, congestive heart failure (CHF), asthma, interstitial lung disease, chronic obstructive pulmonary disease (COPD), liver cirrhosis, diabetes mellitus (DM), chronic kidney disease (CKD), stroke, and dementia, which could affect susceptibility to pneumonia, were included in our analysis. [21] These comorbidity data were obtained from ambulatory care and inpatient hospitalization claims in 1996 to 2008. People with higher health awareness would be more likely to be vaccinated than the general population, so several socioeconomic variables, including urbanization level, geographic region, and monthly income-based insurance premium, were also adjusted. Patients were grouped on the basis of urbanization level (i.e., urban, suburban, and rural) in accordance with the proposed classification scheme of Liu et al. [22] The urbanization level was adjusted because of the distinct urban-rural difference in medical care accessibility in Taiwan. [23] Statistical analysis The propensity score method was used for matching. The characteristics of the 2 study groups were compared. The incidence rate of pneumonia hospitalization was calculated as the ratio of the number of pneumonia hospitalizations to the number of person-years (PYs) of follow-up. The follow-up period of both study groups started on January 1, 2009, and ended on the date of withdrawal from the NHI program, death, or study termination (December 31, 2010). The incidence rate followed a Poisson distribution, so a multivariate log-linear Poisson regression model was used to calculate the incidence rate ratios (IRRs) with all covariates included. The Kaplan-Meier method was used to estimate the overall survival time. Two statistical packages [SAS (version 9.4; SAS Institute Inc, Cary, NC) and SPSS (version 12, SPSS Inc, Chicago, IL)] were used to analyze the data. A 2-sided P value of <.05 was considered statistically significant. Results The distribution of the demographic characteristics and comorbidities, including pneumonia hospitalization history, of the 2 groups is shown in Table 2; the incidence density of pneumonia hospitalization is presented in Table 3. The proportion of the vaccinated patients with no and 1 pneumonia hospitalization per year was higher than that of the unvaccinated patients (89.1% vs 88.8%, 7.9% vs 7.2%; Table 4). The proportions of patients who had 2, 3, and >3 pneumonia hospitalizations per year were consistently lower in the vaccinated group than in the unvaccinated group (1.9% vs 2.0%, 0.5% vs 0.9%, and 0.7% vs 1.1%, respectively).
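Before turning to the adjusted models, the incidence-rate and IRR machinery described under Statistical analysis can be made concrete: crude rates per 1000 PYs are event counts divided by person-years, and the IRR comes from a log-linear Poisson model with log person-years as an offset, whose exponentiated coefficient is the rate ratio. The study used SAS and SPSS with the full covariate set; the toy data, the statsmodels implementation, and the single-covariate model below are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Toy follow-up data: events = pneumonia hospitalizations, py = person-years at risk.
df = pd.DataFrame({
    "vaccinated": [1] * 5 + [0] * 15,
    "events":     [0, 1, 0, 2, 0] + [1, 0, 0, 2, 0, 1, 0, 3, 0, 0, 1, 0, 0, 2, 0],
    "py":         np.full(20, 2.0),  # up to 2 years of follow-up each
})

# Crude incidence rates per 1000 person-years, the form the paper reports.
grp = df.groupby("vaccinated")[["events", "py"]].sum()
print(1000 * grp["events"] / grp["py"])

# Log-linear Poisson regression with log(person-years) offset; exp(coef) is the IRR.
X = sm.add_constant(df[["vaccinated"]].astype(float))
fit = sm.GLM(df["events"], X, family=sm.families.Poisson(),
             offset=np.log(df["py"])).fit()
print("IRR (vaccinated vs not):", np.exp(fit.params["vaccinated"]).round(3))
```

Using person-years in the offset is what lets patients with different lengths of follow-up contribute fairly to a single rate, the same reasoning the authors give for their PYs approach.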
After adjustment for confounders was made, our analysis showed that PPSV23 vaccination significantly reduced the pneumonia hospitalization risk, with an IRR of 0.880 (P = .04; Table 5). The adjusted IRR for sex was significantly less than 1 (0.643, P < .001), indicating that men were more at risk of pneumonia hospitalization than women. The incidence rate of pneumonia hospitalization was increased by certain cancer treatment modalities, such as radiotherapy (adjusted IRR = 1.439, P < .001) and surgery (adjusted IRR = 1.158, P = .003), but this rate was not affected by other modalities, such as targeted therapy and chemotherapy. PPSV23 and influenza vaccinations are administered from October to December every year in Taiwan (Table 2). PPSV23-vaccinated patients were much more likely to receive influenza vaccination than unvaccinated patients (92.1% vs 30.8%, P < .001; Table 2). After all covariates were adjusted in the univariate and multivariate analyses, however, influenza vaccination had no significant effect on pneumonia hospitalization (IRR = 1.056, P = .247; adjusted IRR = 1.012, P = .83; Table 5). The overall survival was significantly better in the PPSV23-vaccinated group than in the unvaccinated group (Fig. 2, P = .001). Discussion Our study indicated that pneumonia is a critical disease affecting elderly patients with CRC aged 75 years or older. The clinical effectiveness of PPSV23 had never been studied in patients with CRC. In this population-based propensity score-matched cohort study, the pneumonia hospitalization risk was decreased by 12% in the vaccinated cohort. Our results also showed that there were fewer patients with pneumonia hospitalizations ≥2 times per year in the vaccinated group than in the unvaccinated group. In addition, vaccinated patients with CRC had a higher survival rate than patients unvaccinated with PPSV23. Although anticancer therapies may impair immune responses to vaccination, PPSV23 has shown clinical benefit: we previously found that PPSV23 can significantly reduce the hospitalization frequency and mortality of patients with lung cancer during active anticancer treatment. [24] Another study also showed that PPSV23 vaccination is associated with a significantly reduced rate of pneumonia hospitalization in long-term cancer survivors. [25] In our study, PPSV23 was also effective in elderly patients with CRC. It could also be considered a feasible strategy for coping with the high risk of pneumonia in elderly patients with CRC because the cost of PPSV23 is low. Our results could encourage doctors to recommend pneumococcal vaccine for patients with cancer because we demonstrated the effectiveness of PPSV23 inoculation after cancer diagnosis. In clinical practice, oncologists often focus on cancer treatment and disregard the importance of pneumococcal vaccine for elderly people and patients with cancer. The optimal timing for vaccination is an interesting question. The effectiveness of pneumococcal vaccine inoculated before cancer diagnosis is still unknown. Our PPSV23-related studies on patients with lung cancer, long-term cancer survivors, and patients with CRC have included patients who received PPSV23 after cancer diagnosis. [24,25] The time interval between vaccine administration and chemotherapy initiation has, however, rarely been studied in adult patients with cancer. Choi et al [26] investigated optimal vaccination timing by vaccinating patients 2 weeks before or on the day of chemotherapy initiation to determine the antibody response of patients with CRC to the 13-valent pneumococcal conjugate vaccine. They found no significant differences.
Therefore, the clinical effectiveness of vaccines given in different vaccine periods should be further investigated. For the vaccinated patients, the pneumonia incidence in our study was still high, possibly for several reasons. The most important reason was that the endpoint of this study was all-cause bacterial pneumonia rather than specific pneumococcal pneumonia. The second reason was that the patients were very old, that is, they were 75 years or older. The third reason was that their cancer status or cancer treatments, such as chemotherapy/radiotherapy, resulted in a relatively immunosuppressed status. Influenza infection may predispose some patients to bacterial pneumonia, but influenza vaccination did not decrease the number of bacterial pneumonia hospitalizations in this study, possibly because of the following: our endpoint outcome was strictly bacterial pneumonia, not viral pneumonia or influenza; some circulating virus strains may not have been covered by the influenza vaccine in that year; and our endpoint was hospitalized pneumonia, which is more severe than CAP. In our study, surgery included laparoscopic and open surgery. Surgery and radiotherapy were associated with a high risk of pneumonia hospitalization in elderly patients with CRC. Several comorbidities, such as CHF, asthma, interstitial pulmonary disease, COPD, DM, CKD, stroke, and dementia, increased the risk of pneumonia hospitalization in this study. Jackson et al [27] identified CHF, asthma, COPD, DM, stroke, dementia, and lung cancer as risk factors for pneumonia in the general population aged 65 years or older; they also identified CHF, asthma, COPD, and dementia as risk factors in elderly patients with cancer. In a multicenter, retrospective cohort study in South Korea, old age, more comorbidities, ulcer disease, history of pneumonia, and smoking were associated with an increased incidence of pneumonia within 1 year after cancer surgery. [28] Study strengths This study had several strengths. First, it was a nationwide population-based study that included all patients with CRC and all hospitals in Taiwan, leaving a low chance of selection bias and attrition bias (loss to follow-up) and having a relatively large sample size. Second, the utilization of a propensity score matching strategy, with age, sex, and previous personal pneumonia history, to select unvaccinated patients also helped reduce confounding by indication, that is, elderly people who suffered from frequent pneumonia would have a greater willingness to receive vaccination than the general elderly population. [Table 3, condensed: incidence density of pneumonia hospitalization, by person-years, in elderly patients with colorectal cancer vaccinated and unvaccinated with PPSV23.] Lastly, a PYs approach was used to capture the occurrence of multiple pneumonia events, thereby reducing the potential bias due to different lengths of follow-up between the vaccinated and unvaccinated groups. This approach was important because of the relatively short life expectancy of elderly patients with CRC (aged 75 years or older). Study limitations Our study also had several limitations. First, we conducted an observational nationwide population-based matched cohort study rather than a randomized trial, so our study was still exposed to certain unmeasured confounders, even though our patients were matched with propensity scores and analyzed through multivariate analysis. Second, this study did not collect cancer stage information.
Cancer treatment modalities, such as surgery, chemotherapy, radiotherapy, and targeted therapy, which are, however, relevant to individual cancer stage, were included in our analysis. [Table 5, condensed: crude and adjusted incidence rate ratios of pneumonia hospitalization in elderly patients with colorectal cancer.] The PYs approach eliminated the effect of different lengths of follow-up due to different cancer stages. Third, the database used was limited to data routinely collected for the National Health Insurance system; that is, it does not include nonroutinely collected data, such as personal smoking history, although COPD was included in the adjustment in this study. Fourth, the conclusion of this population-based cohort study was limited to patients with CRC in this age group because the “free vaccine” policy applies only to those aged 75 years or older. Conclusion PPSV23 vaccination was associated with a significantly reduced rate of pneumonia hospitalization in elderly patients with CRC.
2019-12-19T09:19:21.496Z
2019-12-01T00:00:00.000
{ "year": 2019, "sha1": "d2d69dae09da93c0bf1177cfe0bffa7f74a203cd", "oa_license": "CCBY", "oa_url": "https://europepmc.org/articles/pmc6922596?pdf=render", "oa_status": "GREEN", "pdf_src": "PubMedCentral", "pdf_hash": "8d125e2c607f1e8fb670ee341f24f30c89a85521", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
246060154
pes2o/s2orc
v3-fos-license
Comparative Evaluation of the Rheological and Proppant Handling Capability of Detarium microcarpum as a Viscosifier in Hydraulic Fracturing Fluid Design The most frequent viscosifier used in hydraulic fracturing fluid design is guar and its derivatives. However, guar leaves residues in solution, is unstable at higher temperatures, and is not immune to the market dynamics of demand and supply. This research aims to source an alternate hydrocolloid for hydraulic fracturing fluid development. Detarium microcarpum (local), Cyamopsis tetragonoloba (imported), and polyanionic cellulose-regular PAC-R (imported) were sourced, isolated, and used in carrier fluid design. The rheological properties of the three fluids were investigated at 27°C, 57°C, and 85°C. The rheological models were generated and compared, with the imported samples as the control. Also, to analyse the proppant handling capacity of each of the carrier fluids, the geometry of the proppant grains and the rheology of the carrier fluids were used to compute the coefficient of drag, the drag force, and the proppant settling velocity in the carrier fluids. INTRODUCTION Hydraulic fracturing is a rock stimulation process that involves injecting fracturing fluids at high pressure and flow rates into the rock to improve permeability and build a network of pore spaces. The goal is to increase the rock's conductivity and the surface area that contributes to flow. The oil and gas industry has been able to explore the recovery of oil and gas from previously unexplored tight and ultra-tight reservoirs due to the success of hydraulic fracturing operations. Fracturing fluids are essential in the hydraulic fracturing process for enhancing oil and gas production in porous media. API RP 13M (2018) specifies that hydraulic fracture fluids must have adequate viscosity to originate and propagate hydraulic fractures, as well as to suspend and convey propping agents deep into the fracture. The following rheologically related features should be present in fracturing fluids: adequate viscosity, low treating pipe friction, shear stability, thermal stability, low to moderate fluid loss properties, and controlled degradability. The base fluid, which might be water, oil, or foam; viscosifiers or polymers; crosslinkers; breakers; and proppants are all important components of a typical hydraulic fracturing fluid. Some additives could be added depending on the fluid's characteristic quality or the reservoir's nature. Biocides, buffers, clay stabilizers, fluid loss additives, friction reducers, and surfactants are examples of such additives. This research focuses on the polymer or viscosifier, which is one of the most important components of a hydraulic fracturing fluid. Galactomannan as a Key Ingredient in Biopolymers Galactomannans are heterogeneous carbohydrates found throughout nature. They are primarily found in the endosperm of Leguminosae seeds. They are mostly made up of mannose and galactose, in varying proportions according to the species. They can be employed directly in their natural condition or as derivatives. The chemical structures, chain length, availability of cis-OH groups, and substituents all influence the properties of galactomannans [1][2][3][4][5]. Higher solubility is achieved by increasing substitution in the main chain.
Galactomannan is an important enhancing agent in processes that require a hydrophilic system to be thickened, suspended, coated, and so on. Because they have similar sugar compositions, galactomannans from Leguminosae seeds are a feasible alternative source for polysaccharides utilized in industry, such as guar and locust bean gums [4,5]. Variations in the degree of substitution and crosslinking ability may, however, result in distinct chemical characteristics. J. Du et al. [6] created a hydraulic fracturing fluid by crosslinking an ionic polymer gel (hydroxypropyl trimethylammonium chloride guar-cationic guar) with a bola surfactant fluid (bola-carboxylate polypropylene glycol). Due to the influence of the dual systems, it was claimed to have significantly better properties and distinctive traits. When the temperature rises, the viscosity of the fracture fluid rises abruptly, with remarkable self-assembly recovery from shearing. Due to the creation of a network structure and supramolecular microspheres at varying pH, it also demonstrates pH-responsive viscosity variations and modest permeability impairment. The most used synthetic polymer, according to [7], is polyacrylamide (PAM) and its variants. PAM and its derivatives are subject to hydrolysis of the acrylamide groups, which reduces the fluid's thermal stability. Due to the lack of crosslinkable groups, the viscosity, elasticity, and thermal stability of these polymers are severely constrained. He offered a new nonresidual fracturing fluid developed by studying the structure of 2-acrylamido-2-methylpropanesulfonic acid (AMPS). The thermal endurance of the fluid is improved by adding the high-temperature-tolerant AMPS groups to the PAM. Additionally, the carboxyl provided by the acrylic acid incorporated into the polymer molecules is capable of crosslinking with multivalent transition metal ions, increasing the viscosity of the fluid. [8] proposed a new fracturing fluid that would overcome the drawbacks of guar and its derivatives (high insoluble residue, poor shear resistance, and pore throat plugging), viscoelastic surfactants (VES; loss of filtrates, difficulty in breaking the gel, and high fluid cost), and hydrophobic associating water-soluble polymers (HAWSP; high initial viscosity, equipment damage before operation). Prior to this study, some researchers claimed that sacrificing viscosity for high elasticity outweighs any benefit that a high viscosity might provide, because high elasticity improves sand-carrying capacity and reduces friction between the fluid and the equipment [9][10][11][12]. Hydrophobic association interactions, electrostatic bridge effects, hydrogen bonds, and Van der Waals forces can all help increase the elasticity of polymer solutions. HELV was made by copolymerizing acrylamide (AM), acrylic acid (AA), 4-isopropenylcarbamoylbenzene sulfonic acid (AMBS), and N-(3-methacrylamidopropyl)-N,N-dimethyldodecan-1-aminium (DM-12). By incorporating a benzene ring, sulfonates, and long hydrophobic chains into the polymer structure, a copolymer solution with low viscosity and great flexibility was created. A new hydraulic fracturing fluid with exceptional viscoelasticity and thixotropy was thus developed. [13] proposed a novel sort of fluid in which two types of liquid are injected at the same time with no proppant. Fracturing fluid and supporting solids are the two forms of fluid.
At high temperatures, a type of fluid known as phase change liquid (PCL) will solidify and serve as a proppant to prevent fracture closure. The second, non-phase change liquid (NPCL) undergoes no phase shift during the fracturing process and functions similarly to standard fracturing fluid. [14] investigated the effects of hydroxyl groups and oxygen atoms on the rheological and electrokinetic features of shear thickening fluids (STFs) as a function of the chain length and branching of the carrier fluid. Carrier fluids included ethylene glycol, triethylene glycol, 1,3-propanediol, glycerine, poly(propylene glycol) of various molecular weights, and poly(propylene glycol) triol (dispersants) [15][16][17][18]. The solid phase was silica powder with an average particle size of 100 nm. He measured the zeta potential, particle size distribution, and steady-state and dynamic rheological properties. The findings reveal that varying the number of -OH groups and oxygen atoms, as well as the chain length and branching of carrier fluids, has a substantial impact on intermolecular interactions and that the rheological features of a hydraulic fracturing fluid may thereby be controlled [19,20]. High viscosity friction reducers (HVFRs), which are typically high molecular weight polyacrylamides, have been recommended as a biopolymer alternative [21]. In over 26 case studies, the fluid demonstrated greater proppant transport capabilities, nearly 100 percent maintained conductivity, cost savings, a 50 percent decrease in chemical usage, less operational equipment on site, a 30 percent reduction in water consumption, and fewer environmental concerns. The control fluid in this study is PAC-R, a polyanionic cellulose that is used in conjunction with guar. [22] looked at a new polyacrylamide-based synthetic polymer. For usage in high temperature and high salt reservoirs, poly(acrylamide-co-acrylic acid-co-2-acrylamido-2-methyl-1-propanesulfonic acid) (P3A) was developed. In higher temperature and salt reservoirs, it performed better than guar. They came to the conclusion that at higher temperatures, polyacrylamide is a good alternative to the thermally unstable guar. Detarium microcarpum as a Viscosifier Detarium microcarpum is a common tropical food ingredient used to change the rheology of soups in African cuisines. Its seed flours serve as thickeners, emulsifiers, and stabilizers in soups, as well as imparting distinct flavors [23]. The polysaccharide component of the seed endosperms is responsible for the thickening qualities of these seed flours. The seed of D. microcarpum contains around 60% of a water-soluble polysaccharide, which is primarily a xyloglucan. These seeds' flours have a distinct behavior in hot water, exhibiting varying degrees of viscoelastic properties [24]. According to Uzomah and Odusanya, D. microcarpum seed flour contains the following components: ash (1.4-3.5%), protein (27-37%), fat (14.45-15%), crude fiber (2.76-2.9%), and carbohydrate (39-49.21%). [25] hulled and dried the Detarium microcarpum seed at room temperature, and the endosperms were milled to a fine particle size and defatted in a Soxhlet extractor with n-hexane for 12 hours. The flour was made from the defatted samples. Approximately 50 g of flour was extracted for 72 hours using ethanol. The ethanolic extract was then concentrated under vacuum at 45°C in a rotary evaporator until nearly 90% of the solvent was removed. This was put into a weighed crucible and dried in a heated water bath to a constant weight. Constitutive Equations Used in the Analysis of the Carrier Fluids As standard hydraulic fracturing fluid exhibits Herschel-Bulkley behavior, the fluids were taken to be Herschel-Bulkley fluids at least until it could be demonstrated otherwise. The rheology, drag force, and settling velocity were calculated using the constitutive equations below. Objective and Significance of the Study The conventional hydraulic fracturing fluid viscosifier, Cyamopsis tetragonoloba (CT), and the conventional drilling mud viscosifier, PAC-R, were used as benchmarks to examine the suitability of Detarium microcarpum (DM) as a polymer in hydraulic fracturing fluid design. The study will show how Detarium microcarpum (DM) compares to guar gum and PAC-R in terms of performance. In addition, using a locally available biopolymer for hydraulic fracturing fluid design would drastically reduce the cost of both the fluid design and the hydraulic fracturing process. Sample Collection The DM seed was bought at Rumuokoro Market, Port Harcourt, while the CT and PR were bought processed from Joechem Chemicals, Choba, Uniport. Procedure The Detarium microcarpum (DM) seed is dehulled and the endosperm is pulverized using a hammer mill. A blender is used to blend the pulverized seed into a fine powder. To extract the oil from the powder, it is wrapped with crystalline filter paper and placed in a Soxhlet extractor; propanol is used for this. In a continuous reflux procedure, the content is left in the Soxhlet extractor for three days. The resulting powder was baked for 6 hours at 80°C in an oven. The obtained DM powder is mixed separately with the already processed CT and PR at a concentration of 10 g per litre of water. The three samples are placed in a Fann viscometer, with dial readings collected at 3, 6, 100, 200, 300, and 600 RPM at 27°C. At temperatures of 57°C and 85°C, the procedure is repeated. The travel time of proppant sand particles (78% at 0.002 m grain diameter, ASTM No. 10) down the burettes is taken and used as the raw data to compute the drag force and the settling velocity, with DM, CT, and PR at a concentration of 10 g/l in three burettes. DATA PRESENTATION AND DISCUSSION OF RESULTS Appendix 2 shows the rheology of Detarium microcarpum (DM), Cyamopsis tetragonoloba (CT), and the polyanionic cellulose filtration control additive PAC-R (PR) after 10 sec and 10 min in a viscometer at 27°C, 57°C, and 85°C. Equations 4 and 5 were used to convert the readings to oilfield units, which are highlighted in Tables 4, 5, and 6. For the diameter of the proppant sand, the sizes from sieve analysis are converted using the sieve conversion chart in Appendix 1. Table 1 shows the dial readings of DM at temperatures of 27°C, 57°C, and 85°C, generated using equations 1-7 from the raw data in Appendix 2, Table 2. In Table 1, the shear strain and shear stress at 27°C, 57°C, and 85°C are computed and presented. With an increase in temperature, DM shows a decrease in shear stress. With a reduction in shear strain, there is also a reduction in shear stress. With increasing temperature, the flow behavior index, n, and the consistency factor, k, obtained from equations 6 and 7, decrease. Table 2 shows the CT dial readings at temperatures of 27°C, 57°C, and 85°C, generated using equations 1-7 and the raw data from Appendix 2, Table 3. In Table 2, the shear strain and shear stress at 27°C, 57°C, and 85°C are computed and presented. With a rise in temperature, Cyamopsis tetragonoloba shows an overall decrease in shear stress.
Constitutive Equations used in the Analysis of the Carrier Fluids As standard hydraulic fracturing fluid exhibits Herschel-Bulkley behavior, the fluids were treated as Herschel-Bulkley fluids unless demonstrated otherwise. The rheology, drag velocity, and settling velocity were calculated using the constitutive equations below. Objective and Significance of the Study The conventional hydraulic fracturing fluid viscosifier, Cyamopsis tetragonoloba (CT), and the conventional drilling mud viscosifier, PAC-R, were used as benchmarks to examine the suitability of Detarium microcarpum (DM) as a polymer in hydraulic fracturing fluid design. The study will show how Detarium microcarpum (DM) compares to guar gum and PAC-R in terms of performance. In addition, using a locally available biopolymer for hydraulic fracturing fluid design would drastically reduce the cost of both the fluid design and the hydraulic fracturing process. Sample Collection The DM seed was bought at Rumuokoro Market, Port Harcourt, while the CT and PR were bought already processed from Joechem Chemicals, Choba, Uniport. Procedure The Detarium microcarpum (DM) seed is dehulled and the endosperm is pulverized using a hammer mill. A blender is used to blend the pulverized seed into a fine powder. To extract the oil from the powder, it is wrapped in crystalline filter paper and placed in a Soxhlet extractor, with propanol used as the solvent. In a continuous reflux procedure, the content is left in the Soxhlet extractor for three days. The resulting powder was baked for 6 hours at 80 °C in an oven. The obtained DM powder is mixed separately with the already processed CT and PR at a concentration of 10 g per litre of water. The three samples are placed in a Fann viscometer, with dial readings collected at 3, 6, 100, 200, 300, and 600 RPM at 27 °C. At temperatures of 57 °C and 85 °C, the procedure is repeated. The travel time of proppant sand particles of 78% 0.002 m grain diameter (ASTM 10) down the burettes is taken and used as the raw data to compute the drag force and the settling velocity with DM, CT, and PR at a concentration of 10 g/l in three burettes. DATA PRESENTATION AND DISCUSSION OF RESULTS Appendix 2 shows the rheology of Detarium microcarpum (DM), Cyamopsis tetragonoloba (CT), and the polyanionic-cellulose filtration control additive PAC R (PR) after 10 sec and 10 min in a viscometer at 27 °C, 57 °C, and 85 °C. Equations 4 and 5 were used to convert the readings to oilfield units, which are highlighted in 4, 5, and 6. For the diameter of the proppant sand, the sizes from sieve analysis are converted using the sieve conversion chart in Appendix 1. Table 1 shows the dial readings of DM at temperatures of 27 °C, 57 °C, and 85 °C, generated using equations 1-7 from the raw data in Appendix 2, Table 2. In Table 1, the shear strain and shear stress at 27 °C, 57 °C, and 85 °C are computed and presented. With an increase in temperature, DM shows a decrease in shear stress. With a reduction in shear strain, there is also a reduction in shear stress. With increasing temperature, the flow behavior index, n, and the consistency factor, k, obtained from equations 6 and 7, decrease. Table 2 shows the CT dial readings at temperatures of 27 °C, 57 °C, and 85 °C, generated using equations 1-7 and the raw data from Appendix 2, Table 3. In Table 2, the shear strain and shear stress at 27 °C, 57 °C, and 85 °C are computed and presented. With a rise in temperature, Cyamopsis tetragonoloba shows an overall decrease in shear stress.
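The flow behavior index n and consistency factor k referred to above are obtained from the viscometer dial readings via equations 1-7, which are not reproduced in this text. Purely as an illustration, the following Python sketch applies the commonly used Fann 35 conversions for a Herschel-Bulkley fluid; the conversion constants (1.066 lbf/100 ft² per dial unit, 1.703 s⁻¹ per RPM) and the example dial readings are assumptions and may not match the study's own equations or appendix data.

import math

def herschel_bulkley_from_fann(theta600, theta300, theta6, theta3):
    """Estimate Herschel-Bulkley parameters from Fann 35 dial readings.

    Assumes the standard oilfield conversions (shear stress in lbf/100 ft^2
    = 1.066 x dial reading, shear rate in 1/s = 1.703 x RPM).
    """
    # Yield point estimated from the low-shear readings (dial units).
    tau_y_dial = 2.0 * theta3 - theta6
    # Flow behavior index n (dimensionless).
    n = 3.32 * math.log10((theta600 - tau_y_dial) / (theta300 - tau_y_dial))
    # Consistency factor k in lbf.s^n/100 ft^2 (511 1/s is the shear rate at 300 RPM).
    k = 1.066 * (theta300 - tau_y_dial) / (511.0 ** n)
    tau_y = 1.066 * tau_y_dial
    return tau_y, n, k

# Hypothetical dial readings for illustration only (not taken from the paper's appendix).
tau_y, n, k = herschel_bulkley_from_fann(theta600=28, theta300=18, theta6=3, theta3=2)
print(f"yield point = {tau_y:.4f} lbf/100 ft^2, n = {n:.3f}, k = {k:.4f}")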
With a reduction in shear strain, there is also a reduction in shear stress. The flow behavior index of Cyamopsis tetragonoloba, on the other hand, increases as the temperature rises. However, when the temperature rises, the consistency factor drops. Table 3 shows the dial readings of the PAC R at temperatures of 27 °C, 57 °C, and 85 °C, generated using equations 1-7 from the raw data in Appendix 2, Table 4. In Table 3, the shear strain and shear stress at 27 °C, 57 °C, and 85 °C are computed and presented. With an increase in temperature, PAC R shows an overall decrease in shear stress. With a reduction in shear strain, there is also a reduction in shear stress. The flow behavior index has a small slant to it: it rises at 57 °C and falls slightly at 85 °C. The consistency factor, on the other hand, follows a predictable pattern of diminishing as the temperature rises. The DM has its best rheology at the laboratory temperature of 27 °C, as seen in Fig 1. Its rheology is reduced at the higher temperatures of 57 °C and 85 °C. An intriguing aspect of this finding is that the curves are closely knit, meaning there is not much variation in rheology between 57 °C and 85 °C. Also, with a yield point of 0.0316 lbf/100 ft² at 27 °C, the yield point is highest at the laboratory temperature. The shear stress-shear strain plot for CT (shown in Fig 2) shows its rheological behavior. The curves at the three temperatures are found to be equidistant, which indicates that as the temperature rises the polymer continues to denature. Also, with a yield point of 0.1293 lbf/100 ft² at 27 °C, the yield point is highest at the laboratory temperature; at 57 °C and 85 °C, the yield points are 0.0495 lbf/100 ft² and 0.0191 lbf/100 ft², respectively. The plot of shear stress against shear strain for the PR at various temperatures (Fig 5) shows closely knit behavior at lower shear strains and equidistant, closely knit behavior at higher shear strains, meaning there is not much variation in rheology between 57 °C and 85 °C. Also, with a yield point of 0.0449 lbf/100 ft² at 27 °C, the yield point is highest at the laboratory temperature; at 57 °C and 85 °C, the yield points are 0.0371 lbf/100 ft² and 0.0284 lbf/100 ft², respectively. At 27 °C, the three biopolymers are compared in Fig. 4. The best rheology is seen in CT, followed by PR, and finally DM. The rheology of the three biopolymers also shows an equidistant difference, which implies that there is a significant difference in their laboratory performances: CT > PR > DM. Recall that CT is the industry standard at lower temperatures. Also, with 0.1293 lbf/100 ft², CT has the highest yield point. The three biopolymers are compared in Fig. 5 at 57 °C. The best rheology is again seen in CT, followed by PR, and finally DM. The rheology of the three biopolymers also shows an equidistant difference. The gap between the DM and PR stays as large as it was at 27 °C; however, the gap between the CT and PR gradually reduces at 57 °C, indicating that PR improves at higher temperatures. In Fig. 4, the rheological behavior of the three polymers at the laboratory temperature of 27 °C revealed CT to be the best by a considerable margin, with the PR a close second. The difference between CT and PR has shrunk at the higher temperature, while the DM remained at the bottom of the heap. This suggests that the PR could be an effective substitute for CT in a job involving high-temperature hydraulic fracturing. The consistency index, k, of CT shows a drop in value, whereas the consistency indices of PR and DM remain flat over the temperature range.
This demonstrates PR and DM's efficacy at greater temperatures. The temperature-denaturing problem that is associated with CT is therefore less likely to occur in DM and PR. Drag Force and Settling Velocity The proppant handling capability of the three polymers is shown in Table 4. Equations 8 through 13 were used to create the table. The best drag or buoyancy force is 4.415 x 10^-5 N for PR. This means that the suspended solids have the least potential to settle out of the PR solution. The settling or terminal velocity is best with CT, which has the slowest settling velocity of 0.8 mm/s, so the proppant settles out of its solution most slowly. CONCLUSIONS From this work, the following findings can be drawn: i. Herschel-Bulkley fluid behavior was seen in the carrier fluids made from the three polymers. ii. The flow behavior index for the three polymers was calculated, and it was discovered that the flow behavior index increased as the temperature rose. At a concentration loading of 10 g/l, the commercial CT had the highest values, followed by PR, and then DM. iii. The flow consistency index, on the other hand, declines as temperature rises for all three polymers, with no single clear trend within each polymer. iv. At greater shear rates, CT has the highest rheology and is still the best polymer. v. At higher temperatures, the PR demonstrated the highest rheology among the alternatives, suggesting that it could be the best substitute for CT. vi. The best drag force is obtained with PR, while the best settling velocity is obtained with CT. vii. The DM is ineffective as a replacement for commercial CT. CONTRIBUTION TO KNOWLEDGE The rheological and proppant-carrying capacities of a hydraulic fracturing fluid were determined using both indigenous and commercial polymers. Although the characteristics showed a trend, the indigenous polymer is not suitable for use as a hydraulic fracturing fluid polymer. Future research can build on this work to investigate whether the indigenous polymer may yet be adapted for use in hydraulic fracturing operations.
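Equations 8 through 13, used to generate the drag force and settling velocity values in Table 4, are likewise not reproduced in this text. Purely as an illustration of the kind of calculation involved, the sketch below applies a Stokes-regime force balance with an assumed apparent viscosity; it is not the study's actual method, and the fluid and proppant property values are hypothetical placeholders, not measurements from this work.

import math

# Stokes-regime estimate of proppant settling velocity and drag force.
# All numerical values below are hypothetical placeholders, not data from the study.
g = 9.81            # gravity, m/s^2
d = 0.002           # proppant grain diameter, m (ASTM 10 sand, as used in the study)
rho_p = 2650.0      # proppant (sand) density, kg/m^3 (assumed)
rho_f = 1000.0      # carrier fluid density, kg/m^3 (assumed)
mu_app = 0.5        # apparent viscosity of the gelled fluid, Pa.s (assumed)

# Terminal (settling) velocity from the Stokes drag balance.
v_t = g * d**2 * (rho_p - rho_f) / (18.0 * mu_app)

# Stokes drag force on a single grain at that velocity.
F_d = 3.0 * math.pi * mu_app * d * v_t

print(f"settling velocity = {v_t * 1000:.2f} mm/s, drag force = {F_d:.3e} N")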
2022-01-20T16:12:43.526Z
2021-12-20T00:00:00.000
{ "year": 2021, "sha1": "343feb3bc9d59a8f7c6ea2002114d62d27997730", "oa_license": null, "oa_url": "https://www.journaljenrr.com/index.php/JENRR/article/download/30236/56740", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "a6420e64cfc282523279eb8d1c991089b721a4ca", "s2fieldsofstudy": [ "Environmental Science", "Materials Science", "Engineering" ], "extfieldsofstudy": [] }
230594086
pes2o/s2orc
v3-fos-license
Outcomes of Cryptococcus meningoencephalitis and associated magnetic resonance imaging findings Woo-Jin Lee Seoul National University Hospital Young Jin Ryu Seoul National University Bundang Hospital Jangsup Moon Seoul National University Hospital Soon-Tae Lee Seoul National University Hospital Keun-Hwa Jung Seoul National University Hospital Kyung-Il Park Seoul National University Hospital Manho Kim Seoul National University Hospital Sang Kun Lee Seoul National University Hospital Kon Chu (  stemcell.snu@gmail.com ) Seoul National University Hospital Introduction Cryptococcus neoformans meningoencephalitis is a serious central nervous system (CNS) complication in immunocompromised patients and is associated with a high mortality rate. [1][2][3][4] Although protocols including the administration of intravenous amphotericin B combined with flucytosine for acute induction treatment and fluconazole for consolidation and long-term maintenance have been established as the standard regimen, 2-5 the clinical outcomes are considerably heterogeneous and a significant portion of patients with mild baseline severity neurologically deteriorate and end up with death or permanent sequelae. 1,3,6,7 Prognostic factors for poor outcomes include old age, higher antigen titer in the cerebrospinal fluid (CSF), larger ex vivo capsule size of the fungus, increased or decreased intracranial pressure (ICP), high peripheral white blood cell (WBC) count, low body weight, anemia, and features that constitute encephalitis such as reduced Glasgow coma scale (GCS) scores or presence of a seizure; however, these markers do not account for the neurological outcomes. 1,8,9 Additionally, a marker that reflects the disease pathomechanism and estimates the risk of disease progression and poor neurological outcome is still lacking. 6,7 The major route of entry of Cryptococcus into the CNS might be the key to explaining the mechanism of disease progression and subsequent poor outcomes. Leukocyte-bound or free Cryptococci can exit the small-sized vessels in the brain and accumulate in the perivascular space of the CNS, especially the peri-venular space. 10 Considering that the peri-venular space lacks a pial membrane, 11 it can be postulated that the degree of peri-venular flow stagnation caused by the accumulated Cryptococcus might determine the risk of Cryptococcus invasion into the brain parenchyma, manifesting as the progression of disease. 4,10 Enlarged perivascular space (ePVS) is a common brain magnetic resonance imaging (MRI) feature associated with Cryptococcus meningoencephalitis. 12,13 Given that ePVS might reflect the perivascular CSF flow stagnation caused by Cryptococcus accumulation, its degree might predict the risk of disease progression and poor outcomes. Similarly, other MRI findings such as parenchymal cryptococcoma or hydrocephalus might be utilized to monitor the neurological deterioration due to disease progression. 12,13 In this study, we hypothesized that brain MRI findings might reflect the pathomechanism underlying disease progression and predict the outcomes of Cryptococcus meningoencephalitis, and analyzed the brain MRI findings, their serial changes, and their association with the disease progression and outcomes. Study subjects This retrospective cohort study initially included all consecutive individuals admitted to the neurology department of the Seoul National University Hospital between January 2000 and December 2019 who were diagnosed with Cryptococcus meningoencephalitis.
Among the initially included 117 individuals, the final study population was defined according to the following criteria: (1) underwent baseline brain MRI evaluation; (2) availability of clinical, treatment, laboratory, and long-term (> 6 months) neurological outcome data. According to these criteria, 33 patients without brain MRI evaluations and eight with inadequate data were sequentially excluded, and the remaining 76 individuals were included in the study analysis. Diagnosis of Cryptococcus meningoencephalitis was based on the detection of the Cryptococcus antigen in CSF by latex agglutination or by lateral flow assay, with or without detection of Cryptococcus in CSF by culture or India ink assay. 2,14,15 The design of this study was approved by the institutional review board of the Seoul National University Hospital (SNUH) and the study was performed in compliance with the SNUH IRB regulations and the International Conference on Harmonisation guideline for Good Clinical Practice. Written informed consent was obtained from each patient or the patient's legal surrogate. Clinical and laboratory evaluation Along with the demographic information, patients' underlying immune status was reviewed and the causes of immunodeficiency were categorized as follows: Human Immunodeficiency Virus (HIV) infection, hematologic malignancy, solid organ cancer, post-transplant status, and long-term use of high-dose immune suppressants (for indications other than cancer treatment or post-transplantation immunosuppression). 4,16 At baseline, an encephalitis feature was defined according to the 2013 Consensus Statement of the International Encephalitis Consortium diagnostic criteria as: (1) altered mental status lasting more than 24 hours without an alternative cause and (2) 3 or more of the following: documented fever (> 38.0 °C); seizures not fully attributable to a preexisting seizure disorder; new onset of focal neurologic findings; CSF WBC count ≥ 5/mm3; and abnormal brain MRI findings suggestive of encephalitis. 17 Baseline GCS score and modified Rankin Scale (mRS) score data were also obtained from the patients' medical records. 1 CSF analysis included the evaluation of protein levels, WBC counts, and the elevation in the opening pressure (≥ 20 cmH2O). 8 CSF Cryptococcus antigen titer was evaluated semi-quantitatively, and a high antigen titer was defined as antigen detection at > 1:1 000 dilution. 1 Treatment profile analysis Intravenous amphotericin (0.7-1.0 mg/kg/day) with or without flucytosine (100 mg/kg/day) or fluconazole (400-800 mg/kg/day) was used for the induction treatment period (within 2 weeks from the treatment initiation). Oral fluconazole was used during the consolidation (8 weeks after the induction treatment) and long-term maintenance treatment periods in most patients (74/76, 97.4%). [2][3][4][5] Treatment profiles with the durations of each treatment regimen were reviewed. Outcome analysis The scores on the mRS were obtained at the time of treatment initiation, at 2 weeks, at 10 weeks, and at 6 months. As the primary outcome, an mRS score of > 2 was designated as a poor 6-month neurological outcome. Serial follow-up CSF data at 2 weeks (window time of ± 3 days) and at 10 weeks were also collected. Cryptococcoma was defined as single or multiple discrete T2/FLAIR hyperintensity lesions with T1 hypointensity in the brain parenchyma. 18 Hydrocephalus was defined as an Evans' index (the ratio of the maximal frontal horn width of the lateral ventricles to the transverse inner skull diameter) of ≥ 0.3 (Fig. 1).
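As a small illustration of the imaging criterion just defined, the following sketch computes the Evans' index and applies the ≥ 0.3 hydrocephalus cut-off; the measurement values are hypothetical and are not taken from the study.

def evans_index(frontal_horn_width_mm: float, inner_skull_diameter_mm: float) -> float:
    """Evans' index: maximal frontal horn width / transverse inner skull diameter."""
    return frontal_horn_width_mm / inner_skull_diameter_mm

def has_hydrocephalus(frontal_horn_width_mm: float, inner_skull_diameter_mm: float) -> bool:
    # The study defines hydrocephalus as an Evans' index of at least 0.3.
    return evans_index(frontal_horn_width_mm, inner_skull_diameter_mm) >= 0.3

# Hypothetical measurements (mm), for illustration only.
print(evans_index(42.0, 130.0))          # ~0.32
print(has_hydrocephalus(42.0, 130.0))    # True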
To evaluate the serial changes in the MRI parameters, follow-up MRIs at 2 weeks (window time of ± 3 days), at 10 weeks (window time of ± 2 weeks), and at 6 months (window time of 5 months to 10 months) were analyzed, if available. Statistical analysis SPSS 25.0 (SPSS Inc., Chicago, IL, USA) was used for the statistical analyses. Data were reported as numbers (percentages), means ± standard deviations, or medians [interquartile ranges, IQR]. In univariate analyses, Pearson's chi-square test and Student's t-test were used. To evaluate factors associated with a poor neurological outcome, logistic regression analyses were performed including the parameters with P < .10 in the univariate analyses, using a backward elimination method. Age was included in the final model of every regression analysis. Regression analyses were also performed separately for the subgroup without an encephalitis feature at baseline. Regression analysis for mortality was not performed due to its low frequency. Receiver operating characteristic (ROC) curves were drawn to evaluate the prognostic value of the factors derived from the regression analysis and to designate a cut-off value for predicting a poor outcome. For every analysis, a P value < .05 was considered statistically significant. Inter-reader reliability for the MRI parameters was evaluated using Cohen's κ values. Data Availability The datasets generated and/or analyzed during the current study are available from the corresponding author on request. Results The cumulative number of patients who achieved CSF antigen clearance at 10 weeks was 50 (65.8%). At 6 months, the median mRS score was 2 [0-4] and the mortality rate was 15 (19.7%). Detailed clinical, CSF, baseline MRI, treatment, and outcome profiles are described in Table 1. The numbers of patients with magnetic resonance imaging (MRI) evaluations included at each time point were 76, 40, 49, and 59 at baseline, 2 weeks, 10 weeks, and 6 months, respectively. In the univariate analysis, a poor 6-month outcome was associated with baseline encephalitis feature (P<.001), elevated CSF opening pressure (P<.001), the presence of periventricular extension (P=.001), cryptococcoma (P=.022), hydrocephalus (P<.001), and higher scores of GB-, CS-, and total ePVS (all P<.001) in the baseline MRI. Demographics, underlying immune status, and treatment profiles were comparable between the groups with poor or good outcomes (Table 1). Subsequently, logistic regression analysis indicated that the total ePVS score (odds ratio [OR]: 5.068, 95% CI: 1.627−15.785 for 1 score increment, P=.005) was independently associated with a poor 6-month outcome. The association of periventricular lesion extension was marginal (P=.056). When the total ePVS score was dichotomized, an ePVS score of ≥ 5 was associated with a poor 6-month outcome. In the univariate analysis for the factors associated with 6-month mortality, 6-month mortality was associated with baseline encephalitis feature (P=.013), elevated CSF opening pressure (P=.032), periventricular lesion extension (P<.001), cryptococcoma (P=.013), and higher scores of GB-, CS-, and total ePVS (all P<.001) in the baseline MRI (Table 3). Due to low frequency (n=15), multivariate analysis for mortality was not performed. We compared the clinical profiles and the outcomes between the groups with or without baseline encephalitis feature.
The group with baseline encephalitis feature was associated with a higher frequency of HIV infection (P=.015), GCS score <15 (P<.001), elevated CSF opening pressure (P<.001), the presence of periventricular extension (P<.001), cryptococcoma (P=.015), and hydrocephalus (P=.021) in the baseline MRI, and with lower baseline mRS scores and higher scores of GB-, CS-, and total ePVS, compared to the group without encephalitis feature (all P<.001). The 6-month mRS score was also lower in the subgroup with baseline encephalitis feature (P<.001, Supplemental Table 2). The risk score for a poor 6-month outcome was calculated by summing up the number of factors associated with poor outcomes (encephalitis feature, ePVS score ≥ 5, and periventricular lesion extension), giving a score range of 0−3. In the ROC curve analysis for the total study population, the risk score predicted a poor 6-month outcome with an area under the curve (AUC) of 0.978 (95% CI: 0.950−1.000, P<.001) and 6-month mortality with an AUC of 0.836 (95% CI: 0.745−0.927, P<.001, Fig 2A and 2B). A risk score of ≥ 2 predicted a poor 6-month outcome with a sensitivity of 94.1% and a specificity of 95.2%, and 6-month mortality with a sensitivity of 93.3% and a specificity of 67.2%. For the subgroup without baseline encephalitis feature, the risk score predicted a poor 6-month outcome with an AUC of 0.952 (95% CI: 0.896−1.000, P<.001) and 6-month mortality with an AUC of 0.870 (95% CI: 0.764−0.978, P=.003, Fig 2C and 2D). In this subgroup, a risk score of ≥ 2 predicted a poor 6-month outcome with a sensitivity of 85.7% and a specificity of 95.0%, and 6-month mortality with a sensitivity of 83.3% and a specificity of 81.2%. Two-week follow-up MRI evaluation data were available for 40 (59.7%) patients, 10-week follow-up MRI for 49 (64.5%), and 6-month follow-up MRI for 59 (77.6%) patients. The median number of MRI evaluations analyzed per patient was 3. When the serial changes in the MRI parameters were analyzed in association with the changes in mRS scores, the subgroup with a baseline ePVS score ≥ 5 showed gradual deterioration in the mRS score along with a progressive increase in the frequency of cryptococcoma and hydrocephalus (Fig 3A), whereas the subgroup with a baseline ePVS score < 5 showed gradual improvement in the mRS score and maintained a low frequency of periventricular lesion extension, cryptococcoma, and hydrocephalus in the follow-up MRIs (Fig 3B). A similar trend was observed in the subgroup without baseline encephalitis feature (Fig 3C and 3D; see Fig 4 for representative cases). The profiles of the MRI parameters between the groups evaluated using 1.5-T or 3.0-T MRI machines were comparable (Supplemental Table 4). Discussion This study demonstrated clinical and MRI parameters associated with the progression and the poor outcomes of Cryptococcus meningoencephalitis. Along with the baseline encephalitis feature, a high ePVS score and periventricular lesion extension were independently associated with poor outcomes. In particular, the presence of two or more risk factors at baseline was strongly associated with poor neurological outcomes and mortality, indicating that these might serve as prognostic markers. Given that the association was still valid for the subgroups without baseline encephalitis feature, these prognostic factors not only reflect the disease severity but also predict the risk of progression.
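The risk score and its ROC evaluation described above can be expressed compactly in code. The sketch below uses scikit-learn on a small synthetic cohort, since the patient-level data are not available in this text; it only illustrates how a 0-3 count of baseline risk factors could be scored against a binary 6-month outcome and how a cut-off of ≥ 2 factors could be assessed.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Synthetic illustration only: columns are (encephalitis feature, ePVS >= 5, periventricular
# extension), and y marks a poor 6-month outcome (mRS > 2). These are NOT the study data.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(76, 3))
risk_score = X.sum(axis=1)                                    # 0-3, one point per risk factor
y = (risk_score + rng.normal(0, 0.8, 76) > 1.5).astype(int)   # synthetic outcome labels

auc = roc_auc_score(y, risk_score)
fpr, tpr, thresholds = roc_curve(y, risk_score)
print(f"AUC = {auc:.3f}")

# Sensitivity/specificity at a cut-off of >= 2 risk factors, as reported in the study.
pred = (risk_score >= 2).astype(int)
sens = (pred & y).sum() / y.sum()
spec = ((1 - pred) & (1 - y)).sum() / (1 - y).sum()
print(f"cut-off >= 2: sensitivity = {sens:.2f}, specificity = {spec:.2f}")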
Additionally, neurological deterioration manifested in the brain MRI data as development of cryptococcoma and progression of hydrocephalus, which suggests that these MRI markers can also be used for monitoring the progression of disease. According to a large prospective study of 501 patients with HIV infection, high fungal burden in the CSF, altered mental status, old age, high peripheral WBC count, low body weight, anemia, and low CSF opening pressure were associated with 10-week mortality. 1 Further, combination treatment with amphotericin and flucytosine during the induction period reduced 10-week mortality, while fluconazole-based induction treatment was associated with higher mortality. 3 However, these studies mainly focused on mortality or fungal clearance in the CSF, while the neurological outcome of Cryptococcus meningoencephalitis has not been investigated in depth. In this regard, this is the first study to describe the dynamic neurologic course of the disease and demonstrate early accessible MRI factors that are useful to predict or monitor the neurological outcomes. Notably, the outcome predictors in the current study reflect the distinct pathomechanism underlying the progression of Cryptococcus meningoencephalitis and can therefore be related to the previously reported prognostic factors for mortality. High fungal antigen titer and larger fungus capsule size might also contribute to a mechanical stagnation of CSF flow, especially in the peri-venular space, which has a small diameter. 1,9,10 Therefore, these factors can be correlated with the enlarged PVS in the baseline MRI, which reflects the degree of CSF stasis. Altered mental status is a factor constituting a baseline encephalitis feature and is also related to MRI indicators of the parenchymal invasion of the Cryptococcus, such as periventricular lesion extension and cryptococcoma. 1 Increased ICP might also be the consequence of CSF recirculation failure resulting from the wide-spread Cryptococcus accumulation over the perivascular space and manifest as the progression of hydrocephalus in MRI. 8,9 In addition to outcome prediction and disease monitoring, the findings of the current study can also be useful for risk estimation and for deciding the treatment strategy. For the patients with ≥ 2 baseline risk factors, a broader combination or a higher dose of anti-fungal treatment could be used to prevent deterioration. [19][20][21] Additionally, frequent follow-up brain imaging to monitor progression might be beneficial for timely detection and early intervention to lower the ICP or manage other neurological complications. The current study has several limitations. First, as a retrospective study, the number and the interval of CSF and MRI evaluations and the treatment regimens were heterogeneous and not standardized. Second, restricting the study population to those with baseline MRI evaluation might bear a potential source of selection bias, as this criterion might exclude patients with severe or unstable baseline clinical status. The different subpopulations included at each time point of the serial MRI data analysis also warrant a careful interpretation of the results. Third, this study included both 1.5-T and 3.0-T machines, although baseline and follow-up images were obtained from the same scanner and the MRI parameter profiles between the groups with different MRI field strengths were comparable. Fourth, the rate of fungal clearance in CSF, one of the major parameters for determining treatment outcome, was not analyzed in this study. 22
Additionally, careful discrimination of ePVS and periventricular lesion extension from aging-related cerebral small-vessel disease is warranted, although their associations with outcomes were significant after adjusting for age. Tables Table 1. Comparison of the clinical, laboratory, and treatment profiles between the groups with or without poor outcomes.
2020-12-17T09:07:19.489Z
2020-12-10T00:00:00.000
{ "year": 2020, "sha1": "bd1d32ea521978da7eea84614574605703b3a4d1", "oa_license": "CCBY", "oa_url": "https://www.researchsquare.com/article/rs-120871/v1.pdf?c=1631880872000", "oa_status": "GREEN", "pdf_src": "MergedPDFExtraction", "pdf_hash": "d6b530eb54b96e2bc2b7c2fd8f834bc8d43ff260", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
203692080
pes2o/s2orc
v3-fos-license
Use of PEGylated Recombinant Human Growth Hormone in Chinese Children with Growth Hormone Deficiency: A 24-Month Follow-Up Study Objective Once-weekly PEGylated recombinant human growth hormone (rhGH) is the sole long-acting GH formulation available currently for pediatric patients with GH deficiency (GHD). The aim of this study was to evaluate the efficacy and safety of PEGylated rhGH therapy compared to daily rhGH therapy in GHD children treated for two years. Methods A total of 98 children (49 children for the PEGylated rhGH group and 49 children for the daily rhGH group) with GHD were enrolled in this single-center, prospective, nonrandomized cohort study. PEGylated rhGH or daily rhGH was administered for 2 years. Height, height SDS, height velocity (HV), IGF-1, bone age (BA), and adverse events were determined throughout the treatment. Results HV significantly increased over the baseline and was similar in both groups. In the PEGylated rhGH cohort, the mean ± SD HV was improved from 3.78 ± 0.78 cm/y at the baseline to 12.44 ± 3.80 cm/y at month 3, to 11.50 ± 3.01 cm/y at month 6, to 11.00 ± 2.32 cm/y at month 12, and finally 10.08 ± 2.12 cm/y at month 24 in the PEGylated rhGH group. In the daily rhGH group, HV was 3.36 ± 1.00 cm/y at baseline, increasing to 12.56 ± 3.71 cm/y at month 3, to 11.82 ± 2.63 cm/y at month 6, to 10.46 ± 1.78 cm/y at month 12, and to 9.28 ± 1.22 cm/y at month 24. No serious adverse event related to PEGylated rhGH or daily rhGH occurred during the 24-month study. Conclusion PEGylated rhGH replacement therapy is effective and safe in pediatric patients with GHD. The adherence to once-weekly PEGylated rhGH therapy is superior to daily rhGH in children with GHD. Introduction Pediatric patients with growth hormone deficiency (GHD) entail years of injections with recombinant human growth hormone (rhGH). As the rhGH's serum half-life is short, only 3.4 hours after subcutaneous (SC) injection and 0.36 hours after intravenous (IV) injection [1], the most widely used regimen is daily SC injections, which can be distressing and inconvenient for some patients and can lead to poor compliance and dissatisfying treatment outcomes. In teenagers, approximately 23% omit two or more injections per week [2]. Several long-acting formulations via different pharmacological strategies (e.g., sustainedrelease preparations, prolonged half-life derivatives, and new injectors such as electronic injection pen) have been studied in pediatric GHD patients, with the hope of improved compliance, and without adverse effects [3]. PEGylation, a pharmacological technology to prolong the serum half-life of therapeutic proteins through covalent modification of proteins with polyethylene glycol (PEG), is deemed as one of the most successful techniques to increase solubility and physical and chemical stability and in concert with avoidance of toxicity and immunogenicity [4]. In China, once-weekly PEGylated rhGH is the sole long-acting GH formulation available currently for pediatric patients with GHD. By conjugating a branched PEG molecule to amino groups of rhGH, thereby increasing the hydrodynamic size of GH and reducing renal clearance, the circulating hormone has a prolonged duration of action. Phase II and III multicenter, randomized studies from 6 hospitals in China confirmed that PEG-rhGH at a dose of 0.2 mg/kg/week is effective and safe for children with GHD during 25 week treatment [5]. 
In the phase II and III study, significantly greater improvement in the height standard deviation scores was associated with PEG-rhGH throughout the treatment. In summary, we evaluated the longer-term efficacy and safety of PEGylated rhGH for two-year treatment of GH-deficient children. Patients. The eligible patients (61 males and 37 females, aged from 3 to 13 years) were prepubertal (Tanner stage 1), with the diagnosis of GHD as determined by the following inclusion criteria: (1) short stature with height standard deviation score (HtSDS) <−2 based on the Chinese general population standard for age, or <3rd percentile for chronological age (CA), or height velocity (HV) <5 cm/year; (2) peak GH concentrations less than 10 ng/dl in response to two pharmacological agents on two separate days (arginine 0.5 g/kg, maximum dose of 30 g, and levodopa 10 mg/kg, maximum dose of 500 mg); (3) bone age (BA) less than 10 years for girls and 12 years for boys, with a minimum of 1 year delay compared to the CA. All patients (n = 98) were diagnosed with isolated GHD. Hypothalamic-pituitary magnetic resonance imaging (MRI) was performed to exclude masses or congenital malformations. Prior to GH treatment, written informed consent from a parent or legal guardian was signed. Patients were excluded if they had growth failure related to other causes, such as diabetes mellitus, impaired fasting glucose, tumors, congenital skeletal abnormalities, congenital heart disease, chronic illness, confirmed diagnosis of an eponymous syndrome (e.g., Turner, Noonan, Prader-Willi, and Russell-Silver), or poorly controlled MPHD. Every patient received relevant extensive education and training on the use and basic pharmacokinetics of rhGH before the initiation of rhGH treatment. 2.2. Treatments. 49 patients received PEGylated rhGH (GeneScience Pharmaceuticals, Changchun, People's Republic of China) at a once-weekly dose of 0.2 mg/kg/week. In contrast, 49 patients received daily SC rhGH injections (Jintropin AQ, GeneScience Pharmaceuticals) at a dose of 0.30 mg/kg/week. These two kinds of rhGH were self-paid by the parents. No insurance coverage was available for these children. The treatment and follow-up were performed in Shandong Provincial Hospital Affiliated to Shandong University between May 1, 2012 and June 30, 2018. The protocol of this study was approved by the Ethics Committee of Shandong Provincial Hospital (Jinan, China). Study Measurements. All patients were assessed at the baseline and 3, 6, 12, and 24 months after initiation of treatment. At baseline and each interval, height and weight were measured by the same auxologist. HV, HtSDS, and body mass index (BMI) were calculated. Blood samples were obtained for insulin-like growth factor 1 (IGF-1), thyroid function (free thyroxine FT4, free triiodothyronine FT3, and thyroid-stimulating hormone TSH), adrenocorticotropic hormone (ACTH), cortisol, glucose metabolism (glycated hemoglobin HbA1c and fasting blood glucose), renal function (urea nitrogen BUN and creatinine), and complete blood count. All these blood samples were obtained after an overnight fast. Serum concentrations of GH and IGF-1 were measured using a chemiluminescence assay (Immulite 2000; Siemens Health Care Diagnostics, USA). Intra-assay and interassay coefficients of variation (CV) declared by the manufacturer were 2.5% and 7.5%, respectively, for IGF-1 measurement. Intra-assay and interassay CVs for GH concentration are 4.5% and 5.8%, respectively. BA was determined from radiographs using the TW3 method [6].
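The auxological endpoints described above (height velocity and the height SDS) are simple derived quantities. A minimal sketch of how they could be computed between two visits follows; the reference mean and SD values are placeholders, since the Chinese population reference tables used in the study are not given in this text.

from datetime import date

def height_velocity_cm_per_year(h1_cm: float, d1: date, h2_cm: float, d2: date) -> float:
    """Annualized height velocity between two visits."""
    years = (d2 - d1).days / 365.25
    return (h2_cm - h1_cm) / years

def height_sds(height_cm: float, ref_mean_cm: float, ref_sd_cm: float) -> float:
    """Height standard deviation score against an age- and sex-specific reference."""
    return (height_cm - ref_mean_cm) / ref_sd_cm

# Placeholder values for illustration; the study used Chinese population standards.
print(height_velocity_cm_per_year(110.0, date(2013, 1, 10), 121.5, date(2014, 1, 10)))
print(height_sds(110.0, ref_mean_cm=122.0, ref_sd_cm=5.2))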
In addition, injection-site reactions and tolerability were monitored to assess the possible side effects of GH treatment. All of the children completed hypothalamic-pituitary MRI, performed on a 3.0 T magnetic resonance scanner (Siemens, Germany). Pituitary height was measured and compared with normal values for the corresponding age [7]. We transformed IGF-1 into IGF-1 standard deviation scores based on normative values from a normal population [8]. The data are presented as mean ± SD, and 95% confidence intervals were used where indicated. The changes in HV, HtSDS, BA, IGF SDS [8], and blood measurements were compared using the paired t test. Independent t tests were used to compare indices between the two groups. Multiple regression analysis was applied to analyze the relationship between 12-month HV and the baseline characteristics. Patients. A total of 98 patients were enrolled (PEGylated rhGH group, n = 49, or daily rhGH group, n = 49) and received GH treatment. The study was not randomized: parents often decided which GH therapy they preferred, and notably, the PEGylated rhGH preparation is more expensive. All patients completed 12 months of treatment (PEGylated rhGH group, n = 49, vs daily rhGH group, n = 49), and 85 (87%) patients completed 24 months (PEGylated rhGH group, n = 38, vs daily rhGH group, n = 47). In the PEGylated group, all 38 patients remained on PEGylated rhGH throughout. Five patients of the PEGylated rhGH group switched to daily rhGH in view of the high price of PEGylated rhGH. Two patients were lost to follow-up after the 12-month visit due to concerns about the safety of rhGH. Four patients were still in follow-up and had not yet reached the 24-month milestone. In the daily rhGH group, 2 patients were lost to contact after 12 months of daily rhGH treatment. Baseline Characteristics. The baseline characteristics of the enrolled patients are summarized in Table 1. There was no statistically significant difference between groups for age, height, IGF-1 SDS, and BMI at entry. All of the patients were preadolescent, and the bone age (BA)/chronological age (CA) ratio indicated retardation of bone maturation. Analysis of discrete variables in the univariate analysis showed no significant relationship between 12-month HV and sex. Moreover, analysis of continuous baseline characteristics found that pretherapy HV, BA, BMI, and height of the hypophysis each had no relationship with 12-month HV in the multiple regression analysis (p values were 0.1825, 0.0069, 0.2942, and 0.3104, respectively). HV depended negatively on age and on the maximum stimulated serum GH concentration (p values were 0.0065 and 0.0368, respectively) (Figures 2 and 3). During the period of PEGylated rhGH or daily rhGH treatment, HtSDS increased gradually in all patients with the passage of treatment time. The respective HtSDS values at each visit are shown in Figure 4. Notably, at each visit milestone, there was no statistical difference between the two treatment groups. The pattern of change in mean HtSDS was consistent with catch-up growth. The mean change in HtSDS from the baseline to 24 months was significant in both groups (p ≤ 0.001). There was no difference in bone advancement between the two groups following 24 months of treatment. Bone Age. The mean change (SD) from baseline to 12 months in bone age was 1.08 ± 0.48 years in the PEGylated rhGH group (n = 49) and 1.11 ± 0.49 years in the daily rhGH group (n = 49). There was no statistical difference between these two groups (p = 0.92).
The increase in BA from the baseline to 24 months was 1.96 ± 0.73 years in the PEGylated rhGH group (n = 38) and 2.27 ± 0.93 years in the daily rhGH group (n = 47). There was also no statistical difference in BA during the two-year treatment (p = 0.103). Safety. Adverse events in the PEGylated group were the same as those in the daily rhGH group. There have been no confirmed cases of type 1 or type 2 diabetes mellitus with either rhGH treatment. The glucose level of the PEGylated rhGH group was 4.75 ± 0.69 mmol/L at the baseline, 5.14 ± 0.43 mmol/L at month 12, and 5.13 ± 0.52 mmol/L at month 24. For the daily rhGH group, the glucose level was 4.84 ± 0.54 mmol/L at the baseline, 5.29 ± 0.43 mmol/L at month 12, and 5.26 ± 0.43 mmol/L at month 24. There was no significant difference during PEGylated rhGH treatment at month 12 (p = 0.106) or month 24 (p = 0.310), nor in the change of HbA1c (p = 0.310 at month 12, p = 0.888 at month 24). Injection-site lipoatrophy was not encountered in either group. No significant peripheral edema, headache, or injection-site reactions were noted. No severe adverse event was detected during the treatment in either of these two groups. Of note, 2 patients in the PEGylated group developed hypothyroidism vs 3 patients in the daily rhGH group. Administration of low-dose L-thyroxine normalized thyroid function. All of the side effects in both groups of patients are listed in Table 2. Discussion In order to increase adherence to growth hormone therapy, several formulations have been developed in previous studies. PEGylation reduces renal clearance. After the discontinuation of the early PEGylated rhGH preparations PHA-794438 and NNC126-0083, Jintrolong ® is currently the only commercially available PEGylated rhGH [3]. The results of our analysis demonstrated that Jintrolong ® was effective, well-tolerated, and convenient in Chinese children with growth hormone deficiency. There was no difference in the augmentation of HV between the PEGylated rhGH group and the daily rhGH group at each visiting time. As previously reported, HV was not related to sex, BA, pretherapy HV, height of the hypophysis, or BMI. The growth rate is negatively correlated with age and peak GH levels. Replacement rhGH therapy in GHD children at an early age is more likely to attain catch-up growth and a normalization of adult height [9]. GHD children will improve their chances of achieving their genetic height potential because of early rhGH therapy [10]. During the first year of treatment, BA advanced nearly one year in both groups, indicating no undue advancement of skeletal maturation. Bone maturation and height progression were also parallel during the two-year treatment. An early PEGylated rhGH preparation named NNC126-0083 was discontinued due to its unsatisfactory once-weekly IGF-1 profile [11]. Using Jintrolong ® as a new kind of PEGylated rhGH, the IGF-1 level increased steadily [12]. Our study demonstrated that the concentration of IGF-1 in both the PEGylated rhGH group and the daily rhGH group reached the upper normal range during the first 6 months. Some guidelines also recommend using the IGF-1 level to adjust the dose of rhGH [13]. The serum IGF-1 level can also be used to assess adherence to rhGH injections [14]. It remains contestable whether high serum IGF-1 concentrations result in better height outcomes or long-term risks [15]. Our follow-up investigation also showed that weekly PEGylated rhGH had better safety than daily rhGH.
For example, values of fasting glucose and glycosylated hemoglobin remained unchanged from the baseline to 12 months and 24 months in both the PEGylated rhGH group and the daily rhGH group. No patients developed diabetes. Several studies have reported a link between the development of diabetes and rhGH treatment [16]. GH may impact glucose homeostasis through negative direct and indirect effects on insulin sensitivity. However, other studies have not established the same relationship. Baronio et al. followed 99 GHD children for a median period of 6 years, and no deterioration in glucose homeostasis was found [17]. In terms of glucose homeostasis, this study confirmed the safety of GH treatment in GHD children and affirmed that regular glucose tolerance tests were unnecessary. In our trial, the common adverse events were transient, mild, and consistent with safety events reported in the labels for rhGH products. Mild, easily treated hypothyroidism was found in 5 patients (3 in the PEGylated group and 2 in the daily group). In all these 5 patients in both groups, we observed a decrease in the serum FT4 level and no change in the serum TSH level after rhGH administration. We were inclined to classify this as central hypothyroidism. Different mechanisms have been suggested to explain the relationship between GH and thyroid function. One mechanism involves an inhibition of TSH release via an increased somatostatinergic tone or by a T3 negative feedback mechanism within the thyrotropes due to increased T3 production from T4 deiodination at the central level. The other mechanism suggests an increase in extrathyroidal conversion of T4 to triiodothyronine (T3) at the peripheral level, chiefly mediated directly by GH or through IGF-1; this can also be seen as a reduction of reverse-T3 (rT3) and/or an increase in the T3/T4 ratio during rhGH therapy [18][19][20]. As for long-term safety, a cohort comprising 23,984 patients treated with rhGH in 8 European countries since its introduction in 1989 did not find evidence that rhGH therapy affects the risk of cancer incidence or mortality [21]. The latest data from the GeNeSIS observational program also found no increase in the risk of mortality when comparing rhGH-treated children with children in the general population [22]. In our study, none of the patients reported injection-site lipoatrophy after 24 months of PEGylated rhGH injection. A previous study reported a total of 13 cases of injection-site lipoatrophy among 105 subjects with GHD treated with the PEGylated rhGH PHA-794428 [23]. All lipoatrophy lesions in these cases resolved in 8-12 weeks. Acquired lipoatrophy is generally localized and refers to a limited, well-circumscribed subcutaneous depression matching an area of fat loss. Long-acting rhGH preparations, which have different pharmacokinetic and pharmacodynamic profiles compared to daily rhGH, should remain under surveillance during and in the years after treatment, and even into old age in those who continue therapy [24]. In conclusion, PEGylated rhGH will play a more important role in GHD children for long-term rhGH replacement. It will increase convenience and compliance in GHD children without increasing safety concerns. Meanwhile, the therapeutic improvements with PEGylated rhGH were similar to those of daily rhGH. Weekly injection will reduce children's needle-related fear. Early rhGH treatment has been proven to improve HV, final adult height, and the psychosocial development of pediatric patients.
The long-term efficacy of PEGylated rhGH in children with GHD needs to be explored further to assess its potential superiority. Data Availability The data used to support the findings of this study are included within the article. All of the data in this study were collected using the medical record query system of Shandong Provincial Hospital Affiliated to Shandong University and by enquiring with the parents of GHD children in clinic. Ethical Approval All procedures performed in studies involving human participants were in accordance with the ethical standards of Shandong Provincial Hospital Affiliated to Shandong University and with the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. Consent Informed consent was obtained from all individual participants included in the study. Conflicts of Interest All authors declare that they have no conflicts of interest. Authors' Contributions Guimei Li conceptualized the study and was responsible for conceptualization, methodology, writing review, and editing. Yu Qiao was responsible for data curation, formal analysis, writing the original draft, review, and editing. Zengmin Wang was responsible for formal analysis and investigation. Jinyan Han was responsible for investigation and methodology.
2019-09-26T09:05:11.205Z
2019-09-19T00:00:00.000
{ "year": 2019, "sha1": "f4d8f7d3dfebc9289bee1aa2ed34eb05a5ad88c6", "oa_license": "CCBY", "oa_url": "https://downloads.hindawi.com/journals/ije/2019/1438723.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "9df63f0a25ab70a3c014e1f33afae4101cd74de8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
761645
pes2o/s2orc
v3-fos-license
Results from transcranial Doppler examination on children and adolescents with sickle cell disease and correlation between the time-averaged maximum mean velocity and hematological characteristics: a cross-sectional analytical study ABSTRACT CONTEXT AND OBJECTIVE: Transcranial Doppler (TCD) detects stroke risk among children with sickle cell anemia (SCA). Our aim was to evaluate TCD findings in patients with different sickle cell disease (SCD) genotypes and correlate the time-averaged maximum mean (TAMM) velocity with hematological characteristics. DESIGN AND SETTING: Cross-sectional analytical study in the Pediatric Hematology sector, Universidade Federal de São Paulo. METHODS: 85 SCD patients of both sexes, aged 2-18 years, were evaluated, divided into: group I (62 patients with SCA/Sß0 thalassemia); and group II (23 patients with SC hemoglobinopathy/Sß+ thalassemia). TCD was performed and reviewed by a single investigator using Doppler ultrasonography with a 2 MHz transducer, in accordance with the Stroke Prevention Trial in Sickle Cell Anemia (STOP) protocol. The hematological parameters evaluated were: hematocrit, hemoglobin, reticulocytes, leukocytes, platelets and fetal hemoglobin. Univariate analysis was performed and Pearson's coefficient was calculated for hematological parameters and TAMM velocities (P < 0.05). RESULTS: TAMM velocities were 137 ± 28 and 103 ± 19 cm/s in groups I and II, respectively, and correlated negatively with hematocrit and hemoglobin in group I. There was one abnormal result (1.6%) and five conditional results (8.1%) in group I. All results were normal in group II. Middle cerebral arteries were the only vessels affected. CONCLUSION: There was a low prevalence of abnormal Doppler results in patients with sickle-cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics. INTRODUCTION Sickle cell disease is a hemoglobinopathy characterized by the presence of more than 50% hemoglobin S on hemoglobin electrophoresis. The homozygote SS form is known as sickle cell anemia. Hemoglobin S can occur in association with other anomalous types of hemoglobin, and among these are hemoglobin C and thalassemia beta, thus forming different genotypes of sickle cell disease: hemoglobin SC disease and S-beta thalassemia. 1 It has been estimated that more than seven million people have hemoglobin S in Brazil, and that more than 25,000 to 30,000 people have the homozygote form, with more than 3,500 new cases born every year. 1,2 In hemoglobin S, a normal codon (GAG) of the beta globin gene is replaced by another (GTG), thus resulting in exchange of the sixth amino acid of the beta globin. This exchange of glutamic acid for valine causes polymerization of hemoglobin S when it is exposed to media with low oxygen tension, which leads to sickling of red blood cells. This vaso-occlusive process is responsible for most of the clinical manifestations of sickle cell disease. 3,4 Stroke is one of the most feared clinical complications of this disease because of its high morbidity-mortality. 5 In children and adolescents, the most common form is ischemic stroke, and around 5-10% of patients with sickle cell anemia present stroke by the age of 20 years.
3,5,6 The physiopathology of ischemic stroke differs from the mechanisms of the other complications of the disease. In autopsy studies, arteriography and other diagnostic methods, it has been observed that ischemic stroke in cases of sickle cell anemia is caused by hypertrophy of the intimal layer of the intracranial arteries, with proliferation of fibroblasts and smooth muscle of blood vessels. 3,7,8 The normal blood flow velocities obtained through transcranial Doppler are influenced by several factors, 9-14 of which the main three determinants are the difference in pressure gradient along the vessel, the vessel length and cross-sectional area (caliber), and the blood viscosity. 15-20 Adams et al. conducted several studies using transcranial Doppler among individuals with sickle cell anemia to demonstrate the usefulness of this examination for evaluating stroke risk. They defined normal blood flow velocity as values of up to 170 cm/s, intermediate values as 170 to 200 cm/s and values greater than 200 cm/s as critical, with a high risk of developing stroke. 8,18,19,21 Among these patients, those with abnormal results from two Doppler examinations received a transfusion for primary stroke prevention. 9 Blood transfusion was used because it is highly effective in reducing the risk of recurrent stroke in sickle cell anemia. 7 The clinical picture of sickle cell disease is very variable, and hematocrit levels, leukocyte counts and hemoglobin S and fetal hemoglobin percentages correlate with the clinical severity of these patients' conditions. 22 In this light, and given that these parameters have been little studied in relation to other genotypes of sickle cell disease, our aim in this study was to evaluate the results from transcranial Doppler among patients with sickle cell disease who were being followed at the Pediatric Hematology Outpatient Clinic of Universidade Federal de São Paulo (Unifesp), correlating the time-averaged maximum mean velocity obtained through transcranial Doppler with the different genotypes of sickle cell disease and the hematological characteristics. OBJECTIVE To evaluate the results from transcranial Doppler among patients with sickle cell disease and correlate the time-averaged maximum mean velocity obtained through transcranial Doppler with the different genotypes of sickle cell disease and the hematological characteristics. METHODS This was a single-center cross-sectional study on all patients with sickle cell disease between the ages of 2 and 18 years who came to the Pediatric Hematology Outpatient Clinic of Unifesp in 2004 and 2005. The inclusion criteria were that the patients needed to have no previous clinical diagnosis of stroke and no acute crises. Patients were excluded if they did not cooperate with the transcranial Doppler examination, if they did not agree to undergo the examination, if they were using hydroxyurea or if they had received red blood cell transfusions over the preceding three months. This study was approved by the Research Ethics Committee of Unifesp (no. 149/03). An informed consent statement was obtained from all the adults responsible for the patients participating in the study. All the transcranial Doppler examinations were performed by the same professional, who had been trained to perform transcranial Doppler (Silva GS), using the Nicolet EME-Companion TC2000 apparatus with a 2 MHz transducer and following the criteria of the STOP (Stroke Prevention Trial in Sickle Cell Anemia) study. 9
From this examination, the time-averaged maximum mean velocities were determined every 2 mm along the following arteries: bilateral internal carotid arteries, bilateral anterior cerebral arteries, anterior-middle cerebral artery bifurcations, middle cerebral arteries, bilateral posterior cerebral arteries, bilateral vertebral arteries and the basilar artery. The highest value from the right or left middle cerebral arteries, bilateral internal carotid arteries or anterior-middle cerebral artery bifurcations was taken as the time-averaged maximum mean velocity for each patient and was used in the data analysis. When the time-averaged maximum mean velocity result was conditional or abnormal, the examinations were repeated until two consecutive abnormal results were obtained. The blood tests performed were total hemoglobin and hematocrit assays; leukocyte and platelet counts, performed in an electronic counter (Coulter model Ssr or Coulter model T-890); hemoglobin S assay by means of hemoglobin electrophoresis on cellulose acetate with tris-EDTA-borate buffer (pH 8.6); and fetal hemoglobin assay, quantified by means of alkaline denaturation. The patients were divided into two groups, in accordance with the clinical and hematological differences already recognized in the literature. One group was composed of patients with the homozygote form and Sβ0 thalassemia (group I), and the other was composed of patients with SC hemoglobinopathy and Sβ+ thalassemia (group II). The results were described through calculations of means and standard deviations for variables with normal distribution. Hematological parameters predictive of time-averaged maximum mean velocities were investigated by means of univariate analysis. Pearson's coefficient was calculated to evaluate correlations between the time-averaged maximum mean velocity and the hemoglobin S percentage, fetal hemoglobin percentage, hemoglobin level, hematocrit percentage, platelet count, leukocyte count and reticulocyte percentage. The Statistical Package for the Social Sciences (SPSS) 10.0 software (SPSS Inc., Chicago, United States) was used for the statistical analyses. P values < 0.05 were taken to be statistically significant. RESULTS By the end of the study period, 85 patients with sickle cell disease had been evaluated. Group I was composed of 62 patients and group II was composed of 23 patients (Table 1). There were no statistically significant differences between the groups with regard to age and sex. We observed statistically significant differences between the two groups for all the hematological parameters (Table 2). Group I presented a negative correlation between the time-averaged maximum mean velocity and the hematocrit percentage and between the time-averaged maximum mean velocity and the hemoglobin level. Group II presented a positive correlation between the time-averaged maximum mean velocity and the reticulocyte count and a negative correlation between the time-averaged maximum mean velocity and the fetal hemoglobin percentage (Table 3). Only one patient in group I presented an abnormal velocity (1.6%). This patient had a conditional result (190 cm/s) in the first transcranial Doppler that was performed and, in the repetitions, presented abnormal velocity in two consecutive examinations (time-averaged maximum mean velocities of 220 cm/s and 240 cm/s). Five patients (8.1%) in group I presented conditional time-averaged maximum mean velocities (Table 4).
It was observed that the arteries affected in these patients were the right or left middle cerebral artery in all the cases, both in the patient with abnormal and in the patients with conditional time-averaged maximum mean velocities. DISCUSSION The use of transcranial Doppler has become an important tool for stroke prevention among patients with sickle cell anemia.Stroke occurs four times more often among sickle cell anemia patients than among patients with SC hemoglobinopathy or Sβ thalassemia. 5n the STOP study, which was carried out in 1998 and is the biggest study on transcranial Doppler in relation to patients with sickle cell anemia within the pediatric age group, the proportion of abnormal results was 9.7%. 9Thus, the results differed from those of the present study, in which only one patient with a time-averaged maximum mean velocity greater than 200 cm/s was found, representing 1.6%.][25] In Sergipe, in 2005, Melo et al. gathered data on 34 patients with sickle cell anemia, aged less than 18 years, and compared them with 80 controls.Among the results from the patients, none of them (0.0%) presented abnormal time-averaged maximum mean velocity and four (11.7%) presented conditional results. 24n São Paulo, Park et al. evaluated 77 patients with sickle cell disease, aged between 2 and 16 years.They found two patients (2.6%) with abnormal time-averaged maximum mean velocity and 11 (14.3%) with conditional results. 23n Minas Gerais, in 2008, Silva et al. evaluated 153 children with sickle cell anemia, aged 2 to 16 years.They found seven patients (4.6%) with abnormal time-averaged maximum mean velocity and 16 (10.4%)with conditional results. 25n 1999, Kinney et al. 22 investigated correlations of risk factors for stroke among patients with sickle cell anemia and found a positive correlation with the Senegal haplotype.In Brazil, the most common haplotype is Bantu, which may be a factor contributing towards this difference.][28][29][30] Group II did not present any abnormal or conditional result from transcranial Doppler, probably because of the lower severity of these patients' condition.So far, there are no data in the literature indicating screening for stroke risk using transcranial Doppler among other genotypes of sickle cell disease.In 1990, Adams et al. evaluated patients with sickle cell anemia and SC hemoglobinopathy, although only 10 SC hemoglobinopathy patients were included in the sample (190 patients) and they were analyzed together.In other studies, only patients with sickle cell anemia were included. 9,10,15,21[23][24][25] No comparison with the literature could be made with regard to the correlation that was found in group II between time-averaged maximum mean velocities and the reticulocyte percentages and fetal hemoglobin levels, since no such data is available in the literature. There was a statistically significant difference in the time-averaged maximum mean velocities between the groups, such that it was greater in group I.This was probably because of the lower hemoglobin levels and lower hematocrit percentages in this group (P < 0.01). Abnormalities were only observed in the middle cerebral artery, in both cerebral hemispheres, and these were found both in the patient with abnormal time-averaged maximum mean velocities and in the patients with conditional velocities.In 1992, Adams found that around 60% of the middle cerebral arteries were affected, and that the internal carotid arteries and anterior cerebral arteries were also affected. 
8,21 Our results emphasize the importance of performing transcranial Doppler on cases of sickle cell anemia. National studies in Brazil involving the use of transcranial Doppler on large numbers of patients with sickle cell disease are needed, given that some findings differ from data in the literature. It is still uncertain whether there is a determining factor that would explain such differences, or whether the sample size would explain these findings. Transcranial Doppler does not seem to be a priority examination in relation to the other genotypes of sickle cell disease. Longitudinal follow-up studies on patients with different sickle cell genotypes and their respective transcranial Doppler findings are needed to confirm this hypothesis. CONCLUSION There was a low prevalence of abnormal Doppler results in patients with sickle cell disease. Time-averaged maximum mean velocity was significantly different between the genotypes and correlated with hematological characteristics. Table 4. Frequencies of time-averaged maximum mean velocity result categories from transcranial Doppler on two groups of patients with sickle cell disease. TAMM = time-averaged maximum mean velocity; normal TAMM: < 170 cm/s; conditional TAMM: between 170 and 200 cm/s; abnormal TAMM: > 200 cm/s. Table 1. Demographic characteristics of the patients with sickle cell disease (groups I and II) who underwent transcranial Doppler examination. Table 2. Comparison between hematological parameters and time-averaged maximum mean velocity obtained using transcranial Doppler in groups I and II. Table 3. Pearson correlation coefficient between mean maximum velocity relating to transcranial Doppler and hematological parameters, for groups I and II. Pearson's correlation coefficient results were classified as follows: Negative value: inverse correlation; Positive value: positive correlation; None: 0.00; Weak: between 0.01 and 0.29; Medium: between 0.30 and 0.59; Strong: between 0.60 and 0.89; Very strong: between 0.90 and 0.99; and Perfect: 1.00. *Statistical significance: P < 0.01.
2017-05-31T13:56:18.124Z
2011-05-01T00:00:00.000
{ "year": 2011, "sha1": "029cbaf9dfa75332458c5cd8e515475671c12a70", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/spmj/a/Znvhg5MQnF9rCY67qwNXzVv/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "029cbaf9dfa75332458c5cd8e515475671c12a70", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
15618637
pes2o/s2orc
v3-fos-license
An answer to a question of Coleman on scattered sets The aim of this paper is to show that every scattered subset of a dense-in-itself semi-$T_D$-space is nowhere dense. We are thus able to answer a recent question of Coleman in the affirmative. In terms of Digital Topology, we prove that in semi-$T_D$-spaces with no open screen, trace spaces have no consolidations. Introduction It is well-known that in dense-in-themselves T 1 -spaces, all scattered subsets are nowhere dense. This result was established by Kuratowski in the proof that in T 1 -spaces the finite union of scattered subsets is scattered. In a recent paper Coleman asked the following question [1,Question 4]: Is it true that in dense-in-themselves, T D -spaces all scattered sets are nowhere dense? In what follows, we will show that even in dense-in-themselves semi-T D -spaces all α-scattered sets are nowhere dense. The question of Coleman is in fact very well motivated not only because it is interesting to know how low one can go on the separations below T 1 and still have the scattered sets being nowhere dense but also from a 'digital point of view'. In terms of Digital Topology, we will prove that in semi-T D -spaces with empty open screen, trace spaces have no consolidations. In Digital Topology several spaces that fail to be T 1 are very often important in the study of the geometric and topological properties of digital images [6,7]. Such is in fact the case with the major building block of the digital n-space -the digital line or the so called Khalimsky line. This is the set of the integers, Z, equipped with the topology K, generated by G K = {{2n − 1, 2n, 2n + 1}: n ∈ Z}. A fenestration [7] of a space X is a collection of disjoint nonempty open sets whose union is dense. The consolidation A + [7] of a set A is the interior of its closure. When there is a fenestration of a space (X, τ ) by singletons, the space (X, τ ) is called α-scattered [4] or a trace space [7]. For example, in the digital line the collection {{n} : n ∈ Z and n is odd} is a fenestration of (Z, K). All scattered sets are α-scattered by not vice versa [4]. In T 0 -spaces without isolated points, we may encounter a trace space which fails to be nowhere dense [1]. Nevertheless, as we will show, with the presence of the very weak separation 'semi-T D ', in spaces with no isolated points we have all α-scattered sets being nowhere dense. A topological space X is a called a T D -space if every singleton is locally closed or equivalently if the derived set d(x) is closed for every x ∈ X. Recall that X is a semi-T D -space if every singleton is open or semi-closed [3]. Recall that a subset A of a space (X, τ ) is called Note that every open and every dense set is locally dense. (1) X is a dense-in-itself semi-T D -space. (4) There are no locally dense singletons in X. Proof. (1) ⇒ (2) Let x ∈ X. Since X is a semi-T D -space, {x} is open or semi-closed [3]. Since X is dense-in-self, {x} is semi-closed. On the other hand, in any topological space every singleton is locally dense (= preopen) or nowhere dense [5]. If {x} is preopen, then it must be (due to semi-closedness) regular open. As X has no isolated points, we conclude that {x} is nowhere dense. (2) ⇒ (3) Obvious, since the ideal of nowhere dense sets is closed under finite additivity. (3) ⇒ (4) Follows easily from the fact that singletons are either locally dense or nowhere dense. (4) ⇒ (1) If some point x ∈ X were isolated, then it would be locally dense. This shows that X is dense-in-itself. 
That X is a semi-T D -space follows easily from the fact that nowhere dense sets are semi-closed. (ii) Let (X i , τ i ) i∈I be a family of topological spaces such that at least one of them is a dense-in-itself semi-T D -space. Then the product space X = ∏ i∈I X i is also dense-in-itself and semi-T D . Now, we can apply the result above in order to show that the α-scattered subsets of the density topology are precisely its Lebesgue null sets. Recall that, for a Lebesgue measurable set E ⊆ R and a point x ∈ R, the density of E at x is d(x, E) = lim h→0+ m(E ∩ [x − h, x + h])/(2h), whenever this limit exists. Set φ(E) = {x ∈ R: d(x, E) = 1}. The open sets of the density topology T are those measurable sets E that satisfy E ⊆ φ(E). Note that the density topology T is finer than the usual topology on the real line. Corollary 2.4 The trace spaces (i.e., the α-scattered subsets) of the density topology are precisely its Lebesgue null sets. Proof. Follows from Theorem 2.3 and the fact that a subset A of the real line is nowhere dense in the density topology if and only if it is a Lebesgue null set [8]. ✷
2014-10-01T00:00:00.000Z
1998-10-12T00:00:00.000
{ "year": 1998, "sha1": "c2334527b4f11a41bf17ea0117b4371db7811a97", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "d7dcdc84040ccc6137ee3b170af99e0010db2dd3", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
17229004
pes2o/s2orc
v3-fos-license
HCO mapping of the Horsehead : Tracing the illuminated dense molecular cloud surfaces Far-UV photons strongly affect the physical and chemical state of molecular gas in the vicinity of young massive stars. We have obtained maps of the HCO and H13CO+ ground state lines towards the Horsehead edge at 5'' angular resolution with a combination of IRAM PdBI and 30m observations. These maps have been complemented with IRAM-30m observations of several excited transitions at two different positions. Bright formyl radical emission delineates the illuminated edge of the nebula, with a faint emission remaining towards the shielded molecular core. Viewed from the illuminated star, the HCO emission almost coincides with the PAH and CCH emission. HCO reaches a similar abundance than HCO+ in the PDR (~1-2 x10^{-9} with respect to H2). Pure gas-phase chemistry models fail to reproduce the observed HCO abundance by ~2 orders of magnitude, except if reactions of OI with carbon radicals abundant in the PDR (i.e., CH2) play a significant role in the HCO formation. Alternatively, HCO could be produced in the PDR by non-thermal processes such as photo-processing of ice mantles and subsequent photo-desorption of either HCO or H2CO, and further gas phase photodissociation. The measured HCO/H13CO+ abundance ratio is large towards the PDR (~50), and much lower toward the gas shielded from FUV radiation (<1). We propose that high HCO abundances (>10^{-10}) together with large HCO/H13CO+ abundance ratios (>1) are sensitive diagnostics of the presence of active photochemistry induced by FUV radiation. Introduction Photodissociation region (PDR) models are used to understand the evolution of far-UV (FUV; hν <13.6 eV) illuminated matter both in our Galaxy and in external galaxies. These sophisticated models have been benchmarked recently (Röllig et al. 2007) and are continuously upgraded (e.g., Goicoechea & Le Bourlot 2007;González-García et al. 2008). Given the large number of physical and chemical processes included in such models, it is necessary to build reference data sets, which can be used to test the predictive accuracy of models. Our team has contributed to this goal by providing a series of high resolution interferometric observations of the Horsehead nebula (see Pety et al. 2007b, for a summary). Indeed, this source is particularly well suited because of its favorable orientation and geometry, and its moderate distance (∼400 pc; . We have previously studied the carbon (Teyssier et al. 2004; Send offprint requests to: e-mail: maryvonne.gerin@lra.ens.fr ⋆ Based on observations obtained with the IRAM Plateau de Bure interferometer and 30 m telescope. IRAM is supported by INSU/CNRS (France), MPG (Germany), and IGN (Spain). ⋆⋆ Current address: Departamento de Astrofísica. Universidad Complutense de Madrid, Spain. Pety et al. 2005) and sulfur chemistry (Goicoechea et al. 2006), and detected the presence of a cold dense core, with active deuterium fractionation (Pety et al. 2007a). The formyl radical, HCO, is known to be present in the interstellar medium since the late 70's (Snyder et al. 1976). Snyder et al. (1985) give a detailed description of the HCO structure and discuss the energy diagram for the lowest energy levels. HCO is a bent triatomic asymmetric top with an unpaired electron. a-type and b-type transitions are allowed, with a stronger dipole moment (1.36 Debye) for the a-type transitions (Landsberg et al. 1977), which are therefore more easily detectable. 
The strongest HCO ground state transitions lie at 86.670, 86.708, 86.777 and 86.805 GHz, very close to the ground state transition of H 13 CO + and to the first rotationally excited SiO line. Therefore HCO can be observed simultaneously with SiO and H 13 CO + . HCO ground state lines have been detected in the Orion Bar as well as in the dense PDRs NGC 2023, NGC 7023 andS 140 (Schilke et al. 2001). From limited mapping, they have shown that HCO is sharply peaked in the Orion Bar PDR, confirming earlier suggestions that HCO is a tracer of the cloud illuminated interfaces (de Jong et al. 1980). García-Burillo et al. (2002) have mapped HCO and H 13 CO + in the nearby galaxy M82. HCO, CO and the ionized gas present a Table 1. Observation parameters for the maps shown in Fig. 1 and 5 (Fig. 5 is available on-line only nested ring morphology, with the HCO peaks being located further out compared to CO and the ring of H ii regions. The chemistry of HCO is not well understood. Schilke et al. (2001) concluded that it is extremely difficult to understand the observed HCO abundance in PDRs with gas phase chemistry alone. As a possible way out, they tested the production of HCO by the photodissociation of formaldehyde. In this model, H 2 CO is produced in grain mantles, and released by non-thermal photodesorption in the gas phase in the PDR. However, even with this favorable hypothesis, the model can not reproduce the abundance and spatial distribution of HCO because the photoproduction is most efficient at an optical depth of a few magnitudes where the photodissociation becomes less effective. In this paper, we present maps of the formyl radical ground state lines at high angular resolution towards the Horsehead neb-ula, and the detection of higher energy level transitions towards two particular lines of sights, one in the PDR region and the other in the associated dense core. These observations enable us to accurately study the HCO spatial distribution and abundance. We present the observations and data reduction in section 2, while the results and HCO abundance are given in section 3, and the discussion of HCO chemistry and PDR modeling is given in section 4. Observations and data reduction Tables 1 and 2 summarize the observation parameters for the data obtained with the IRAM PdBI and 30m telescopes. The HCO ground state lines were observed simultaneously with H 13 CO + and SiO. Frequency-switched, on-the-fly maps of the H 13 CO + J=1-0 and HCO ground state lines (see Fig. 5), obtained at the IRAM-30m using the A100 and B100 3mm receivers (∼ 7 mm of water vapor) were used to produce the shortspacings needed to complement a 7-field mosaic acquired with the 6 PdBI antennae in the CD configuration (baseline lengths from 24 to 176 m). The whole PdBI data set will be comprehensively described in a forthcoming paper studying the fractional ionization across the Horsehead edge (Goicoechea et al. 2009, in prep.). The CCH data shown in Fig. 1 have been extensively described in Pety et al. 2005. The high resolution HCO 1 0,1 − 0 0,0 data are complemented by observations of the 2 0,2 − 1 0,1 and 3 0,3 − 2 0,2 multiplets with the IRAM 30m telescope centered on the PDR and the dense core. To obtain those deep integration spectra, we used the position switching observing mode. The on-off cycle duration was 1 minute and the off-position offsets were (δRA, δDec) = (−100 ′′ , 0 ′′ ), i.e. the H ii region ionized by σOri and free of molecular emission. 
Position accuracy is estimated to be about 3 ′′ for the 30m data and less than 0.5 ′′ for the PdBI data. The data processing was done with the GILDAS 1 softwares (Pety 2005b). The IRAM-30m data were first calibrated to the T * A scale using the chopper wheel method (Penzias & Burrus 1973), and finally converted to main beam temperatures (T mb ) using the forward and main beam efficiencies (F eff & B eff ) displayed in Table 2. The resulting amplitude accuracy is ∼ 10%. Frequency-switched spectra were folded using the standard shift-and-add method, after baseline subtraction. The resulting spectra were finally gridded through convolution by a Gaussian. Position-switched spectra were co-added before baseline subtraction. Interferometric data and short-spacing data were merged before imaging and deconvolution of the mosaic, using standard techniques of GILDAS (see e.g. Pety et al. 2005, for details). Most of the formyl radical emission is concentrated in a narrow structure, delineating the edge of the Horsehead nebula. Low level emission is however detected throughout the nebula, including towards the dense core identified by its strong DCO + and H 13 CO + emission (Pety et al. 2007a). The HCO emission is resolved by our PdBI observations. From 2-dimensional Gaussian fits of the image, we estimate that the emission width is ∼13±4 ′′ in the plane of the sky. The H 13 CO + emission shows a different pattern : most of the signal is associated with the dense core behind the photodissociation front, and faint H 13 CO + emission detected in the illuminated edge. The CCH emission pattern is less extreme than HCO, but shows a similar enhancement in the PDR. Spatial distribution In summary, the morphology of the HCO emission is reminiscent of the emission of the PDR tracers, either the PAH emission (Abergel et al. 2002) or the small hydrocarbons emission, which is strongly enhanced towards the PDR (Teyssier et al. 2004;Pety et al. 2005). By contrast, the HCO emission becomes strikingly faint where the gas is dense and shielded from FUV radiation. These regions are associated with bright DCO + and H 13 CO + emission (Pety et al. 2007a). Our maps therefore confirm that HCO is a PDR species. Column densities and abundances 3.2.1. Radiative transfer models of HCO and H 13 CO + Einstein coefficients and upper level energies of the studied HCO and H 13 CO + lines are given in Table 3. As no collisional crosssections with H 2 nor He have been calculated for HCO so far, we have computed the HCO column densities assuming a single excitation temperature T ex for all transitions. Nevertheless our calculation takes into account thermal, turbulent and opacity broadening as well as the cosmic microwave background and line opacity (Goicoechea et al. 2006). For H 13 CO + , detailed non-local and non-LTE excitation and radiative transfer calculations have been performed using the same approach as in our previous PdBI CS and C 18 O line analysis (see Appendix in Goicoechea et al. 2006). H 13 CO + -H 2 collisional rate coefficients were adapted from those of Flower (1999) for HCO + , and specific H 13 CO + -electron rates where kindly provided by Faure & Tennyson (in prep.). Structure of the PDR in HCO and H 13 CO + To get more insight on the spatial variation of the HCO and H 13 CO + column densities and abundances, we have analyzed a cut through the PDR, centered on the "HCO peak" at δy=0" (see Fig. 3). The cut clearly shows that HCO is brighter than H 13 CO + in the PDR and vice-versa in the dense core. 
Taking into account the different level degeneracies of both transitions (a factor 2.4) and the fact that the associated Einstein coefficients A i j differ by a factor ∼8 (due to the different permanent dipole moments, see Table 3), N(H 13 CO + ) must be significantly lower than N(HCO) towards the PDR. We modeled the PDR as an edge-on cloud inclined by ∼5 • relative to the line-of-sight. We have chosen a cloud depth of ∼0.1 pc, which implies an extinction of A V ≃20 mag for the considered densities towards the "HCO peak". These parameters are the best geometrical description of the Horsehead PDRedge (e.g., ) and also reproduce the observed 1.2 mm continuum emission intensity. The details of this modeling will be presented in Goicoechea et al. (2009). In the following, we describe in details the determination of the column densities and abundances for two particular positions, namely the "HCO peak" and the "DCO + peak" (offsets relative to the map center can be found in Table 2). HCO column densities We used the three detected rotational transitions of HCO (each with several hyperfine components, see Fig. 2) to estimate the HCO column densities in the direction of the "HCO" peak. We have taken into account the varying beam dilution factors of the HCO emission at the "HCO peak" by modeling the HCO emission as a Gaussian filament of ∼12 ′′ width in the δx direction, and infinite in the δy direction. The filling factors at 260, 173 and 87 GHz are thus ∼0.8, 0.6 and 0.4, respectively. A satisfactory fit of the IRAM-30m data towards the "HCO peak" is obtained for T ex ≃5 K and a turbulent velocity dispersion of σ=0.225 km s −1 (FWHM= 2.355 × σ). Line profiles are reproduced for N(HCO) = 3.2 × 10 13 cm −2 (see red solid curves in Fig. 2). The most intense HCO lines at 86.67 and 173.38 GHz become marginally optically thick at this column density (τ 1). Therefore, opacity corrections need to be taken into account. We checked that the low value of T ex (subthermal excitation as T k ≃60 K) is consistent with detailed excitation calculations carried for H 13 CO + in the PDR which are described below. H 13 CO + column densities Both the H 13 CO + J=3-2 and 1-0 line profiles at the "HCO peak" are fitted with n(H 2 ) ≃5×10 4 cm −3 , T k ≃60 K and e − /H ≃5×10 −5 (as predicted by the PDR models below). The required column density is N(H 13 CO + )=5.8×10 11 cm −2 . For those conditions, the excitation temperature, T ex , of the J=3-2 transition varies from ≃4 to 6 K, which supports the single-T ex models of HCO. Both H 13 CO + lines are optically thin towards the "HCO peak". The H 13 CO + line emission towards the "DCO + peak" has been studied by Pety et al. (2007a). Both H 13 CO + lines are moderately optically thick towards the core, and the H 13 CO + column density is N(H 13 CO + )≃ 5.0×10 12 cm −2 , which represents an enhancement of nearly one order of magnitude relative to the PDR. According to our 1.2 mm continuum map, the extinction towards the core is A V 30 mag compared to 20 mag in the PDR. The H 13 CO + column density enhancement therefore corresponds to a true abundance enhancement. Table 4 summarizes the inferred HCO and H 13 CO + column densities and abundances towards the 2 selected positions : the Fig. 2. IRAM-30m observations (histograms) of several HCO hyperfine components of the 1 01 -0 00 , 2 02 -1 01 and 3 03 -2 02 rotational transitions towards the PDR ("HCO peak") and towards the dense core ("DCO + peak") (Pety et al. 2007a). 
Solid lines are single-T ex radiative transfer models of the PDR-filament (red curves) and line-of-sight cloud surface (blue curves). A sketch of the HCO rotational energy levels is also shown (right corner). Comparison of HCO and H 13 CO + abundances "HCO peak" in the PDR and the "DCO + peak" in the FUVshielded core. Both species exhibit strong variations of their column densities and abundances relative to H 2 between the PDR and the shielded region. In the PDR, we found that both the HCO abundance relative to H 2 (χ(HCO)≃1-2 ×10 −9 ) and the HCO/H 13 CO + column density ratio (≈50) are high. These figures are higher than all previously published measurements (at lower angular resolution). Besides, the formyl radical and HCO + reach similar abundances in the PDR. The situation is reversed towards the "DCO + peak", i.e. the observed HCO/H 13 CO + column density ratio is lower (≈1) than towards the "HCO peak" . Nevertheless, while the bulk of the observed H 13 CO + emission arises from cold and shielded gas, the origin of HCO emission is less clear. HCO could either (i) coexist with H 13 CO + or (ii) arise predominantly from the lineof-sight cloud surface. In the former case, our observations show that the HCO abundance drops by one order of magnitude between the PDR and the dense core environment. However, it is possible that the abundance variation is even more pronounced, if the detected HCO emission arises from the line of sight cloud surface. We have estimated the depth of the cloud layer, assuming that HCO keeps the "PDR abundance" in this foreground layer : a cloud surface layer of A V ≃3 (illuminated by the mean FUV radiation field around the region) also reproduces the observed HCO lines towards the cold dense core (blue solid lines in Fig. 2). In this case, both the HCO abundance and the HCO/H 13 CO + abundance ratio in the dense core itself will be even lower than listed in table 4. We have tried to discriminate both scenarios by comparing the HCO 1 01 − 0 00 (J=3/2-1/2, F=2-1) and H 13 CO + J=1-0 line profiles towards this position. Both lines have been observed simultaneously with the IRAM-30m telescope. Because of their very similar frequencies (∼86.7 GHz), the beam profile and angular resolution is effectively the same. Fig. 3. Observations along a horizontal cut through "the HCO peak" (histograms). The H 13 CO + J=1-0 and HCO 1 01 − 0 00 lines were mapped with the PdBI at an angular resolution of 6.8 ′′ , whereas the H 13 CO + J=3-2 line was mapped with HERA-30m (and smoothed to a spatial resolution of 13.5 ′′ ). Radiative transfer models of an edge-on cloud with a line of sight extinction of A V =20, inclined 5 • relative to the line of sight for HCO (red curve), and H 13 CO + (blue curves) are shown. The single-T ex HCO model assumes a 12 ′′ width filament with a column density of 3.2 × 10 13 cm −2 , while N(HCO) is 4.6 × 10 12 cm −2 behind the filament. The H 13 CO + model assumes a constant density of n(H 2 )=5×10 4 cm −3 with T k =60 K and N(H 13 CO + )=5.8 × 10 11 cm −2 for δx <35 ′′ ; and T k =10 K and N(H 13 CO + )=7.6 × 10 11 cm −2 for δx >35 ′′ . Modeled line profiles have been convolved with an appropriate Gaussian beam corresponding to each PdBI synthesized beam or 30m main beam resolution. In this situation, any difference in the measured linewidths reflects real differences in the gas kinematics and turbulence of Fig. 4. Photochemical models of a unidimensional PDR. 
Upper panels show the density gradient (n H = n(H) +2n(H 2 ) in cm −3 ) used in the calculation. Middle panels show the predicted HCO and H 13 CO + abundances (relative to n H ). The H 13 CO + abundance inferred from observations in the cold core ("the DCO + peak", see the offsets in Table 2) is shown with blue lines. The HCO abundance inferred from observations in the PDR ("the HCO peak", see the offsets in Table 2) is shown with red lines. Lower panels show the HCO/H 13 CO + abundance ratio predicted by the models whereas the HCO/H 13 CO + column density ratio inferred from observations is shown as blue arrows and red lines (for the cold core and PDR respectively). Every panel compares two different models: Leftside models show a standard chemistry (dashed curves) versus the same network upgraded with the addition of the H 2 CO + photon → HCO + H photodissociation (solid curves). Right-side models show the previous upgraded standard model (solid curves) versus a chemistry that adds the O + CH 2 → HCO + H reaction with a rate of 5.01×10 −11 cm 3 s −1 (dotted curves). The inclusion of the O + CH 2 reaction has almost no effect on H 13 CO + for the physical conditions prevailing in the Horsehead, but triggers an increases of the HCO abundance in the PDR by two orders of magnitude. the regions where the line profiles are formed. Gaussian fits of the HCO and H 13 CO + lines towards "the DCO + peak" provides line widths of ∆v(HCO) = 0.81±0.06 km s −1 and ∆v(H 13 CO + ) = 0.60±0.01 km s −1 . Therefore, even if the H 13 CO + J=1-0 line are slightly broadened by opacity and do not represent the true line of sight velocity dispersion, HCO lines are broader at the 3σ level of confidence. This remarkable difference supports scenario (ii) where the H 13 CO + line emission towards the "the DCO + peak" arises from the quiescent, cold and dense core, whereas HCO, in the same line of sight, arises predominantly from the warmer and more turbulent outer cloud layers. We note that the presence of a foreground layer of more diffuse material (A V ∼2 mag) was already introduced by Goicoechea et al. (2006), to fit the CS J=2-1 scattered line emission. The analysis of CO J=4-3 and CI 3 P 1 − 3 P 0 maps led also (author?) (Philipp et al. 2006) to propose the presence of a diffuse envelope, with A V ∼2 mag, and which contributes to roughly the about half the mass of the dense filament traced by C 18 O and the dust continuum emission. The hypothesis of a surface layer of HCO is therefore consistent with previous modeling of molecular emission of the horsehead. We conclude 1) that HCO and HCO + have similar abundances in the PDR, and 2) that the HCO abundance drops by at least one order of magnitude between the dense and warm PDR region and the cold and shielded DCO + core. Gas-phase formation: PDR models In order to understand the HCO and H 13 CO + abundances and HCO/H 13 CO + column density ratio inferred from observations, we have modeled the steady state gas phase chemistry in the Horsehead edge. The density distribution in the PDR is well represented by a density gradient n H (δx) ∝ δx 4 , where δx is the distance from the edge towards the cloud interior and n H = n(H) + 2n(H 2 ) (see the top panels of Fig. 4). The density reaches a constant n H value of 2×10 5 cm −3 in an equivalent length of ∼10 ′′ Goicoechea et al. 2006). The cloud edge is illuminated by a FUV field 60 times the mean interstellar radiation field (G 0 = 60 in Draine units). 
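As a side note, the density law just described can be written down in a few lines; the sketch below is ours (not the photochemical model used in the following analysis), and the way the power law is joined to the constant plateau, as well as the arcsecond-to-parsec conversion at the adopted distance of ~400 pc, are illustrative assumptions.

```python
import numpy as np

N_MAX = 2.0e5        # plateau density n_H in cm^-3 (value quoted in the text)
DX_PLATEAU = 10.0    # angular scale in arcsec over which the plateau is reached
DISTANCE_PC = 400.0  # adopted distance to the Horsehead in pc

def n_h(dx_arcsec):
    """Density profile n_H(dx) proportional to dx^4, capped at the plateau.
    dx is the distance from the cloud edge towards the interior, in arcsec."""
    dx = np.asarray(dx_arcsec, dtype=float)
    return np.minimum(N_MAX * (dx / DX_PLATEAU) ** 4, N_MAX)

def arcsec_to_pc(dx_arcsec):
    """Convert an angular offset to a physical length at the adopted distance
    (1 arcsec at 1 pc corresponds to 1 AU, and 1 pc = 206265 AU)."""
    return dx_arcsec * DISTANCE_PC / 206265.0

print(n_h([2.0, 5.0, 10.0, 20.0]))   # densities in cm^-3
print(arcsec_to_pc(10.0))            # ~0.02 pc for the ~10 arcsec ramp
```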
We used the Meudon PDR code 2 , a photochemical model of a unidimensional PDR (see Le Bourlot et al. 1993;Le Petit et al. 2006;Goicoechea & Le Bourlot 2007, for a detailed description). Our standard chemical network is based on a modified version of Ohio State University (osu) gas-phase network, updated for photochemical studies (see Goicoechea et al. 2006). It also includes 13 C fractionation reactions (Graedel et al. 1982) and specific computation of the 13 CO photodissociation rate as a function of depth. The ionization rate due to cosmic rays in the models is ζ=5×10 −17 s −1 . Following our previous works, we chose the following elemental gas phase abundances: He/H=0.1, O/H=3×10 −4 , C/H=1.4×10 −4 , N/H=8×10 −5 , S/H=3.5×10 −6 , 13 C/H=2.3×10 −6 , Si/H=1.7×10 −8 and Fe/H=1.0×10 −9 . In Fig. 4, we investigate the main gas-phase formation routes for HCO in a series of models "testing" different pathways leading to the formation of HCO. HCO and H 13 CO + predictions are shown in Figure 4 (middle panels). As a first result, note that in all models the HCO abundance peaks near the cloud surface at A V ≃1.5 (δx ≃14 ′′ ) where the ionization fraction is high (e − /H∼5×10 −5 ). Due to the low abundance of metals in the model (as represented by the low abundance of Fe), the ionization fraction in the shielded regions is low (e − /H 10 −8 ), and therefore the H 13 CO + predictions matches the observed values (Goicoechea et al. 2009). Besides, a low metalicity reduces the efficiency of charge exchange reactions of HCO + with metals, e.g., which are the main gas-phase formation route of HCO in the FUV-shielded gas in our models. Hence, the HCO abundance remains low inside the core. Nevertheless, despite that such models do reproduce the observed HCO distribution, which clearly peaks at the PDR position, the predicted absolute HCO abundances can vary by orders of magnitude depending of the dominant formation route. In our standard model (left-side models : dashed curves), the formation of HCO in the PDR is dominated by the dissociative recombination of H 2 CO + , while its destruction is dominated by photodissociation. Even if the predicted HCO/H 13 CO + abundance ratio satisfactorily reproduces the value inferred from observations, the predicted HCO abundance peak is ∼3 orders of magnitude lower than observed. In order to increase the gasphase formation of the HCO in the PDR we have added a new channel in the photodissociation of formaldehyde, the production HCO, in addition to the normal channel producing CO : This channel is generally not included in standard chemical networks but very likely exists (Troe 2007;Yin et al. 2007). We included this process with an unattenuated photodissociation rate of κ diss (H 2 CO)=10 −9 s −1 and a depth dependence given by exp(−1.74 A V ). This is the same rate as the one given by van Dishoeck (1988) for the photodissociation of H 2 CO producing CO, which is explicitly calculated for the Draine (1978) radiation field. Model results are shown in Fig. 4 (left-side models: solid curves). The inclusion of Reaction 2, which becomes the dominant HCO formation route, increases the HCO abundance in the PDR by a factor ∼5. But the HCO production rate is still too low to reproduce the abundance determined from observations. Another plausible possibility to increase the HCO abundance in the PDR by pure gas-phase processes is to include additional reactions of atomic oxygen with carbon radicals that reach high abundances only in the PDR. 
Among the investigated reactions, the most critical one, is known to proceed with a relatively fast rate at high temperatures (5.01×10 −11 cm 3 s −1 at T k =1200-1800 K; Tsuboi & Hashimoto 1981). This is the rate recommended by NIST (Mallard et al. 1994) and UMIST2006 (Woodall et al. 2007) and that we adopt for our lower temperature domain (∼10-200 K). Model predictions are shown in Fig. 4 (right-side models: dotted curves). Whereas the predicted HCO abundance in the shielded gas remains almost the same, the HCO abundance is dramatically increased in the PDR (by a factor of ∼125) and the O + CH 2 reaction becomes the HCO dominant production reaction. Therefore, such a pure gas-phase model adding reactions 2 and 3 not only reproduces the H 13 CO + abundance in shielded core, but also reproduces the observed HCO absolute abundances in the PDR. In this picture, the enhanced HCO abundance that we observe in the Horsehead PDR edge would be fully determined by the gas-phase chemical path : The validity of the rate of Reaction 3 used in our PDR model remains, of course, to be confirmed theoretically or experimentally at the typical ISM temperatures (10 to 200 K). Other routes for HCO formation: Grain photodesorption If Reaction 3 is not included in the chemical network, the predicted HCO abundance is ∼2 orders of magnitude below the observed value towards the PDR. As a consequence, the presence of HCO in the gas-phase should be linked to grain mantles formation routes, and subsequent desorption processes (not taken into account in our modeling). In particular, Schilke et al. (2001) proposed that HCO could result from H 2 CO photodissociation, if large quantities of formaldehyde are formed on grain mantles and then released in the gas phase. Even with this assumption, their model could not reproduce the observed HCO abundance in highly illuminated PDRs such as the Orion Bar. The weaker FUV-radiation field in the Horsehead, but large density, prevent dust grains to acquire high temperatures over large spatial scales. In fact, both gas and grains cool down below ∼30 K in ≃10 ′′ -20 ′′ (or A V ≃1-2) as the FUV-radiation field is attenuated. Therefore, thermal desorption of dust ice-mantles (presumably formed before σ-Orionis ignited and started to illuminate the nebula) should play a negligible role. Hence a non-thermal desorption mechanism should be considered to produce the high abundance of HCO observed in the gas phase. This mechanism could either produce HCO directly or a precursor molecule such as formaldehyde. Since high HCO abundances are only observed in the PDR, FUV induced ice-mantle photo-desorption (with rates that roughly scales with the FUV-radiation field strength) seems the best candidate (e.g., Willacy & Williams 1993;Bergin et al. 1995). Laboratory experiments have shown that HCO radicals are produced in irradiated, methanol containing, ice mantles (Bernstein et al. 1995;Moore et al. 2001;Bennett & Kaiser 2007). The formyl radical could be formed through the hydrogenation of CO in the solid phase. It is an important intermediate radical in the synthesis of more complex organic molecules such as methyl formate or glycolaldehyde (Bennett & Kaiser 2007). However, the efficiency of the production of radicals in FUV irradiated ices remains uncertain, and very likely depends on the ice-mantle composition. The formation of species like formaldehyde and methanol in CO-ice exposed to H-atom bombardment has been reported by different groups (Hiraoka et al. 1994;Watanabe et al. 
2002;Linnartz et al. 2007), further confirming the importance of HCO as an intermediate product in the synthesis of organic molecules in ices. Indeed, hydrogenation reactions of CO-ice, which form HCO, H 2 CO, CH 3 O and CH 3 OH in grain mantles (e.g., Tielens & Whittet 1997;Charnley et al. 1997), are one important path which warrants further studies. To compare with our observations, we further need to understand how the radicals are released in the gas phase, either directly during the photo-processing, or following FUV induced photo-desorption. Recent laboratory measurements have started to shed light on the efficiency of photo-desorption, which depends on the ice composition and molecule to be desorbed. For species such as CO, the rate of photo-desorbed molecules per FUV photon is much larger than previously thought (e.g., Oberg et al. 2007). Similar experiments are required to constrain the formation rate of the various species that can form in interstellar ices and to determine their photo-desorption rates. We can use the measured gas phase abundance of HCO to constrain the efficiency of photo-desorption. We assume that the PDR is at steady state, and that the main HCO formation mechanism is non thermal photo-desorption from grain mantles (with a F HCO rate), while the main destruction mechanism is gas-phase photodissociation (with a D HCO rate), therefore : (6) where χ(HCO) is the gas phase abundance of HCO relative to H 2 , χ(HCO ice ) is the solid phase abundance relative to water ice, and n(H 2 O ice )/n(H 2 ) is the fraction of water in the solid phase relative to the total gas density. κ diss (HCO) and κ pd (HCO) are the HCO photodissociation and photo-desorption rates respectively. By equaling the formation and destruction rates, we get : or κ pd (HCO) s −1 ≈ 10 −12 κ diss (HCO) 10 −9 χ(HCO)/10 −9 χ(HCO ice )/10 −2 where we have used typical figures for the HCO abundance in the gas phase (∼10 −9 , see above) and solid phase (∼10 −2 see e.g. Bennet & Kaiser 2007) and for the amount of oxygen present as water ice in grain mantles. Assuming standard ISM grains with a radius of 0.1 µm the required photodesorption efficiency (or yield) Y pd (HCO): (see e.g., d 'Hendecourt et al. 1985;Bergin et al. 1995) converts to Y pd (HCO) ≈ 10 −4 molecules per photon (for the FUV radiation field in the Horsehead and A V ≃1.5, where HCO peaks). Therefore, the production of HCO in the gas phase from photodesorption of formyl radicals could be a valid alternative to gas phase production, if the photo-desorption efficiency is high and HCO abundant in the ice mantles. This mechanism also requires further laboratory and theoretical studies. Because the formyl radical is closely related to formaldehyde and methanol and the three species are likely to coexist in the ice mantles, a combined analysis of the H 2 CO, CH 3 OH and HCO line emissions towards the Horsehead nebula (PDR and cores) is needed to provide more information on the relative efficiencies of gas-phase and solid-phase routes in the formation of complex organic molecules in environments dominated by FUVradiation. This will be the subject of a future paper. Summary and conclusions We have presented interferometric and single-dish data showing the spatial distribution of the formyl radical rotational lines in the Horsehead PDR and associated dense core. The HCO emission delineates the illuminated edge of the nebula and coincides with the PAH and hydrocarbon emission. 
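Before the summary, the steady-state estimate above can be made explicit. The sketch below simply re-evaluates the balance between photodesorption supply and photodissociation destruction of HCO using the round numbers quoted in the text; the water-ice abundance relative to H2 (~1e-4) is our assumed typical value, since it is not written out explicitly here.

```python
# Steady-state balance for gas-phase HCO in the PDR:
#   k_pd * x_hco_ice * f_h2o_ice = k_diss * x_hco_gas
# where k_pd is the photodesorption rate, k_diss the photodissociation rate,
# x_hco_ice the HCO abundance in the ice relative to water ice, and
# f_h2o_ice the water-ice abundance relative to H2.

k_diss = 1e-9      # HCO photodissociation rate in s^-1 (text value)
x_hco_gas = 1e-9   # gas-phase HCO abundance relative to H2 (text value)
x_hco_ice = 1e-2   # solid-phase HCO abundance relative to water ice (text value)
f_h2o_ice = 1e-4   # water-ice abundance relative to H2 (assumed typical value)

k_pd = k_diss * x_hco_gas / (x_hco_ice * f_h2o_ice)
print(f"required HCO photodesorption rate: {k_pd:.1e} s^-1")  # ~1e-12 s^-1
```

Converting this rate into the quoted yield of ~1e-4 molecules per FUV photon additionally requires the grain cross-section per hydrogen nucleus and the local FUV photon flux at A V ≃ 1.5, which are not repeated in this sketch.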
HCO and HCO + reach similar abundances (≃1-2 ×10 −9 ) in these PDR regions where the chemistry is dominated by the presence of FUV photons. For the physical conditions prevailing in the Horsehead edge, pure gas-phase chemistry is able to reproduce the observed HCO abundances (high in the PDR, low in the shielded core) if the O + CH 2 → HCO + H reaction is included in the models. This reaction connects the high abundance of HCO, through its formation from carbon radicals, with the availability of C + in the PDR. The different linewidths of HCO and H 13 CO + in the line of sight towards the "DCO + peak" suggest that the H 13 CO + line emission arises from the quiescent, cold and dense gas completely shielded from the FUV radiation, whereas HCO predominantly arises from the outer surface of the cloud (its illuminated skin). As a result we propose the HCO/H 13 CO + abundance ratio, and the HCO abundance itself (if 10 −10 ), as sensitive diagnostics of the presence of FUV radiation fields. In particular, regions where the HCO/H 13 CO + abundance ratio (or intensity ratio if lines are optically thin) is greater than ≃1 should reflect ongoing FUV-photochemistry. Given the rich HCO spectrum and the possibility to map its bright millimeter line emission with interferometers, we propose HCO-H 2 as a very interesting molecular system for calculations of the ab initio inelastic collisional rates. Fig. 5. Medium angular resolution maps of the integrated intensity of the 4 hyperfine components of the fundamental transition of HCO. These lines have been observed simultaneously at IRAM-30m. Maps have been rotated by 14 • counter-clockwise around the projection center, located at (δx, δy) = (20 ′′ , 0 ′′ ), to bring the illuminated star direction in the horizontal direction and the horizontal zero has been set at the PDR edge. The emission of all lines is integrated between 9.6 and 11.4 km s −1 . Displayed integrated intensities are expressed in the main beam temperature scale. Contour levels are displayed on the grey scale lookup tables. The red vertical line shows the PDR edge and the green crosses show the positions (DCO + and HCO peaks) where deep integrations have been performed at IRAM-30m (see Fig. 2).
2008-11-10T13:51:26.000Z
2008-11-10T00:00:00.000
{ "year": 2009, "sha1": "a86d802fdffe7beefed873618b0b08edb0e1ba12", "oa_license": null, "oa_url": "https://www.aanda.org/articles/aa/pdf/2009/06/aa10933-08.pdf", "oa_status": "BRONZE", "pdf_src": "Arxiv", "pdf_hash": "5c26c003dc713ac6bc414fcc863f45d9d3764fe3", "s2fieldsofstudy": [ "Physics", "Environmental Science" ], "extfieldsofstudy": [ "Physics" ] }
225490643
pes2o/s2orc
v3-fos-license
English Letter Recognition Based on TensorFlow Deep Learning Image recognition has always been a hot research direction. With the continuous progress of technological theory and application, deep learning plays a very significant role in the field of image recognition. Today's international image-recognition competitions and enterprise applications are mainly based on deep learning technology; compared with traditional technologies, deep learning is significantly more effective in feature extraction and algorithm modeling. This paper is based on the TensorFlow framework and uses deep transfer learning with fine-tuning to recognize handwritten English characters. In the course of the experiment, a data enhancement method is used to pre-process the collected data, which increases the amount of training data and improves the generalization ability of the model. At the same time, the parameters and the optimizer are continuously tuned to accelerate convergence and finally reach the converged loss value. Experimental results show that the applied deep learning algorithms achieve good results in feature extraction and recognition accuracy. Introduction With the continuous development of computer and artificial intelligence technology, image recognition plays an important role in our working lives. One of the hottest topics, English letter recognition, is an important branch of handwritten character recognition. There are many methods for handwritten character recognition. For example, Que WeiTao [1] proposed a method for handwritten letter recognition based on a BP neural network; it verified that character classification and recognition can be handled well by neural networks, but the network is not deep enough and its feature extraction is not sufficiently rich. Ren Bo [2] and others proposed a character recognition method that improves the structure of deep learning networks. The method adds a Softmax classifier during the feature extraction phase of the deep network model to better achieve classification and recognition, but its shortcoming is that the character data themselves vary widely and the recognition is not robust enough. This paper is based on the TensorFlow framework and uses transfer fine-tuning of a deep neural network to achieve handwritten English letter recognition. With deep convolutional neural networks and transfer fine-tuning, feature extraction becomes faster and more accurate, achieving good recognition and classification. Tensorflow introduction and analysis TensorFlow is the second-generation artificial intelligence learning system released by Google. It supports many programming languages, operating system environments and hardware architectures. It has a stand-alone mode, a distributed mode and strong portability. The working process of TensorFlow is roughly divided into constructing the computation graph and performing all the calculations in the computation graph. Among them, the computation graph is the most basic concept in TensorFlow, and all calculations in TensorFlow are transformed into nodes on the computation graph [3]. Figure 1 is a simple data flow graph.
The input matrix x is a 1×2 matrix, w1 and w2 are 2×3 and 3×1 initialization weight matrices respectively, the nodes MatMul, MatMul_1 and add represent the corresponding calculations, the connections between nodes are edges, the data transmitted along the edges are tensors, y1 represents the node value after the MatMul calculation, and b1 and b2 are the bias matrices. After all parameters are initialized, the session is executed to evaluate the computation graph and produce the output y2, forming a fully connected neural network data flow graph. The calculation process is: y1 = x · w1 + b1 and y2 = y1 · w2 + b2. TensorFlow Slim is an image classification toolkit released by Google. This paper uses the Inception V3 network model and applies transfer learning, retaining the weight parameters trained by Inception V3 on the ImageNet dataset as the training initialization values. The model can therefore be applied to the categories of the collected data set by modifying the corresponding classification layer and training it by transfer fine-tuning to achieve the desired classification effect. Inception V3 convolutional network model The Inception V3 network model is composed of 5 convolutional layers, 3 pooling layers and 11 Inception modules. The network model structure is shown in Figure 2. It is mainly composed of convolutional layers, pooling layers, Concat aggregation layers, Dropout operation layers, a fully connected layer and a Softmax layer. The Inception module is composed of convolution kernels of different sizes arranged in parallel, which greatly reduces the amount of calculation. The role of the convolutional layer is to extract features from the input data. The key role of the pooling layer is to perform feature compression and dimensionality reduction, reducing the amount of calculation and the number of redundant parameters. The role of the Concat aggregation layer is to stitch data together. The role of the Dropout layer is to randomly remove some neuron nodes according to a certain probability value, which helps prevent overfitting and enhances model generalization. The role of the fully connected layer is to connect all the extracted features and pass them to the Softmax layer for classification. There are three commonly used activation functions: Sigmoid(x), Relu(x) and Tanh(x). This model uses Relu(x) as the activation function. Compared to other activation functions, the Relu(x) activation function makes some neurons output zero, which reduces the dependency between parameters and plays an important role in preventing overfitting. Pooling Layer. The pooling layer usually follows the convolutional layer. Its main role is to compress the size of the feature maps output by the convolutional layer, which reduces parameters and prevents overfitting. It statistically summarizes the values at a position in the plane and its adjacent positions and uses the summarized result as the value of this position [4]. Commonly used operations are maximum pooling and average pooling. This paper uses both pooling operations; their operation diagrams are shown in Figure 3 and Figure 4. The Inception module uses multiple convolution kernels for its convolution operations, making the extracted features richer. The Inception V3 model optimizes the Inception module by splitting the larger two-dimensional convolution kernel into two smaller one-dimensional convolution kernels. This reduces the number of parameters and the risk of overfitting [5]. The Inception module has three structures, as shown in Figure 5. It can be seen from these three structure diagrams that each structure has 3 convolution branches and 1 pooling path.
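To make the parallel-branch idea concrete, the following is a minimal sketch of an Inception-style module; it is written against the tf.keras API rather than the TF-Slim toolkit used in the paper, and the branch widths and kernel factorizations are illustrative assumptions, not the exact Inception V3 configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers

def inception_like_module(x, filters=32):
    """Illustrative Inception-style block: three convolution branches plus
    one pooling branch, concatenated along the channel axis."""
    # Branch 1: a single 1x1 convolution.
    b1 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    # Branch 2: 1x1 reduction, then a 3x3 convolution factorized into
    # one-dimensional 1x3 and 3x1 kernels.
    b2 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b2 = layers.Conv2D(filters, (1, 3), padding="same", activation="relu")(b2)
    b2 = layers.Conv2D(filters, (3, 1), padding="same", activation="relu")(b2)
    # Branch 3: 1x1 reduction followed by a deeper factorized stack.
    b3 = layers.Conv2D(filters, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(filters, (1, 3), padding="same", activation="relu")(b3)
    b3 = layers.Conv2D(filters, (3, 1), padding="same", activation="relu")(b3)
    # Pooling branch: average pooling followed by a 1x1 projection.
    b4 = layers.AveragePooling2D(3, strides=1, padding="same")(x)
    b4 = layers.Conv2D(filters, 1, padding="same", activation="relu")(b4)
    # Concat aggregation layer: stitch the branch outputs channel-wise.
    return layers.Concatenate(axis=-1)([b1, b2, b3, b4])

# Example: apply the module to a batch of 299x299 RGB images.
inputs = tf.keras.Input(shape=(299, 299, 3))
outputs = inception_like_module(inputs)
model = tf.keras.Model(inputs, outputs)
```

The 1x1 convolutions at the head of each branch provide the dimensionality reduction mentioned above, and the Concatenate layer plays the role of the Concat aggregation layer.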
The last two structure diagrams in Figure 5 decompose the two-dimensional convolution kernel into several one-dimensional convolution kernels, and the resulting branches are connected in parallel. In this way, the width of the network is increased, improving the performance of the entire model and the richness of feature extraction. There is a 1×1 convolution kernel in each structure to play the role of dimensionality reduction. Experimental process and analysis This paper collected image data samples for the 26 English letters. Because the collected data are unorganized and vary in size, the samples are first normalized; in addition, the amount of data is limited, so the data need to be pre-processed. A deep convolutional neural network model requires a large amount of training data to raise the accuracy of the model and its generalization ability, and the number of data samples collected for this article is relatively small, so data enhancement is used to expand the amount of data. This paper uses vertical mirror flipping, horizontal mirror flipping, adding Gaussian noise, adjusting image brightness and changing pixel contrast to perform data enhancement. Figure 6 shows a comparison between some original images and their data-enhanced versions. Experimental platform The experimental process was performed on Google Drive. In order to reduce the experimental time, the experimental platform used Google Colab's GPU hardware accelerator. The primary experimental configuration is Python 3.6.7 and TensorFlow 1.13.1, the operating system is Ubuntu 18.04.1, and the graphics card is an NVIDIA Tesla T4. The main training parameters are set as follows: the batch size for training to 64, the learning rate learning_rate to 0.01, the maximum number of iteration steps max_number_of_steps to 10000, the weight_decay parameter value to 0.00001, the network optimizer to momentum with a momentum value of 0.9, and so on. Training model During the training process, TensorBoard is used to visualize the complex operations involved in training large-scale neural networks and, at the same time, to monitor the changes in the training indicators. Formula 3 gives the cross entropy of a predicted distribution q with respect to the true distribution p: H(p, q) = −Σ_x p(x) log q(x). After the Softmax regression treatment, the output is given by formula 4: softmax(z_i) = exp(z_i) / Σ_j exp(z_j), and formula 5 is the expression of the mean square error loss function: MSE = (1/n) Σ_i (y_i − y_i')². The loss function used in this paper is the cross entropy loss function, which is the most commonly used loss in classification problems and performs well. Validation Model After the 10,000-step iterative training is completed, the loss value converges to about 1.203. As shown in the line chart of the loss value against the number of iteration steps in Figure 7, convergence starts at a little over 5000 steps. After the loss value has converged, the training of the model is complete and the model needs to be verified. Figure 7. Change Chart of Loss Value While verifying the training model, the Top-N accuracy is also checked, showing the accuracy over all classification categories. Top-N counts a prediction as correct when the true classification label of the predicted picture appears among the first N entries of the output matrix [6]. The classification accuracy of the model in this paper is 87.4%, the Top-3 accuracy is 95.1% and the Top-8 accuracy is 98.2%; that is, when the correct category is allowed to fall anywhere within the top 3 or the top 8 output probabilities, the accuracy rate increases from 95.1% to 98.2%. Recognition results and analysis After verifying the classification accuracy of the model, the trained model is saved and exported as a PB file.
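As an illustration of this export-and-verify step, the sketch below loads a frozen PB graph and classifies one letter image; it is not the authors' code, and the file name, tensor names and preprocessing constants are hypothetical placeholders that depend on how the graph was actually exported.

```python
import numpy as np
import tensorflow as tf  # written against the 1.x API used in the paper
from PIL import Image

MODEL_PB = "letters_inception_v3.pb"                      # hypothetical file name
INPUT_TENSOR = "input:0"                                  # hypothetical tensor name
OUTPUT_TENSOR = "InceptionV3/Predictions/Reshape_1:0"     # hypothetical tensor name
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)]  # the 26 letter classes

def load_graph(pb_path):
    """Load a frozen GraphDef that was exported as a PB file."""
    graph = tf.Graph()
    with graph.as_default():
        graph_def = tf.GraphDef()
        with tf.gfile.GFile(pb_path, "rb") as f:
            graph_def.ParseFromString(f.read())
        tf.import_graph_def(graph_def, name="")
    return graph

def classify(image_path, top_n=3):
    """Return the top-N (letter, probability) pairs for one sketchpad image."""
    # Resize to the 299x299 Inception V3 input and scale pixels to [-1, 1].
    img = Image.open(image_path).convert("RGB").resize((299, 299))
    batch = (np.asarray(img, dtype=np.float32) / 127.5 - 1.0)[None, ...]
    with tf.Session(graph=load_graph(MODEL_PB)) as sess:
        probs = sess.run(OUTPUT_TENSOR, feed_dict={INPUT_TENSOR: batch})[0]
    best = np.argsort(probs)[::-1][:top_n]
    return [(LABELS[i], float(probs[i])) for i in best]

print(classify("handwritten_letter.png"))
```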
Handwritten English letters drawn on a computer sketchpad are used to verify the recognition of a single handwritten letter. The handwritten letter recognition results are shown in Figure 8. Judging from these results, the recognition of handwritten English letters works well. In the recognition results, each candidate category receives an output score, which is converted into a probability value by the Softmax function, and visually similar categories also receive scores. According to the handwritten letter recognition results in Figure 8, some scores are as high as 10.39 while others are only 3.16, which indicates that categories with low scores are easily affected by external factors, such as image noise and the writing habits of individual writers, and these factors reduce the accuracy of recognition. Taking all these factors into account, the accuracy across all categories and the robustness of recognition in complex environments still need to be improved. Conclusion This paper uses the fine-tuning method of deep transfer learning and the distinctive structure of the Inception V3 model to classify and identify the common English letters. Compared with traditional training methods, deep transfer fine-tuning is faster in training and feature extraction, and the extracted features are relatively richer, so the model can learn quickly and accurately. In this way, the approach has good research prospects for the increasingly complex image classification and recognition problems of the future, and it also plays a useful role in improving the robustness of classification and recognition under complex environmental conditions. The next stage of research will focus on improving the robustness of recognition in complex environments.
2020-09-10T10:09:36.038Z
2020-08-01T00:00:00.000
{ "year": 2020, "sha1": "f66048eded8cdc2790f240bbd38756dc39cb6cab", "oa_license": null, "oa_url": "https://doi.org/10.1088/1742-6596/1627/1/012012", "oa_status": "GOLD", "pdf_src": "IOP", "pdf_hash": "74c2750f40e47ebe13b8f12438b4499da607e504", "s2fieldsofstudy": [ "Computer Science" ], "extfieldsofstudy": [ "Computer Science" ] }
234914683
pes2o/s2orc
v3-fos-license
Prevalence and factors associated with anxiety among university students of health sciences in Brazil: findings and implications Objective: The aim was to evaluate the prevalence and factors associated with anxiety disorders among university students of health sciences at Federal University of Ouro Preto, Brazil. Methods: A cross-sectional study between March to June 2019. Data were collected through a self-administered questionnaire including sociodemographic, academic, family and behavioral issues. The Beck Anxiety Inventory was used to assess anxiety. Estimates were obtained through the prevalence ratio and Poisson multivariate analysis. Results: Four hundred and ninety-three students participated with a mean age of 23.1 and predominantly women (79.9%). All students had some degree of anxiety, with the frequency of the severe, moderate and mild forms being 28.0%, 29.8% and 27.0%, respectively. The factors associated with anxiety included having suffered psychological and/or physical violence in childhood, having suicidal thoughts, having a deceased parent, living with parents, being dissatisfied with the course and being in the exam period. Conclusions: The prevalence of anxiety was high in our study and family problems prior to entering university seem to significantly influence the degree of anxiety, which may compromise the student’s academic and social performance. INTRODUCTION Anxiety disorders are characterized by intense changes in thinking, mood and behavior, and are associated with persistent excessive worry and fear, which can result in debilitating conditions 1,2 .The physical changes include tachycardia, muscle tension, breathing difficulties, stomach pains and sweating 3 .In addition to the relationship between anxiety and the development of physical problems, this disorder is often associated with depression, alcohol and other forms of substance abuse including psychotropic medicines 3,4 .The development of anxiety disorders is also linked to certain risk factors including being female, low socioeconomic status, environmental risk factors and maltreatment during childhood [5][6][7] . Treating patients with mental health disorders is an increasing priority among countries worldwide with mental health accounting for between 10.0% and 13.0% of the global disease burden, and with mental health seen as the principal reason for years lived with disability among populations [8][9][10] .Of particular concern is that the global burden of mental health disorders has increased in recent years especially among lower-and middle-income countries (LMICs) due to a number of issues including environmental, demographic and socio-political issues including unrest 11,12 .In the American continent, anxiety disorders affect more than 57 million people, with Brazil having the highest prevalence of anxiety cases worldwide at 9.3% of the population 13 . 
In recent years, a high number of university students including medical students have presented with mental health disorders, mainly anxiety and depression, negatively impacting on their quality of life [14][15][16][17][18][19] .The peak incidence of mental health disorders is typically seen post-secondary education 14,20,21 , with prevalence rates even more pronounced in undergraduate health students 15,22 .Overall, it is estimated that between 12% to 46% of all university students are affected by mental health disorders in any one year 20,21,23 .There is also a high rate of suicidal ideation and suicides among university students 24,25 .Many situations may trigger the anxiety process in undergraduate students including stress, reduced social support including reduced family support, high workload and volume of study, limited sleep, financial concerns, as well as maltreatment by colleagues and teachers, which can all compromise academic achievement 15,20,26 .Similarly among undergraduate health students, increased competitiveness and workloads, students' interpersonal contact with their future patients, limited leisure activities, lack of emotional support and a stressful work environment also contribute to high rates of mental health disorders in this population 18,27 .Other situations outside of the academic environment during the university period can also enhance the development of mental health disorders.These include dissatisfaction with body image as well as feelings of personal and professional inferiority 28,29 , financial problems 23 and the effects of the physical environment and climate 30,31 . Brazil currently has 296 public higher education institutions, 36.8% of which are federal.The majority of people enrolled in higher education are concentrated in the universities 32 .In addition, published studies have demonstrated a high level of mental health disorders among medical students in Brazil, impacting on their quality of life 18,19,33 .There are also initiatives to try and address the impact of mental health disorders in these and other students in Brazil 34,35 .However, we were unaware of a study in a university in Brazil assessing the extent of anxiety disorders and potential factors where students come from all regions in Brazil as well as abroad.In addition, a study that included students from all potential health science courses and not just those attending medical or dental schools, which are the most studied.We believe such findings will help develop future strategies in Brazil and elsewhere to improve the mental health of students, better equipping them for the future.Consequently, the aim of this study was to identify factors associated with symptoms of anxiety disorder among university students in the health field as a basis for suggesting potential ways of addressing them. 
Study design A cross-sectional study was conducted from March to June 2019 with students from health-related courses enrolled in The Federal University of Ouro Preto [UFOP].According to the UFOP classification, six programs belong to the health field: Food Science and Technology, Physical Education, Pharmacy, Medicine, Nutrition and Social Service courses.All students on these courses received an online questionnaire available on the Google Forms platform, with an average time of completion of 10 minutes.Participation was entirely voluntary and the sample was formed using the snowball technique.Before data collection, the study was disseminated on social networks and local media (Instagram and WhatsApp), in order to stimulate participation.Only students under 18 years of age and those who did not complete the entire Beck Anxiety Inventory (BAI) were excluded from the study. Sample calculation The sample was calculated using the entire population of students regularly enrolled in all UFOP health science courses.Enrolled numbers included: Food Science and Technology (n = 220), Physical Education (n = 322), Pharmacy (n = 441), Medicine (n = 464), Nutrition (n = 307) and Social Service (n = 367), totaling 2,191 students 36 .Considering a prevalence of 19.7% of anxiety among students 37 , an accuracy of 3% and a drawing effect of 1, the sample was estimated as 500 students. Data collection To characterize anxiety disorders, the Beck Anxiety Inventory was applied 38 .This is a self-report measure of anxiety, translated and validated into Portuguese 39 . The inventory consists of 21 questions that address symptoms and the intensity of anxiety in the week prior to the interview, namely: numbness or tingling, feeling hot, feeling unsteady, inability to relax, fear of the worst happening, dizziness or lightheadedness, heart pounding/racing, unsteady, terrified or afraid, nervous feeling of choking, hands trembling, fear of losing control, difficulty in breathing, fear of dying, scared , indigestion, faint/lightheaded, flushed face and hot/cold sweats.Each symptom was scored from zero to three points -zero when the response was negative, one when the response was mild, two when it was moderate and three when the response was severe. In addition to the BAI, sociodemographic characteristics (gender, age), health conditions (chronic diseases, psychological and psychiatric treatment), details ondwelling arrangements, academic characteristics (program, years of study, satisfaction with the graduate programme), family (deceased parents, relationship with parents), behavioral issues (relationships, leisure activities, physical activity, frequency of physical activity, legal and illicit drug use, sexual orientation) if the respondent had been victim of childhood violence and if suicidal thoughts were present when the questions were answered. 
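As a rough cross-check of the sample size stated above, the following minimal sketch applies the standard Cochran formula with a finite population correction; the choice of formula and rounding are our assumptions, since the exact calculation used by the authors is not given.

```python
import math


def sample_size(p, d, N, z=1.96, deff=1.0):
    """Cochran sample size with a design effect and finite population correction."""
    n0 = deff * (z ** 2) * p * (1 - p) / d ** 2  # infinite-population size
    return n0 / (1 + (n0 - 1) / N)               # finite population correction


if __name__ == "__main__":
    n = sample_size(p=0.197, d=0.03, N=2191)
    print(math.ceil(n))  # about 517, broadly in line with the ~500 students reported
```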
Data analysis Absolute and relative frequencies were used to describe the variables, using STATA statistical package version 14.0 (StataCorp).As the number of students differed between courses, an analysis of variance -ANOVA, followed by Bonferroni's test, was initially performed, comparing the degree of anxiety among students in the different courses.The total score determined the anxiety level, with 0-7 points indicating a minimum level, 8-15 mild, 16-25 moderate and 26-63 severe anxiety 38 .The minimum anxiety category was considered as an absence of anxiety and was compared to mild, moderate and severe anxiety in different multivariate models.The association between anxiety and explanatory variables was evaluated by a Poisson regression model.Univariate analysis was performed and variables that showed statistical association (p < 0.25) were selected to develop the multivariate model.The selected variables in the univariate models were subsequently included in the multiple model.The backward method was adopted to obtain the final model, where the variables with p-value lower than 0.05 remained.Variables with more than two categories were transformed into dummy variables.The Poisson regression model with robust variance estimations was used to assess the risk factors associated with anxiety disorder.Estimates were obtained through the prevalence ratio [PR] with 95% CI.Due to the large number of women who answered the questionnaire and because gender is a risk factor, the final model was adjusted by gender. Ethical statement The study was approved by the Committees of Research Ethics of the University Federal of Ouro Preto (No. 3,057,599).All participants gave written informed consent for the data collection and analysis. Regarding sexual orientation, the majority were heterosexual (n = 404; 83.3%) and had a single sexual partner (n = 244; 51.1%).Most students (n = 399; 82.8%) reported that their sexual orientation did not influence their emotional status and 429 (92.8%) reported having parental approval for their sexual orientation.With respect to physical activity, 260 (53.4%) of the participants practiced in some kind of activity with frequency varying from one to four times a week.Regarding alcohol consumption, 369 (75.8%) reported alcohol consumption, most of them (70.2%)occasionally; 80 (22.5%) had used illicit drugs, with 15 (19.0%) using these frequently. The univariate analysis according to the degrees of anxiety (mild, moderate and severe) is presented in Table 2. Prevalence of anxiety All students (n = 493) showed some degree of anxiety according to the Beck Anxiety Inventory with 138 participants having severe anxiety, 147 moderate, 133 mild and 75 a minimum degree.Consequently, the frequency of severe anxiety was 28.0% (95% CI: 24.2-32.1),moderate anxiety 29.8% (95% CI 25.9-34.0),mild anxiety 27.0% (95% CI 23.2-31.1)and minimal anxiety was 15.2% (95% CI 12.2-18.6).Due to the difference in the samples obtained for each course, we evaluated whether the level of anxiety varied between the courses. Overall, students of Medicine had a lower degree of anxiety compared to those enrolled into Food Science and Technology (p < 0.05) and Social Services (p < 0.05) programs.No significant differences were found in the other courses (Figure 1).Consequently, all participants were included into risk factors analyzing for anxiety. 
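Before turning to the risk-factor results, the modelling approach described in the Data analysis subsection (Poisson regression with robust variance, with prevalence ratios obtained as exponentiated coefficients) can be illustrated with a minimal sketch; the data file and variable names are hypothetical and chosen only for illustration.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per student, a 0/1 outcome for a given anxiety
# level (e.g. severe vs. minimal) and the candidate explanatory variables.
df = pd.read_csv("students.csv")

# Poisson regression with robust (sandwich) variance, adjusted for gender.
# With a binary outcome, exponentiated coefficients are prevalence ratios (PR).
fit = smf.glm(
    "severe_anxiety ~ childhood_violence + suicidal_thoughts + exam_period + gender",
    data=df,
    family=sm.families.Poisson(),
).fit(cov_type="HC1")

summary = pd.DataFrame({
    "PR": np.exp(fit.params),
    "CI_low": np.exp(fit.conf_int()[0]),
    "CI_high": np.exp(fit.conf_int()[1]),
})
print(summary.round(2))
```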
Risk factors associated with different degrees of anxiety
According to the multivariate analysis, the risk factors associated with the prevalence of anxiety in the undergraduate students were: (i) having suffered psychological and physical violence in childhood (PR 1.4, 95% CI 1.1-2.0 for mild anxiety and PR 1.7, 95% CI 1.4-2.1 for severe anxiety), (ii) having suicidal thoughts (PR 1.4, 95% CI 1.1-1.7 for moderate anxiety and PR 1.8, 95% CI 1.4-2.4 for severe anxiety), (iii) having a deceased father and mother (PR 2.1, 95% CI 1.1-4.1 for moderate anxiety), (iv) having a good relationship with their father (PR 1.2, 95% CI 1.1-1.5 for moderate anxiety), (v) being unhappy or dissatisfied with the course (PR 2.6, 95% CI 1.1-6.2 for moderate anxiety) and (vi) being in the exams week (PR 1.3, 95% CI 1.1-1.6 for mild anxiety and PR 1.5, 95% CI 1.2-1.8 for severe anxiety) (Table 3).

DISCUSSION
We believe this is the first study among universities in Brazil including students from all the regions in Brazil and wider as well as enrolled into multiple courses. The results in the present investigation showed a high prevalence of anxiety among university students of health sciences, similar to trends observed in other countries 14,16,22, emphasizing that not only medical students are affected. This is a concern since mental health disorders may compromise the academic and social performance of undergraduate students 27. Our findings suggest that the prevalence of anxiety was higher in university health students who experienced some kind of violence in childhood, physical and/or psychological. This is perhaps not surprising, since children who have experienced stress or situations that threaten their physical integrity develop feelings of fear which extend into adulthood, causing mental disorders 40. Meyers et al. (2019) confirmed the hypothesis that early exposure to sexual trauma may influence the risk of psychopathology, i.e. depression, anxiety, and suicidal ideation, through neurological developmental mechanisms 41. In addition, Fergusson et al. (2008) found that exposure to sexual abuse as children may be related to increased risks of mental health problems later in life, including anxiety disorders, with other authors also demonstrating that child mistreatment is associated with the development of psychiatric disorders, including internalized (anxiety, depression) and externalized (aggression) behaviors 42,43, corroborating our findings. Psychological consequences of physical or mental abuse during childhood can acutely affect mental health into adulthood, and are a major social problem which needs to be taken into account when dealing with mental health disorders among students 44,45.

The high prevalence (45.6%) of students with suicidal ideation in our study is a particular concern and was associated with students with moderate to severe anxiety. These results are in line with Eskin et al. (2016), who analyzed undergraduates from 12 countries and found that the prevalence of suicidal ideation ranged from 20.0% to 49.1% 24. We are aware that anxiety gradually increases the rate of suicidal thoughts in individuals suffering from this disorder, helping to explain the findings 46,47. Consequently, policies and initiatives need to be instigated in UFOP to address this high rate and its potential implications.
We found that the prevalence of anxiety was related to certain family issues such as having a deceased father and/or mother and the absence of good relationship with one's father in addition to kind of violence in childhood, physical and/or psychological.This is similar to other studies which have also shown a higher prevalence of anxiety in individuals who do not have a good relationship with family members 20,48 .Overall, the absence of adequate family support negatively influences the maintenance of mental and emotional health 49 .Contrasting with this, family support, including good relationship with friends, reduces the predisposition to the development of mental disorders 50 .To help with this in UFOP, students' houses are common and traditional.Currently, Ouro Preto has 59 such houses provided by the Federal University, with a total capacity of 794 residents, located around the Morro do Cruzeiro campus and in the historic center 36 .Whilst no association was observed between housing types and anxiety disorders in our study, Machado (2014) highlighted that living with people who feel the same longings, participate in the same accomplishments, share the same goals and difficulties, and provide life with renewed hope reduces anxiety disorders 51 . Regarding academic factors, it was found that undergraduates dissatisfied with the course were approximately 2.5 times more likely to develop moderate anxiety.These results corroborate other studies that have pointed out that satisfaction with the course decreases the possibility of developing stress and mental health disorders including anxiety and depression 52,53 .Another academic factor was anxiety during exams period, with students on test week presenting a higher prevalence of anxiety compared to those not in this period.This is a concern as strong anxiety reactions to tests can lead to a decline in performance and may compromise the student's professional education although there are potential approaches to reduce this 54 . Anxiety has been shown to be greater in women than men 55 .This can be justified, among other factors, by the very high hormonal rates, which trigger anxiety symptoms 37,55 .In the present study there was a greater participation of female students, so the multivariate model was adjusted for this variable. In addition, other studies have found that the lower the income, the greater the chance of developing mental disorders and that social inequalities are factors associated with worse mental health 56,57 .Whilst we did not address these issues in our study, we believe it is important to include them in future studies. 
Finally, in Brazil there is a National Student Assistance Program (PNAES) which aims to expand and improve the conditions of stay of young people in Federal public higher education facilities 58. The student assistance actions are geared to various aspects of academic life, including health care. However, the high prevalence of mental disorders in this and other studies in Brazil highlights the need to strengthen this program. Based on our findings, we believe it is extremely important to increase the effectiveness and coverage of the PNAES, making it accessible to all university students. More specifically for detecting anxiety symptoms, it is known that debates, lectures and other forms of information dissemination would help the academic community detect the signs and symptoms and demystify the taboo surrounding mental health disorders, breaking prejudice and changing the views of society. Special attention must be paid to this vulnerable population in order to prevent and adequately treat mental disorders among Brazilian students in federal universities, especially among female students, who are our future, and we will be following this up in future studies.

We are aware of a number of limitations with this study. First, because the questionnaire was applied using an online platform and not in person by a psychologist or psychiatrist, the results may not reflect the real degree of anxiety of the participants. The BAI also only assesses students' anxiety in the week prior to the interview, and the results may not reflect the student's actual situation. Despite this limitation, we believe the BAI is a good instrument for cross-sectional studies as it can be self-applied and it is a validated and widely used scale, designed with questions to differentiate anxiety from depression 38,59. Prevalence rates may be overestimated since, in general, people who have a greater interest in the subject usually participate more in studies, especially in those with self-administered questionnaires. However, this is a feature of all such studies using this methodology. Finally, since this was a cross-sectional study, it was not possible to establish the temporality of the associated factors. Despite these limitations, we believe the findings are robust, with this study pointing to a high prevalence of anxiety in students of health sciences at UFOP, which needs to be addressed going forward.

Figure 1. Score of anxiety according to the different health courses at UFOP. FST: Food Science and Technology; PE: Physical Education; NUT: Nutrition; PHA: Pharmacy; SW: Social Work; MED: Medicine.
Table 1. Characteristics of the undergraduates of health science courses at Federal University of Ouro Preto, Ouro Preto, MG, Brazil, 2019
Table 2. Univariate analysis according to the degrees of anxiety in the students of health sciences at Federal University of Ouro Preto, Ouro Preto, MG, Brazil, 2019
Table 3. Multivariate analysis of factors associated with anxiety in different grades, Ouro Preto, Minas Gerais, 2019
2021-05-22T00:02:44.633Z
2021-01-04T00:00:00.000
{ "year": 2021, "sha1": "d1033a3e29c4de69c077b3ea0581451261f03090", "oa_license": "CCBY", "oa_url": "https://www.scielo.br/j/jbpsiq/a/6NYtJ9h8ZWhgQ7wX7HmPNHs/?format=pdf&lang=en", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "ecd869e6ecf1222aa0c1e6d0205f18c8967a9fe7", "s2fieldsofstudy": [ "Psychology", "Medicine" ], "extfieldsofstudy": [ "Psychology" ] }
246391448
pes2o/s2orc
v3-fos-license
A political ecology of aviation and development: an analysis of relations of power and justice in the (de)construction of Nepal's Second International Airport In this article, we investigate socio-ecological conflicts surrounding the proposed Second International Airport project near Nijgadh, a town in the southern Terai region of Nepal. Praised by the Nepali government as a gamechanger for Nepal's economy, it has come under scrutiny by environmental activists after plans emerged for extensive clearing of the densely forested project site. While public and political debates have focused on the environmental impacts of the project, the area is also home to nearly 8,000 people, most of whom have no formal land rights and belong to Janajati groups, who face displacement. The apparent lack of attention to the project's consequences for local communities raises questions about the safeguarding of their interests. Drawing on justice theories and political ecology, we conducted a case study to investigate the residents' struggle for justice, recognition, and visibility amidst a strong dichotomy of mainstream developmentalist and conservationist discourses. During two months of fieldwork in Nepal, we gathered empirical evidence, including observations, interviews, and project documentation. Our findings suggest that the misrecognition of local communities, particularly in Tangiya Basti, began long before the airport project, and is intertwined with distributive and procedural injustices, reinforced by power asymmetries of various kinds. Overall, we argue that while the airport project is often framed as an environmental conflict, it is also a conflict over claims to social justice and livelihood security. Introduction Nijgadh International Airport is [a] project of vital importance for Nepal. This is our national pride project. This will be a game-changing project for Nepal's economic prosperity. Rabindra Adhikari, Late Minister of Culture, Tourism and Civil Aviation, Government of Nepal (Paudyal & Koirala, 2018) While calls to reduce air travel have entered mainstream discourses in many high-income countries (Jacobson et al., 2020;Timperley, 2019), landlocked Nepal still has only one international airport. Blaming Nepal's poor air connectivity as a major barrier to increased tourism and economic activity, demands for building an alternative to Tribhuvan International Airport (TIA) are increasing (Sah, 2019). After years of deferment, in 2015 the Government of Nepal (GoN) revived its 1995 plan to construct an international airport near Nijgadh, a town 60 km south of Kathmandu (D. P. N. Pradhan et al., 2019). The project has since been promoted as Nepal's guaranteed path to development and prosperity; both themes are deeply entrenched in Nepali politics and society. Especially the notion of bikas 2 , which commonly outlines the dream of catching up with the 'West', has prevailed for decades (Mulmi, 2018;Paudel, 2016;Pigg, 1992Pigg, , 1993. In line with hegemonic development discourses, promoting economic growth and infrastructure expansion (Nightingale & Ojha, 2013), airport proponents have highlighted the economic opportunities it offers: it is estimated to serve 27 Asian countries and generate 100,000 direct and indirect jobs (Sah, 2019). 
But similar to other large infrastructure projects (Robbins, 2011), the construction of Nepal's Second International Airport, hereafter referred to as SIA 3 , is not without socio-environmental consequences, also reflected in its listing as an infrastructure conflict in the Global Atlas of Environmental Justice (EJAtlas) (Bridger, 2019). The proposed construction site comprises a dense sal (Shorea robusta) forest, known to be an elephant migration corridor (Shah, 2019;Shahi, 2019). After the project's Environmental Impact Assessment (EIA) was published in 2018, many expressed shock at the prospect of cutting downover 2 million trees (A. Dhakal, 2018). Opposition formed among conservationists and environmental activists, especially in Kathmandu, and rallies and online campaigns were organized in the capital (K. D. Bhattarai, 2019b;Pro Public, 2019). While protests and debates have focused on the ecological consequences of SIA, the project area is also home to nearly 8,000 people, most without formal land rights, who are threatened with displacement (Dhungana, 2019a;Shah, 2019). To date, the project authorities have not released a compensation and resettlement plan for the nearly 1,500 landless households. This apparent lack of public and political attention to the impact of the airport on these communities raises questions about the safeguarding of their interests. Literature shows that it is often those already marginalized who are most affected by infrastructure development (Ascher & Krupp, 2010;Otsuki et al., 2016). Fernholz (2010, p. 225) states: "Overall, the aim of major infrastructure projects […] is to improve economic growth and well-being in a country or region. Yet, many studies show that the people living in the proximate areas frequently do not share in these benefits, and often suffer major economic, health, and cultural losses." This production of uneven landscapes in the name of development is often the result of multidimensional injustices experienced by the marginalized groups concerned (Ottinger, 2017). Exposing such injustices in the creation and alteration of socio-environments has become a growing concern of various disciplines (Schlosberg, 2013). While justice scholarship has long focused on distributive and participatory aspects, conceptualizations have since expanded to include the notion of recognition "as an inherent element of justice" and often even as the basis for the other two dimensions (Fraser, 2005;Islar, 2013, p. 41;Schlosberg, 2004Schlosberg, , 2007. Building on existing justice and political ecology research, such as Persson et al. (2017) and Islar (2012), in this article, we argue that the invisibility of local concerns in public debates suggests issues of misrecognition and that the conflict over SIA is as much a struggle for social justice and recognition, sustained and undermined by various manifestations of power, as it is for environmental protection. Based on empirical evidence derived from two months of fieldwork in Kathmandu and the Nijgadh area in Bara district, we examined the injustices faced by the communities affected by the airport project and how various kinds of power have manifested as identifiable injustices. This study was led by our overarching research question and three sub-questions: How and to what extent can the case of Nepal's Second International Airport be understood as a site of conflict over recognition and other related injustices? (a) What are the historical injustices regarding land use in the SIA project area? 
(b) What are current injustices regarding land use in the SIA project area? (c) What are the different dimensions of power embedded in the conflict and how do they reinforce injustices? We have structured this article as follows. After introducing the background of the airport project in Section 2, we discuss in Section 3 the theoretical framework based on Svarstad's work on conceptualizing power and its relevance for justice scholarship (Svarstad et al., 2018; Svarstad & Benjaminsen, 2020). By applying a framework that draws on both theories of justice and political ecology, we hope to foster the "potential for cross-fertilization between the two fields" called for by Svarstad and Benjaminsen (2020). The authors propose applying specific conceptualizations of power from political ecology to environmental justice, which we put into practice by using a three-dimensional power structure developed by Svarstad et al. (2018) to understand issues of misrecognition and other related injustices. The fourth section explains the methodological framework and fieldwork process. Sections 5 and 6 discuss the findings in the context of power relations and historical and current injustices related to land rights and competing land uses due to the airport project. In Section 7, we conclude by highlighting the need to explore larger issues of reconciling community interests with sustainability and sustainable development concerns.

3 In this article, we have chosen to use the term "Second International Airport" and its abbreviation SIA, which is commonly found in official government communication about the project. The term "Nijgadh International Airport" (NIA) also appears in the media, civil society, and some government agencies. However, its use is contentious, in part because the project area falls not within the municipality of Nijgadh but in the neighboring municipality of Kolhabi.

Twenty-five years in the making
Although often framed as being the latest mega-development initiative in Nepal, plans to build SIA have existed for nearly three decades (see Figure 1 for project map). By the 1980s, the GoN mentioned plans for a second international airport (Gautam, 2020). Discussions gained momentum in 1992, triggered by two catastrophic air crashes in Kathmandu (K. D. Bhattarai, 2019a). In 1995, the government commissioned the consulting firm Nepal Engineering Consulting Services Centre Limited (NEPECON) to conduct a prefeasibility study of eight sites, in the course of which an area west of Nijgadh was identified as the most suitable (Gautam, 2020; Lal, 2019). However, the decade-long Maoist insurgency from 1996 onwards brought the project to a standstill (Gautam, 2020). It was not until 2010 that the project-implementing agency, the Ministry of Culture, Tourism and Civil Aviation (MoCTCA), commissioned the Korean company LandMark Worldwide (LMW) to conduct a detailed feasibility study (DFS) (Shah, 2019). MoCTCA, however, never received the complete DFS, only a summary, as the government never paid for the report in full (Lal, 2019). After plans to build SIA in cooperation with LMW as well as with the later applicant Flughafen Zürich AG fell through, MoCTCA declared that it would develop the project on its own (Prasain, 2020b, 2021; K. D. Shrestha, 2020).
In May 2021, however, when the state budget for the upcoming fiscal year was announced, funds were only earmarked for completing site preparation, leaving the financing of the airport project uncertain (Sunuwar, 2021). In a separate process, the Nepali engineering firm GEOCE Consultants (P) Ltd. was awarded the contract to deliver a project EIA in 2016, and they submitted it in March 2018 (Shah, 2019). Two months later, the report was approved by the Ministry of Forests and Environment (MoFE). Interestingly, the Civil Aviation Authority of Nepal (CAAN) had already signed a Memorandum of Understanding with the Nepal Army on behalf of MoCTCA in September 2017, long before the EIA was approved, which allowed the Army to cut down trees for building roads on the proposed construction site (Golf, 2017). The EIA, which was only published after environmental journalists filed a request in 2018, revealed that up to 2.4 million pole-sized and mature trees worth US$ 629 million as timber would have to be cut to clear the project area (A. Dhakal, 2018;Shah, 2019). This figure has since been corrected by officials to 328,904 trees in the first project phase (Shah, 2019). Project authorities further claim that deforestation can be offset by planting 25 saplings for each cut tree, but are yet to propose a suitable area (Lal, 2019). Deforestation could have serious impacts on local biodiversity (Chernaik, 2019;S. Shrestha, 2018). The area is home to over 500 different bird species, 23 endangered flora and 22 endangered wildlife species (Lal 2019;S. Shrestha 2018). The forest lies within the Terai Arc Landscape and is connected to the Parsa and Chitwan National Parks. Together they form important wildlife migration corridors (NEFEJ, 2019). Thakur (2015) concluded that the area around Tangiya Basti, i.e., the airport project area, falls within three common elephant migration routes and is a highly suitable elephant habitat. The interruption of current migration routes is expected to increase the risk of human-wildlife conflicts in this area (NEFEJ, 2019). As the forest serves as a water reservoir, deforestation is also expected to deplete groundwater during the dry season, threatening drinking water supplies and irrigation for thousands of people, as well as leading to severe flooding during the monsoon season (Lal, 2019;NEFEJ, 2019). In addition, the project is expected to cause noise and air pollution and to threaten two collaborative forests adjacent to the project site, Tamagadhi and Sahajanath, whose user groups have 37,000 beneficiaries that depend on forest resources for their livelihoods (NEFEJ, 2019;Shah, 2019). The release of the EIA was followed by criticism from various groups, including forest experts, lawyers, environmental activists, and journalists. In addition to the ecological impacts, many questioned the excessive size of the proposed airport, arguing that other international aviation hubs are significantly smaller than the allocated 8,046 hectares 4 (B. Bhattarai, 2020;S. Shrestha, 2018). Opponents further alleged that the selection of Nijgadh was influenced by political agendas and that alternative sites were not adequately explored (S. Shrestha, 2018). The EIA itself also came under criticism for containing misleading information after parts of it were found to have been copied directly from an earlier EIA produced for a hydropower project 5 (Gautam, 2020;Mandal, 2020). 
Building on this criticism, two Public Interest Litigations (PILs) were filed in 2019 against MoCTCA, MoFE and the Prime Minister's Office, among others, by groups demanding the review of alternative sites and the preparation of a new EIA (Awale, 2020; T. R. Pradhan, 2019). At the same time, the petitioners and other environmental activist groups organized protests across Kathmandu and raised awareness on social media (Bachmann, 2019). In response to the petitions, on 6 December 2019 the Supreme Court of Nepal ordered a suspension of project activities until the ambiguities regarding the project's impacts are resolved (T. R. Pradhan, 2019). After extending the stay order on 22 December 2019, Chief Justice C.S. Rana stated: "The court should be equally responsible toward the earth, trees, aquatic animals and birds" ("CJ Rana", 2020; Prasain, 2020a). To date, the stay order has not been lifted, in part because the assigned Chief Justice Rana came under criticism for alleged corruption in 2021, which in turn led to fellow judges refusing to serve on his bench (Ghimire, 2022). Irrespective of the absence of a final decision by the Supreme Court, newly appointed Prime Minister Sher Bahadur Deuba presented an updated overall plan for the project on January 6, 2022, stressing that construction should not be delayed any further ("Construction of Nijgadh Airport should not be delayed", 2022).

Figure 1: Information board next to East-West Highway, outlining the airport project area. Source: Authors, 2020.

4 The New Bangkok International Airport (Suvarnabhumi), Indira Gandhi International Airport in Delhi and the Singapore Changi Airport cover areas of 3,240 ha, 2,066 ha and 1,300 ha respectively (S. Shrestha, 2018).
5 Chapter 7.3 of the EIA states: "Nepal has accorded high priority for the development of hydro-electricity project(s) (...). The project will generate environment-friendly clean energy and will contribute to the socio-economic development of the country" (GEOCE, 2018, p. 7.2).

Justice and three-dimensional power
Environmental justice and political ecology serve as the theoretical umbrella of this study, building on the growing body of literature linking justice and political ecology concepts (Domènech et al., 2013; Gonzalez, 2019). Svarstad and Benjaminsen (2020, p. 2) noted that there is potential for synergy especially in the notions of justice and power, due to "a lack of specification of various conceptions of 'power' in [environmental justice] literature." Thus, combining the two concepts to understand what injustices exist and examine what kinds of power manifest them seems a promising framework for analyzing the Nijgadh conflict. We have chosen a three-dimensional environmental justice framework that includes aspects of distribution, recognition, and participation, inspired by Schlosberg (2004, 2007, 2013) and Fraser (1998, 2000, 2005), among others. Although the framework is commonly used to examine communities' claims to nature and natural resources, we believe that its grounding in social justice theory also makes it suitable for examining other justice claims put forward by communities. First, a distributive justice perspective that focuses on how burdens and benefits are distributed in socio-environmental interventions provides a valuable starting point for examining how local communities' livelihoods and access to land and facilities have been affected by the airport project.
Following Schlosberg and others, however, we do not understand distributive injustice in isolation from misrecognition and existing power asymmetries. Second, justice as recognition provides a key lens for understanding patterns of marginalization of local communities through various institutional, social, and cultural channels. Following Svarstad and Benjaminsen (2020, p. 4), we complement recognition theory with concepts of senses of justice, defined as "ways in which affected people subjectively perceive, evaluate and narrate an issue", and critical knowledge production, which concerns access to independent information to "facilitate the expression of subaltern voices" and explore competing justice claims of affected communities, even beyond pure environmental justice claims. Third, procedural justice offers a helpful perspective, not only on how affected communities are involved in the airport project decision-making process, but it also provides insight into how they practice participation within their communities, essentially examining internal notions of power. We understand these justice dimensions in the context of persisting power asymmetries, which we analyze using Svarstad et al.'s (2018) three-part conceptualizations of power in political ecology, encompassing actor-oriented, neo-Marxist, and poststructuralist conceptions of power. The authors put emphasis on the synergistic potential between the three to understand how power affects human-environment interactions in different ways and at different scales. In our case, they provide valuable starting points for unraveling the complexity of the SIA case, and understanding how the injustices experienced by affected communities are situated within political and sociocultural manifestations of power. Study site and methods The research was designed as an empirical single case study. We use the specific case of community livelihood conflicts surrounding the airport project to illustrate broader trends in development and modernization in Nepal. Most of the empirical evidence was gathered during two months of fieldwork in Nepal. Situating our study within social constructivism, we acknowledge that our positioning as white, European female researchers influenced the research process as well as our interpretation of the data. Although one of the authors speaks Nepali and both are deeply familiar with diverse aspects of Nepali society, thereby reducing socio-cultural and lingual barriers, we do not claim to establish absolute truths. Rather, we have intended to present a dense and authentic account of the complex, intersecting, and diverse realities we found in the research situation. Our study sites included Kathmandu and areas in and near the SIA project area in Bara district, which spans 8,046 ha and is delimited by the rivers Lal Bakaiya to the east and Pasaha to the west, the East-West-Highway to the north and a fence to the south (GEOCE, 2018). Some 94.3% is forestland, while the three settlements within the project boundaries cover an area of about 500 ha (Shah, 2019). Tangiya Basti (TB) is the largest settlement, see (A) in Figure 2, and has grown to 1,476 households since its establishment in the 1970s (Dhungana, 2019b). Most people are migrants from the hills and belong to the Janajati 6 ethnolinguistic groups of Tamang and Magar; other ethnic and caste groups are Dalit, Bahun, Chettri and Newar (Dhungana, 2019a). None of the residents have land ownership documents (Dhungana 2019a). 
The second largest settlement is Kathgaun (KG, see (B)), with people of various ethnicities and castes including Tharu, an indigenous group from the Terai. Of the 132 households, only 16 never owned land. Recently, however, the government acquired most of the privately-owned land through compensatory payments, leaving many without legal documents (Sah, 2019). The smallest and youngest settlement is Matiyani Tol (MT, see (C)). Dalit families from other parts of the Terai began settling in the early 2000s. To date, all 40 households are landless (Shah, 2019). As befits a case study, we used multiple sources of evidence to account for the complexity of the case and to facilitate convergence through triangulation (Yin, 2009). We collected primary data through observations and semi-structured interviews with residents from the three directly affected settlements (TB, KG, MT). To gain a deeper insight into local dynamics, we decided to focus on Tangiya Basti, where we conducted most of our interviews with residents, and to visit Kathgaun and Matiyani Tol for only one group interview each. We also interviewed local politicians and project officials, and conducted expert interviews with Kathmandu-based activists, lawyers, and environmental scientists with knowledge of the case. We also drew on secondary data in the form of project documents, legal and policy documents, and media reports. After transcribing and translating the interviews, we coded them to identify themes and patterns. This was done deductively and inductively, as our analysis was informed by theories of justice and power, but we also sought to identify new perspectives to capture the lived experiences of the respondents.

6 Heterogeneous groups with their own culture, language, religion, and customs, which make up 35.6 percent of Nepal's population and do not fall into any of the categories of the Hindu Varna system (Jha, 2019).

Past and present injustices of recognition
Our study of injustices faced by residents of the three affected settlements is based on the understanding that community-level injustices are embedded in wider historical and present framings of land rights, land ownership, community, and access to basic facilities in Nepal. Recognizing that there is a link between community-level injustices and national-level development (bikas) narratives, we begin our analysis by situating our case within broader discourses of development and modernization in Nepal. The longing for bikas has shaped the idea of the Nepali nation state for years (Kramer, 2008; Pigg, 1992). While many consider the early 1950s, marked by the fall of the Rana dynasty and the emergence of foreign-funded aid initiatives, as the beginning of modern development ideologies in Nepal, the rhetoric of prosperity and progress has been prevalent since Nepal was unified under the Shah dynasty in the 18th century (Paudel, 2016; Pigg, 1992). As scholars have noted, development in Nepal is more than common concepts of 'empowerment' and 'modernization' (Kramer, 2008; Pigg, 1992; Saxer, 2013). Rather, it is "the overt link between [Nepal] and the West"; a national vision of hegemonic superiority, deeply entrenched in the imagination of society and omnipresent in radio, TV, and schoolbooks (Ahearn, 2004; Pigg, 1992, p. 497; Sharma, 2002). This has created a dichotomy between what is desired and what is not: urban vs. rural, educated vs. illiterate, the West vs. present-day Nepal (Rest, 2012).
As Pigg states, "where there is a push for progress through development, there is the creation of a state of backwardness" (1993, p. 46). Although the understanding of bikas is not homogenous, it is often expressed in material terms, as tangible facilities (suvidha) (Murton & Lord, 2020; Nightingale, 2017; Pigg, 1992). Influenced by Western interpretations of development as technological progress, industrialization, and growth (Escobar, 1995), infrastructure expansion for improved connectivity and economic activity is a major pillar of Nepal's "development dream" (Campbell, 2010; Murton & Lord, 2020, p. 3). In the National long-term goals of the 15th Five-Year Plan, which outline the path to the GoN's vision "Prosperous Nepal, Happy Nepali" 7, "Accessible modern infrastructure and intensive connectivity" is listed as a key goal for prosperity (National Planning Commission, 2020, p. 19). More specifically, the plan includes ten large transport infrastructure projects as part of twenty-two 'National Pride Projects'; SIA is one of these. While there is a need for improved mobility in Nepal, research shows that large-scale infrastructure often leads to the production or consolidation of uneven landscapes, e.g., conceptualized as power corridors, leaving hopes for improved livelihoods unfulfilled for many (Campbell, 2010; Murton & Lord, 2020). The following sections take a closer look at how uneven landscapes have been produced in the case of SIA and how communities and their desire for development have been compromised in the name of national prosperity.

A lack of land ownership
Tangiya Basti was established in 1974-75 and has since grown to 1,476 households with around 7,500 inhabitants. While basti translates to 'settlement' in Nepali, tangiya points to the origins of the village as it is derived from taungya, a Burmese term (taung = hill and ya = cultivation) that describes a type of shifting cultivation in agroforestry (Bhusal, 2010). Sources mention that the taungya system was introduced in Nepal in 1972-74 in the Tamagadhi area near Nijgadh, suggesting that the founders of Tangiya Basti were some of the first in Nepal to apply it (Adhikari & Poudel, 2018; Gahatraj, 2017). In taungya, agricultural crops were grown along with tree saplings, mainly Dalbergia sissoo and teak, on degraded state-owned forestlands (Amatya, 2018). After three to five years, when the tree canopy began to cast excessive shade, cultivation was shifted to another, nearby location (Ndomba et al., 2015). As one villager in TB remembers: "We came here to do tree plantation […] We grew sisau trees. For example, we sowed maize and mustard, we ate that. And when the trees grew, we moved." The system was a cheap and effective way to manage forests while providing a livelihood for the planters, who were mainly victims of natural disasters and landless peasants from hill districts (Bhusal, 2010; Ojha, 1983). The emergence of multi-party democracy in 1990 ultimately led to changes in Nepali forest policy and the abolition of taungya (Ninglekhu, 2020; Wagley & Ojha, 2002). Despite initial promises by authorities, planters who had lived in huts around the forest until then received neither land ownership certificates (lalpurja) nor alternative land plots. In the absence of any government interference or support, they began to establish a permanent settlement in Tangiya Basti (personal communication; GEOCE, 2018).
A villager reminisced: Our forefathers were told, "You will be employed as laborers to raise saplings and farm the land […], and once the government stops employing this system […], we will arrange land for you permanently." […] But once we had democracy, the government didn't move us anywhere, nor did they give us land ownership. Since then, we've been here. Later, during the 10-year-long Civil War (1996-2006), the TB forest was a hiding spot for armed Maoist rebels who were controlling large parts of rural Nepal at the time. The Maoists taught the villagers to build more durable structures using timber from the forest. During this period, the TB population increased considerably. Two years after the end of the war, the Maoists won the 2008 assembly elections. Officials soon imposed a permanent ban on taking timber from the TB forest, but still no arrangements for permanent tenure were made. The denial of formal land rights to the TB population appears to be the first and possibly most significant case of their nonrecognition, long before the airport project was launched, and it had direct distributive implications. In Nepal, land ownership is an important determinant of wealth, power, and social status, or as Dhakal (2011, p. 1) states: "Land is probably the most important asset in the rural-agrarian economy." A land certificate is the key to access to services and socio-economic security (Wickeri, 2011), and a lack thereof renders people marginalized and unable to carry out simple procedures, as one shopkeeper in TB explains: "It is difficult to do anything here because we don't have permanent residence. […] I cannot register for any business because this place is a temporary residence." People in TB are not only administratively marginalized; they also face misrecognition, as many see them as illegal encroachers of the land they have inhabited since the 1970s. In an informal conversation with a staff member at the SIA Project Office in Simara, after we explained that we were investigating the impact of the airport on local communities, he replied: "Are they affected or are they the effect?", implying that they have been exploiting forest resources. Similarly, one of our interviewees in Kathmandu from an ecologically motivated airport opposition group stated: These reactions not only disregard that the Nepali leadership was instrumental in the encroachment of the forest under the taungya program and the founding of TB through promises of permanent settlement and later neglect, but they also show how the residents are socially stigmatized, as one TB resident expressed: "They discriminate against us as if we were outsiders. As if we came of our own accord, but we did not do that. The government brought us here. King Birendra was the ruler back then."

A lack of recognition as a community
While the misrecognition of local communities began before the launch of the airport project, SIA has been an essential part of TB's history. Many respondents had heard about plans to build an airport over two decades ago. Some remembered that officials came to the village in 1995 as part of NEPECON's pre-feasibility study of potential airport sites (see Section 2): "They dug some soil, but nothing was fixed. […] They came here to check if the land is okay to build the airport."
After the Civil War had halted the project, a delegation led by the former Minister of Culture, Tourism and Civil Aviation, Prithvi Subba Gurung, returned to TB in 2008 to announce SIA's resumption. This was the first time that the government had provided residents with project information and inquired about their opinions. Yet the meeting was also used to underline the finality of the project, making the enquiry into the community's concerns appear to be mere tokenism, as one TB resident remembered: "He came with five ministers and requested us to not hinder the development process. […] He told us that the airport will be built and that it has already been announced nation-wide." The exclusion of residents from the early SIA project decision-making process is not just an injustice in itself, as defined by Young (2002); it also illustrates the interplay of procedural, recognitional and distributive justice dimensions. Denying the locals the opportunity to "participate on a par" in the project planning is an expression of their nonrecognition and social subordination, with implications for their distributive equity (Fraser, 2000, p. 113). Their socio-economic status as a landless migrant population undermines them as equal rights holders in Nepali society 10, showing the circularity of recognition and participation: "If you are not recognized you do not participate; if you do not participate, you are not recognized" (Schlosberg, 2007, p. 26). Young (2002, p. 34) extends this to democracy and justice, where one is an element but also a condition of the other, and argues that where structural injustices exist, "formally democratic procedures are likely to reinforce them."

A lack of access to basic facilities
Due to its informal status, the people of TB have been largely excluded from rural development initiatives. Thus, many residents actually reacted positively after the airport plans surfaced in 1995, hoping to finally gain access to basic facilities, such as electricity. But instead, SIA soon proved to be the main reason why basic services were not made available for TB, as a member of the TB concern committee explained:

After the state decided to build the airport, everything was stopped […]. We need a [electricity] line, roads, irrigation, health services, drinking water. We have been asking for these since the beginning. But the government has ignored us, saying that since an airport is going to be built, why should the state spend money?

This reasoning was confirmed by a SIA project official in Simara: "We cannot build the airport without removing the village. Knowing this, investing in facilities for the village is a waste. […] The focus should be on faster resettlement rather than providing facilities." Similarly, a local politician stated: "We know that the people in Tangiya Basti need facilities, be it water for farming, electricity or shelter, but we cannot ignore that this is double expenses since they have to be resettled." While these responses appear rational and solution-oriented, they neglect the temporal scale of the struggle, as TB residents were advocating for access to facilities long before the airport gained public momentum. Unlike nearby villages, TB is still not connected to any major electricity grid, and residents rely on 60 to 100-Watt solar panels for basic lighting and phone charging (Figure 3).
Drinking water is supplied through around 50 taps that were installed with government support in the early 1990s, but according to residents, it is of poor quality and fetching it is strenuous. Similarly, education services have been stagnating due to the government's apathy, as a villager explained: "There are only three schools. One runs up to grade 8, the other is up to 5, and one is a boarding school. […] we could run +2 levels 11, but the government doesn't allow us to progress." In the absence of adequate support, TB residents have taken the initiative to build infrastructure themselves, often with financial support from relatives working abroad. A striking example of such collective action is the village irrigation system, see Figure 4, which has enabled people to improve their livelihoods through the commercial cultivation of bananas and tobacco, among others: "We have built all the irrigation facilities with our own money. We created groups of 15 to 20 households, and one household would invest 40,000 to 50,000 rupees." (US$ 330-415) However, while the government has long stood idly by and initially even encouraged community-driven initiatives in TB 12, it has increasingly hindered local efforts: "They don't even allow renovating a collapsing house and tell us to stop building, because the land has already been taken by the Ministry of Tourism." Given the continued stagnation, many respondents wished for a timely end to their limbo, with or without the airport: "It is only a loss if we remain in this state of pendulum", one said. This contrasts with the interest of airport critics in Kathmandu, who are hoping for a further delay of the project, as one activist explained: "For us it is good. The more we can wait, the more discouraged investors will be, the more attention this project will get." From our initial observations, TB appeared to be a quaint agrarian village. But what at first glance seemed picturesque is a reflection of the harsh reality of life for the residents, where community development has been hindered in the name of national development. In fact, TB's infrastructural deprivation illustrates that the impact of the SIA project began to unfold over two decades ago. The state of uncertainty has shaped the history of TB, leading to limited access to education, communication, and secure shelter. Once again, the nonrecognition of the villagers and their needs had direct distributive consequences, leading to their continued marginalization.

11 +2 level includes Grade 11-12 and is completed with the Higher Secondary Certificate (required for university admission).
12 For example, through the rural development scheme 'Build Your Village Yourself' under Prime Minister Adhikari in 1994-95.

A lack of consultation and access to independent knowledge
After 25 years of planning the airport, fast forward to 2020. According to media reports and a SIA project official, the preparatory work preceding the first project phase is nearing completion (Rai, 2019). One of the most visible results is a fence (Figure 5), consisting of concrete pillars and wire, that encloses the three settlements, Kathghat Temple and access roads, and also marks the southern border of the project site. As the fence's purpose was not clear to us, we asked our respondents about it, only to find out that they too were not sure, as officials had never informed them: "We don't know. They said it's for the airport. Then again, they have to destroy the fences once they start to build the airport."
One respondent said she had to ask the authorities to cut passages for households that still do not have a toilet (and are reluctant to invest in one because they may be relocated) and defecate in the forest. The fence exemplifies the sluggish flow of information concerning the airport project and the insufficient consultation with residents, which could have prevented the toilet issue, for example. Although the EIA (GEOCE, 2018, p. 2.7) states that "Public Consultation was sought at different stages of EIA report preparation", the only formal consultation that our respondents mentioned was a public hearing in Simara on 31 August 2017. Its purpose was "to inform the local people about the environmental implication of the projects and to collect the opinions, suggestions and recommendations from the local institutions, local bodies and people" (GEOCE, 2018, p. 2.7). Nevertheless, the choice of the location around 25 km from TB suggests that the hearing was not intended to reach the entire population but was mainly aimed at community leaders and village elites. In fact, most interactions with authorities seem to take place through the local Tangiya Basti Concern Committee (TBSS), which acts as a representative body of all three settlements on SIA issues. This is not to say that people of TB have no contact with government members. Many responded that officials frequently visit to speak about SIA or, as one villager put it, "I don't think there is a minister who has not come here yet." On 17 November 2019, the current Tourism Minister Yogesh Bhattarai made a stopover in TB, during which he announced the completion of SIA within five years and promised the villagers a swift solution for their resettlement: "We believed him, thought he was going to build it then and there. But nothing has happened till now." Despite growing disillusionment, statements like Bhattarai's still have an undeniable effect on people: "When a person like that tells us something, we are born to believe them." Several respondents said that the presumed interest of politicians in TB is mainly due to its value as a vote bank 13: "During election time […] they run from house to house begging for votes." But despite this awareness, limited access to alternative sources makes it difficult to evaluate the information provided by officials: "We only repeat what we hear." This dependence on information produced and reproduced by dominant actors such as the government is considered an injustice in Svarstad and Benjaminsen's (2020) concept of critical knowledge production. The ability of marginalized groups to access independent knowledge about the project, its impacts, and actors is understood as a crucial justice dimension in environmental interventions. Many responded that they seek information about the project through radio, newspapers, social media or the TBSS (Tangiya Basti Concern Committee). However, print media have limited importance because, firstly, they do not report sufficiently on issues relevant to the communities and, secondly, they mainly reproduce statements by dominant actors. The TBSS, on the other hand, appears to be a powerful source of information, because several of its members stated that they have personal contacts in government offices and political parties who provide them with updates on SIA: We contact our friends in Kathmandu. […] Some of them are leaders who are close to the politicians.
[…] that's how we know what's going on […] We don't get informed by the Simara [SIA project] office, we usually get the information from our friends. The TBSS was established in 2008 in response to the resumption of SIA and has since had three chairpersons, including the current one, a Bahun 14 and a former teacher. (Footnote 14: Nepali for Brahmin, i.e., highest caste in the Hindu varna system.) In March 2020, the TBSS had 17 members, male and female, from almost every tol 15 of TB. (Footnote 15: Nepali for town quarter, square or junction.) People from KG and MT also occasionally take part in TBSS meetings. Although the TBSS effectively gathers information and channels the villagers' interests as a seemingly unanimous voice to appeal to the authorities, it also acts as a 'filter' through which information must flow before it reaches the rest of the community, which again restricts access to truly independent information, as advocated by Svarstad and Benjaminsen (2020). Moreover, it is striking that in a village with less than 15% upper-caste households, the first and current committee chairmen are Bahun. This suggests that caste is still a strong determinant of the socio-political power distribution in the community. Historically regarded as the ruling caste, Bahuns to date have the highest literacy rates and civil service representation in Nepal (Malla, 2018). Nightingale (2005, 2010) shows that the perceived intellectual superiority of high castes continues to lead to social stratification at community level, partly maintained by Bahuns themselves and partly through complex processes of internalized subalternity of lower castes. One villager, a Tamang, justified the Bahun leadership by saying: "He's speaking for this place, for us and no one is as intelligent as him. […] He is a son of Bahuns and is like no other." With regards to the TBSS election process, another Tamang villager said: "We had a meeting, and we chose the people who could talk well." In the above statements, caste and perceived intellect are used to justify the Bahun leadership of the TBSS, which in turn acts as the community's leading voice. The perceived superiority of educated (high-caste) actors comes with the perceived inferiority of less educated (low-caste) actors and an internalized notion of backwardness. When we asked a villager, a Magar, how aware TB residents were about the airport project and its potential impacts on their livelihoods, he responded: "People are not really interested in the airport. This is a Matwali 16 settlement. […] Matwalis are straight people. They don't show much interest in these things." Again, caste is directly linked to behavior and intellect. Seven out of 13 interviewees in TB described their community as uneducated. The subordination extends further to the relation with the authorities: "We aren't as educated as [the project authorities]" and "We can only listen; we don't really say anything." This notion of inferiority reflects the hegemony of expert knowledge in Nepal, explored by Nightingale (2005), and is similar to Rest's findings from Arun Valley, where people have been living in uncertainty over a hydropower project for over 25 years: After a conversation with a woman who described herself and her people as "not educated" and without "enough brainpower", Rest reflected, This conversation is paradigmatic for many encounters I had […].
Not only for the evident lack of information about the project, but also for putting the blame about this on themselves: as if someone had told her the whole thing but she had just been too ignorant to understand (2014, p. 143). We argue that people's acceptance of their perceived incapacity should not be confused with indifference towards the project and its impacts on their existence. Rather, it reflects the manifestation of caste and class in Nepali society, where the 'subaltern' have been deprived of any power to influence mainstream discourses. As Ninglekhu (2020) concludes in a magazine article about the three settlements: "The 'subaltern' can speak, but not in a language that the 'mainstream' has ever attempted to learn and understand." It is precisely for the purpose of decoding this 'language' that Svarstad and Benjaminsen (2020, p. 5) emphasize a "senses of justice" approach as a crucial element of recognition in order to "gain access to 'hidden transcripts'" and amplify subaltern voices, as we elaborate in the next sub-chapter. A lack of benefits from development One aspect of Svarstad and Benjaminsen's 'senses of justice' is the notion of 'sense of place' (Barron, 2017;McKittrick, 2011), which describes the attachment that residents have to their area. This connection also became apparent among people in TB, despite the frustration over continued lack of access to basic facilities: "We were born on this land, raised on this land" and "When I travel elsewhere, I feel suffocated. I'm raring to get back home, because we have trees, shade, and it's so relaxing to sit under the tree in the summer." Surrounded by dense forest and clear streams, TB is indeed exceptionally scenic, but as described earlier, its unspoiled character also represents its exclusion from the perceived benefits of development that occurred in nearby areas, as a villager pointed out: "The people downhill, who came 20 years after us, their place is more developed than ours." For many respondents, SIA promises an end to this limbo, and the abandonment of their homeland is a sacrifice worth making: "We won't get any development here, that's why we're saying the airport should be built. […] We'll be relocated to another spot, and get what we need, right? Our access to development will no longer be blocked, right?" However, while several respondents expressed similar hopes that come with resettlement, some conceded that ultimately the government and people in nearby peri-urban areas would benefit far more from the airport: "The landowners, people who can open up companies and hotels, will benefit. There's never any real benefit for people [like us] who have to work daily to feed ourselves." In fact, landowners and estate agents in Nijgadh have already started to reap the benefits of the "fictitious commodity" SIA, as land values have risen rapidly since the project was resumed (Ninglekhu, 2020). Price increases of 2000% in just a few years were reported, starting at around 50 US$ per sq. m on the outskirts of Nijgadh and more than ten times that along the main road. 
Enabled by neoliberal land policies and fueled by promises about the economic potential of SIA, land in Nijgadh, similar to the plots in Figure 6, has become a precious commodity among members of the affluent middle and upper-class from Kathmandu (Ninglekhu, 2020); a trend we ourselves observed during our stay in a hotel in Nijgadh, where we saw new groups arriving from Kathmandu every day and overheard many conversations about land prices, sizes and locations. In an attempt to secure their basic livelihoods and to not be excluded yet again from the dream of bikas, the inhabitants of TB under the leadership of the Tangiya Basti Concern Committee (2017) have formulated seven demands with regards to their resettlement, listed in Figure 7. The press release of 29 July 2017 stressed that "land must be made available to the locals in the form of redistribution and not in the form of financial compensation." This compensatory land should be in close proximity of the airport project site (demand B). Respondents justified this with the hope for employment opportunities and increased economic activitya hope that appears greater than the fear of noise pollution. The project authorities have not formally responded to date, although they repeatedly assured that arrangements will be made: "they told us that they won't make us cry and they would manage and provide facilities. But it isn't in written form." So, the overshadowing state of uncertainty about what the future will look like remains as described by an activist in Kathmandu: They'll probably get better schools. But their living conditions will be completely destroyed. They have been living within nature where everything is so surreal and clean. […] They have no idea where they'll be relocated. Whether everyone will be in one place or whether they will be kept in different areas. The authorities' techno-bureaucratic top-down attitude, once again, reflects the nonrecognition of the communities, of their sense of place, and their socio-cultural institutions. Furthermore, also in view of other development-induced displacement processes in Nepal (Domènech et al., 2013;Rest, 2014), it highlights the urgent need for more deliberative approaches to governance at the community level (Banjade & Ojha, 2011;Cameron & Ojha, 2007). A lack of representation in the media Fraser (2000) and Schlosberg (2004) agree that the nonrecognition of communities in the course of socio-environmental interventions leads to their continued marginalization, also with regards to distribution and participation. Here, it is not only project stakeholders who may fail to recognize communities, but also discourse-shaping actors like the media. Initially, we struggled to find information about the affected settlements in the media; and even on social media, we noticed the predominance of content focusing on the ecological impacts of the project. This made us wonder whether this is an indication of broader discursive patterns in the reporting of the SIA. When inquiring in TB about media coverage, some indicated that they felt misrepresented or ignored. One respondent stated, "they only talk about the trees", while another villager was particularly frustrated by the incomplete reporting: They only talk about Nijgadh Airport. Sometimes they mention the village but never in the headlines. There are actually 1,476 houses here […] but the newspapers always report it carelessly […] Some say there are only 200 houses. […] They label us as slum dwellers. 
[…] If they want to publish correct information they have to come here. The reporters go to Nijgadh because they think that's where the airport is built. That's why no truth comes out. Overall, our analysis shows that the affected communities only occupy a peripheral place in the English language media coverage of the airport project. Apart from a few exceptions 17 , news agencies rarely address social impacts of the project, seldom bring local voices to the fore, and don't provide information about the historical and cultural background of the communities. This reflects the nonrecognition of the communities and their struggle, and contributes to their invisibility. Combined with the focus on environmental impacts of the project, this may further divert major debates from social justice issues and bury a more nuanced understanding of the conflict under the dichotomy of 'building the airport' vs. 'saving the forest', thus shaping the way discourses on SIA are produced. Furthermore, with regard to Svarstad and Benjaminsen's (2020) critical knowledge production, the media become an unsuitable source of information for the residents, which increases the dependence on private internal or external sources of information, with its own biases and interests. Structural (Neo-Marxist) power The lack of recognition of the communities concerned and their stakes in the airport project is a central theme of our analysis and is representative of their nonrecognition as equal members of Nepali society. As predominantly landless Janajatis, they are at the bottom of Nepal's societal power pyramid, shaped by complex caste and class relations, as land is still a key determinant of "wealth, power and social prestige" in Nepal (Biswakarma, 2018, p. 52). In this "social monopoly of land", "the less land the poor own, the more dependent they become on those who control it" (N. R. Shrestha & Conway, 1996, p. 321), or as more generally expressed by Svarstad et al. (2018, p. 354), "structure generates the potential and limits for the exertion of power." The lack of adequate social and environmental regulations to assure the common good and land rights illustrates the distinct power asymmetries between state and landless groups from a class perspective. With no or very small land holdings in their hilly homeland, TB's founders depended on the government's goodwill to let them secure their livelihoods as taungya tree planters and have since lived in fear of looming displacement. With the continued denial of land rights and the ban on further conversion of forest into farmland, the socioeconomic condition of the communities has stagnated for decades. While, with the revival of the SIA project, surrounding communities have profited from the speculative land market and increasing land values, the landless residents again depend on the government's goodwill to relocate them as they don't have the means to do so themselves. One villager expressed: "We're ready to die. We have nowhere to go. It's better to die." Actor-oriented power Bounded by structural forces, the communities' existential struggle for recognition and social status is also shaped by the exertion of power by specific actors and decision-making processes (Svarstad et al., 2018). One example is the persistent denial of basic facilities. 
In a process of convoluted negotiations over the access to electricity, Bara's Chief District Officer (CDO) ultimately used his 'power resources' to deny the settlements access to the regional grid, thus siding with SIA project managers. Another example is the decision-making process on the resettlement locations. While the TBSS leaders were shown several locations in November 2019, the SIA project manager, whom we interviewed about possible resettlement sites, was clear that the decision would be taken top-down: "We only followed orders from above. We send them suggestions and data. Decisions must be data-driven. These are all internal matters." Political ecologist often see resistance of local communities as a counterforce to political or corporate actors. In this case, however, the resistance of the residents is limited by their fragile dwelling situation. Although many seemed determined to fight for their demands -"We will protest against [the government]"the government holds its leverage through the threat of forced eviction. Furthermore, the communities lack external support and allies who could advocate their demands without fear of retaliation. Our analysis shows that this is partly because the airport opposition largely builds on conservationist narratives that frame the villagers as encroachers and it promotes a 'fortress conservation' approach, "where urban elites call for the enclosure of lands long used and occupied by […] local people, all in the name of protection" (Peet et al., 2010, p. 27). As one activist admitted: Even if the airport doesn't get made there, I think the [TB] people irrespectively will get relocated. […] Because even if you want to turn this into a protected area, there shouldn't be, there cannot be villages […] And it's really sad for them but that's how it is. The same activist explained that while she had initially planned to call attention to the villagers' situation, she realized that because of the communities' landless status, it would be extremely difficult to define actual rights violations: "As evident as it is, […] Tangiya Basti people, their condition, it won't hold for our case." Ultimately, both PILs against SIA are based almost entirely on environmental concerns. Post-structuralist power: Discursive power Finally, the production and circulation of powerful narratives strengthen structures and legitimize actors' decisions. In the present case, several discourses are shaping the struggle of local communities. The government has repeatedly presented the SIA project as a guaranteed development path, although its feasibility remains questionable because, as far as is known, no detailed market analysis has been conducted. Here, typical for developmentalism in the Nepali context, an infrastructure project is directly linked to national prosperity; SIA as a 'National Pride Project.' With the desire for bikas permeating all levels of society, questioning a project like SIA can be seen as 'anti-development' and therefore 'anti-national.' This was also evident in TB, when respondents insisted that they knew of the national benefits the project would bring: "It's a work of development. The country will have an income from it, [...] because [of that] the airport should be built." Although they want their demands met through a type of governmentality, many expressed the need to make personal sacrifices in the name of development: "We love our nation. 
If the airport project is a success, it means that our village will be destroyed", and "We have to lose something to gain something." Linking development to patriotism further weakens the communities' power to resist and their struggle for justice. Not only are they confronted with government retaliation for their protest, but they also risk being portrayed as 'antinationalists' which further deteriorates their already vulnerable position in Nepali society. Since most villagers are not indigenous to the area 18 , their struggle for livelihoods is further delegitimized by portraying them as 'encroachers' and questioning their claim to the land they inhabit, as previously illustrated. Only 50 km north of Nijgadh, the resistance of an indigenous Newari community to the Fast Track Road Project 19 was much stronger, as evidenced by extensive media coverage, since their struggle was based on their cultural-historical connection to their land and the preservation of Newari culture as a national good (Manandhar, 2018;Subedi, 2019). The narrative of 'illegal encroachment', causing the degradation of forest resources, also unites pro and anti-airport groups. As Shrestha and Conway (1996, p. 315) airport-opposition-led conservationist narratives illustrates the discursive power asymmetries towards local communities, rendering them invisible. It also reveals a tension between environmental and social justice struggles, where claims to nature are pitted against community interests, impeding a nuanced discussion of either. Conclusion We have aimed to provide insights into the complex forces that shape the everyday struggles for justice and livelihoods of communities affected by the construction of Nepal's Second International Airport. We show how the misrecognition of residents' interests and lived realities, particularly in Tangiya Basti, is closely intertwined with past and present distributive and procedural injustices and reinforced by power asymmetries of various types and scales. We argue that the villagers' social subordination was manifested with the failure of the government to grant land rights, long before the airport project was launched. Their status as landless, predominantly Janajati, migrant population has since not only undermined their inclusion as stakeholders in the airport project and their demands for basic facilities; it also fueled narratives of portraying them as 'illegal encroachers' used by dominant actors to further delegitimize the communities, which contributes to their invisibility in major debates. Villagers maneuver their sense of belonging, their peasant identity, their desire for bikas and a less troublesome future for their children, their disillusionment with the government, their hope for fair compensation, their struggle for just treatment, their patriotism, and their compliance with government decisions. These contradictory notions, which emerged even within individual interviews, have one commonality: uncertainty, underlining the temporality of local livelihoods. In our analysis, we have combined justice theories and conceptualizations of power from political ecology with notions of a sense of justice and critical knowledge production. We have explored competing justice claims around the airport project, which is predominantly framed as purely a conflict over natural resources. 
We show that at the community level, claims to social justice and socio-economic security outweigh claims to nature; in part because the 25-year-long project limbo has led to a diminished sense of place. These social justice concerns contrast with environmental justice claims put forward by urban airport opponents. Overall, we understand the SIA project conflict to be shaped by powerful development and conservation narratives that do not provide an adequate platform for community voices. Our research suggests that further investigation of the impact of large infrastructure projects on local livelihoods and potential tensions between competing justice claims of stakeholders is needed in the context of Nepal and other regions of South Asia. This must include more nuanced investigations of micro-level dynamics along caste, class, and gender lines. Ultimately, our research highlights the need to explore larger questions of reconciling community interests with sustainability and sustainable development concerns; with deliberative democracy approaches offering potential entry points.
2022-01-30T16:09:17.100Z
2022-01-28T00:00:00.000
{ "year": 2022, "sha1": "53f78e7d17d4b2a02a58873f9117a29cefbb8e29", "oa_license": "CCBY", "oa_url": "https://journals.librarypublishing.arizona.edu/jpe/article/2304/galley/4805/download/", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "45fe34bcb5f6515f216e5dea08801e59a08a67ba", "s2fieldsofstudy": [ "Political Science" ], "extfieldsofstudy": [] }
55399690
pes2o/s2orc
v3-fos-license
Policy Decisions for a Price Dependent Demand Rate Inventory Model with Progressive Payments Scheme Problem statement: In this proposed research, we developed an inventory model to formulate optimal ordering policies for a supplier who offers progressive permissible delay periods to the retailer to settle his/her account. We assumed the annual demand rate to be a decreasing function of price, with a constant rate of deterioration and time-varying holding cost. Shortages in inventory are allowed and are completely backlogged. Approach: The main objective of this study is to frame an inventory model for real life situations. In this study, we introduced a new idea of trade credits, namely, the supplier charges the retailer progressive interest rates if the retailer prolongs its unpaid balance. By offering progressive interest rates to the retailers, a supplier can secure competitive market advantage over the competitors and possibly improve market share profit. This study has two main purposes: first, to establish the mathematical model of an inventory system under the above conditions, and second, to demonstrate that the optimal solution not only exists but is also feasible. We developed theoretical results to obtain the optimal replenishment interval by examining the explicit condition. An algorithm is given to find the flow of the optimal ordering policy. Results: The results are illustrated with the help of a numerical example using Mathematica software, and the optimal solution of the problem is Z (p, T 1) = 76.8586 at (p, T 1) = (0.952656, 0.128844). Conclusion: We proposed an algorithm to find the optimal ordering policy. A numerical study has been performed to observe the sensitivity of the effect of demand parameter changes. INTRODUCTION In the traditional Economic Order Quantity (EOQ) model, it is assumed that the retailer pays for the goods as soon as they are received by the system. However, in practice, the supplier offers a retailer a delay of a fixed time period for settling the amount owed to him. Usually, there is no interest charge if the outstanding amount is paid within the credit period. However, if the payment is not paid in full by the end of the credit period, then interest is charged on the outstanding amount. Goyal (1985) developed an EOQ model under conditions of permissible delay in payments; this model was later extended by allowing shortages. Mandal and Phaujdar (1988) developed an inventory model by including interest earned from the sales revenue on the stock remaining beyond the settlement period. Aggarwal and Jaggi (1995) extended Goyal's model for deteriorating items because the loss due to deterioration cannot be ignored. Jamal et al. (1997) generalized the model to allow for shortage and deterioration. Liao et al. (2000); Chang and Dye (2001); Teng (2002); Teng et al. (2005) and Hwang and Shinn (1997) developed models with a permissible delay period. Chang et al. (2010) developed optimal replenishment policies for non-instantaneous deteriorating items with stock-dependent demand. In the progressive trade credit scheme, if the retailer settles the outstanding amount by the first credit period, the supplier does not charge any interest. The supplier charges interest at rate Ic 1 on the unpaid balance if the retailer pays after the first credit period but before the second credit period offered by the supplier to the retailer. If the retailer settles the amount after the second credit period, then the supplier charges the retailer interest at rate Ic 2 on the unpaid balance (Ic 1 < Ic 2).
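To make the progressive credit scheme just described concrete, the following is a minimal sketch (in Python, with purely hypothetical numbers) of how the interest charged depends on when the retailer settles the balance. It deliberately ignores the interest earned on sales revenue, deterioration and backlogging that the full model accounts for; the function name and parameter values are illustrative and not taken from the paper.

```python
# Minimal sketch of the two-level progressive interest scheme described above.
# All numbers are hypothetical; the paper's full model also includes interest
# earned on sales revenue, deterioration and backlogged shortages, none of
# which are modelled here.

def progressive_interest(unpaid, pay_time, M, N, Ic1, Ic2):
    """Interest charged on an unpaid balance settled at `pay_time`.

    No interest up to the first credit period M; rate Ic1 applies between
    M and N; the higher rate Ic2 (> Ic1) applies beyond N.
    """
    if pay_time <= M:
        return 0.0
    if pay_time <= N:
        return unpaid * Ic1 * (pay_time - M)
    return unpaid * Ic1 * (N - M) + unpaid * Ic2 * (pay_time - N)

# Example: balance of 10,000 settled 0.4 years after purchase,
# with M = 0.1 yr, N = 0.25 yr, Ic1 = 0.12/yr, Ic2 = 0.18/yr.
print(progressive_interest(10_000, 0.4, 0.1, 0.25, 0.12, 0.18))
# approx. 10000*0.12*0.15 + 10000*0.18*0.15 = 180 + 270 = 450
```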
By assuming progressive trade credits to the retailer supplier can secure competitive market advantage and improve market share. Goyal et al. (2007) developed an inventory model with constant demand rate and deterioration rate under progressive payment scheme. Soni and Shah (2008) developed a model for stoke-dependent demand rate under progressive payment scheme. Singh et al. (2008) extended Soni and Shah (2008) model by allowing shortages and variable holding cost. This fact attracted a number of researchers to drive inventory modals on price dependent demand rate patterns. Presented an inventory model for items havinf the demand rate is constant and variable deterioration rate under the trade credits. Some of the related works in this area are by Haley and Higgins (1973); Wee (1995); Chung and Tsai (2001); Teng (2002) and Teng et al. (2005). In this study, we address the issues relating to progressive credit period relating to the retailer to settle his account. We developed a mathematical model when the demand rate, as a decreasing function of price and shortage which are fully backlogged with time varying holding cost. We assume that the supplier offers two progressive credit periods to the retailer to settle the account. The net profit is maximized by optimization technique. An algorithm is presented to derive the retailer's optimal solution. Fundamental assumptions and notations: The following assumptions are used to develop the model: • With boundary conditions, Q (0) = Q, Q (T 1 ) = 0, consequently, the solution of the above Eq. 3-5 are: And the order quantity is The cost components per unit time are as follows Eq. 6: Inventory holding cost Eq. 7: The deterioration cost in the time interval [0, T 1 ] is Eq. 8: p and T 1 are continuous variables. Hence the optimal values of p and T 1 can be obtained by setting Eq. 14, 15: And: To maximize the net profit, provided Eq. 16: Where: The optimal values of p = p 2.1 and T1 = T 2.1 are solutions of Eq. 21 and 22: And: For maximizing the total net profit, provided Eq. 23-26: And: Z (p,T ) GR OC HC DC SC Ic IE The optimal values of p=p 2.2 and T 1 =T 2.2 are solutions of Eq. 30 and 31: Case 3: T 1 ≥ N: Based on the total purchased cost, CQ, total money pD (p) M+IE 2 in account at M and total money pD (p) N+IE 2 at N, there are three sub cases may arise: Sub Case 3.1: Let pD (p) M+IE 2 ≥CQ This sub case is same as sub case 2.1; here sub case 3.1 designate decision variables and objective function. with interest rate Ic 2 during (N, T 1 ). Therefore, total interest charged on retailer; IC 33 per unit time is Eq. 36: Interest earned per unit time is: The net profit is: If pD (p) M+IE 2 ≥CQ is true then compute T 1 = T 2.1 and p = p 2.1 from sub case 2.1or T 1 = T 3.1 and p = p 3.1 from sub case 3.1, repeat step 2 and stop. If pD (p) M+IE 2 ≥CQ is not true but pD ( is not true, then compute T 1 = T 3.3 and p = p 3.3 from sub case 3.3, repeat step 2 and stop. Step 4: M<T 1 <N is not true then computes T 1 = T 3.3 and p = p 3.3 from sub case 3.3, repeat step 2 and stop. Results: The data obtained clearly shows that individual optimal solutions are very different from each other. However, there exists a solution which ultimately provides the Maximize the total profit operating of inventory system. In the above tables, it is observed that as the value of T 1 and p are increased and then the total cost is increased. Thus, the optimal solution of the problem is Z (p, T 1 ) = 76.8586 at(p, T 1 ) = (0.952656, 0.128844). 
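The case-specific profit expressions Z(p, T 1) themselves are not reproduced in the extracted text above, so the sketch below only illustrates the shape of the numerical step the algorithm calls for: select the applicable case and maximize Z over the price p and the replenishment time T 1. Z_placeholder is a stand-in function and would have to be replaced by the appropriate net profit expression before the result means anything; a coarse grid search like this would typically be refined with a solver (the paper uses Mathematica for this step).

```python
# Sketch of the numerical search the algorithm describes: maximize the net
# profit Z over price p and replenishment time T1 within the relevant case.
# Z_placeholder is NOT the paper's profit function (its expressions are not
# reproduced here); substitute the case-specific Z(p, T1) before use.
import numpy as np

def Z_placeholder(p, T1):
    # Stand-in objective, concave in (p, T1), purely for illustration.
    return -(p - 1.0) ** 2 - (T1 - 0.13) ** 2

def maximize_Z(Z, p_grid, T1_grid):
    # Exhaustive evaluation over the grid; returns (Z*, p*, T1*).
    return max(((Z(p, t), p, t) for p in p_grid for t in T1_grid),
               key=lambda x: x[0])

p_grid = np.linspace(0.5, 1.5, 201)
T1_grid = np.linspace(0.01, 0.5, 201)
print(maximize_Z(Z_placeholder, p_grid, T1_grid))
```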
CONCLUSION In this study, we introduced a new idea of trade credits, namely, the supplier charges the retailer progressive interest rates if the retailer prolongs its unpaid balance. By offering progressive interest rates to the retailers, a supplier, can secure competitive market advantage over the competitors and possibly improve market share profit. Shortages are allowed and completely backlogged in the present model. In many practical situations, stock out is unavoidable due to various uncertainties. There are many situations in which the profit of the stored item is higher than its back order cost. Consideration of shortages is economically desirable in these cases. The traditional parameters of holding cost is assumed here to be time varying. As the changes in the time value of money and in the price, index, holding cost cannot remain constant over time. It is assumed that the holding cost is linearly increasing function of time. We developed theoretical results to obtain the optimal replenishment interval by examine the explicit condition. We proposed an algorithm to find the optimal ordering policy. A numerical study has been performed to observe the sensitivity of the effect of demand parameter changes. Further, the model can be enriched by incorporating other realistic parameters such as Weibull distribution deterioration rate, inflation rate, partial backlogging and in progressive interest charges.
2019-04-16T13:32:25.168Z
2012-02-17T00:00:00.000
{ "year": 2012, "sha1": "5239a396db22ecd738cbf09d42a140084e6fb6d2", "oa_license": "CCBY", "oa_url": "http://thescipub.com/pdf/10.3844/jmssp.2012.157.164", "oa_status": "HYBRID", "pdf_src": "MergedPDFExtraction", "pdf_hash": "b3ebb5eef8f58f7ff69e2f64e6a88b3139d23da1", "s2fieldsofstudy": [ "Business" ], "extfieldsofstudy": [ "Mathematics" ] }
227121587
pes2o/s2orc
v3-fos-license
Interdisciplinary Research in Artificial Intelligence: Challenges and Opportunities The use of artificial intelligence (AI) in a variety of research fields is speeding up multiple digital revolutions, from shifting paradigms in healthcare, precision medicine and wearable sensing, to public services and education offered to the masses around the world, to future cities made optimally efficient by autonomous driving. When a revolution happens, the consequences are not obvious straight away, and to date, there is no uniformly adapted framework to guide AI research to ensure a sustainable societal transition. To answer this need, here we analyze three key challenges to interdisciplinary AI research, and deliver three broad conclusions: 1) future development of AI should not only impact other scientific domains but should also take inspiration and benefit from other fields of science, 2) AI research must be accompanied by decision explainability, dataset bias transparency as well as development of evaluation methodologies and creation of regulatory agencies to ensure responsibility, and 3) AI education should receive more attention, efforts and innovation from the educational and scientific communities. Our analysis is of interest not only to AI practitioners but also to other researchers and the general public as it offers ways to guide the emerging collaborations and interactions toward the most fruitful outcomes. INTRODUCTION Artificial Intelligence (AI), which typically refers to the artificial creation of human-like intelligence that can learn, perceive and process information, is rapidly becoming a powerful tool for solving image recognition, document classification (Vapkin, 1995;LeCun et al., 2015) as well as for the advancement of interdisciplinary problems. It is often considered to be a powerful computational tool that can be applied to many complex problems which have not been successfully addressed so far. However, this is not a one way street, other fields such as neuroscience (Hassabis et al., 2017;Ullman, 2019), developmental psychology Charisi et al., 2020), developmental robotics (Oudeyer, 2011;Moulin-Frier and Oudeyer, 2013;Doncieux et al., 2020) and evolutionary biology (Gobeyn et al., 2019) can inspire AI research itself, for example by suggesting novel ways to structure data (Timmis and Knight, 2002), or helping discover new algorithms, such as neural networks, which are inspired from the brain (Rosenblatt, 1958). Of course, combining AI with other fields is not without challenges. Like any time when fields synergize, barriers in communication arise, due to differences in terminologies, methods, cultures, and interests. How to bridge such gaps remains an open question, but having a solid education in both machine learning and the field of interest is clearly imperative. An example of cross-pollination interdisciplinary program showing the success of these approaches is not utopic is Frontier Development Lab, a cooperative agreement between NASA, the Seti Institute, and ESA set up to work on AI research for space science, exploration and all humankind (Frontier Development Lab). Besides multidisciplinarity, advocating for ethics and diversity (Agarwal et al., 2020) is a must to account for biased models (Hendricks et al., 2018;Denton et al., 2019) and avoid stereotypes being perpetuated by AI systems (Gebru, 2019). 
For instance, interdisciplinary approaches, e.g., including art and science, as well as ensuring minorities are well represented among both the users and the evaluators of the latest eXplainable AI techniques (Arrieta et al., 2020), can make AI more accessible and inclusive to otherwise unreachable communities. While the AI revolution in research, healthcare and industry is presently happening at full speed, its long term impact on society will not reveal itself straight away. In research and healthcare, this might lead to blindly applying AI methods to problems for which, to date, the technology is not ready [e.g., IBM's Watson for oncology (Strickland, 2019)], and to ethically questionable applications [e.g., predicting sexual orientations from people's faces (Wang and Kosinski, 2017), using facial recognition in law enforcement or for commercial use (Clearview)]. AI can be used as a tool to improve data privacy (e.g., for deidentification, www.d-id.com) or for threat identification, but it is more often seen as itself being a threat to IT systems (Berghoff et al., 2020), e.g., in the cases of biometric security and privacy (Jiang et al., 2017). AI can be a target of attacks with vulnerabilities qualitatively new to AI systems [e.g. adversarial attacks and poisoning attacks (Qiu et al. , 2019)] as well as a powerful new tool used by the attackers (Dixon and Eagan, 2019). In industry, AI chatbots ended up being racist, reflecting the training data that was presented to the algorithm, recruitment software ended up being gender-biased; and risk assessment tools developed by a US contractor sent innocent people to jail (Dressel and Farid, 2018). A more careful consideration of the impact of AI is clearly needed by following global and local ethics guidelines for trustworthy (Smuha, 2019) and responsible AI (Arrieta et al., 2020). While a large number of industries have seen a potential in this technology and invested colossal amounts of money to incorporate AI solutions in their businesses, predictions made by AI algorithms can be frightening and without a proper educational framework, lead to a societal distrust. In this opinion paper we put forward three research topics that we believe AI research should accentuate on, (1) How can an interdisciplinary approach towards AI benefit from and contribute to the AI revolution? While AI is already used in various scientific fields, it should go beyond solely predicting outcomes towards conducting exploratory analysis and finding new patterns in complex systems. Additionally, in the future development of AI, the reverse direction should also be considered, namely investigating ways in which AI can take inspiration and can benefit from other fields of science. (2) How could regulatory agencies help correct existing data biases and discriminations induced by AI? To ensure this, AI research must be accompanied by decision explainability and dataset and algorithm bias analysis as well as creation of regulatory agencies and development of evaluation methodologies and tools. In all cases, AI research should guarantee privacy as well as economical and ecological sustainability of the data and algorithms based on it. (3) How can we manage the impact of this AI revolution once AI tools are deployed in the real world, particularly how to ensure trust of the scientific peers and the general public? This includes establishing public trust in AI through education, explainable solutions, and regulation. 
By considering these three aspects, interdisciplinary research will go beyond the considerations of individual disciplines to take broader and more thoughtful views of the promised digital revolutions. Our recommendations are a result of in-person discussions within a diverse group of researchers, educators, and students, during a 3-day thematic workshop, which has been collectively written and edited during and after the meeting. While not comprehensive, we believe they capture a broad range of opinions from multiple stakeholders and synthesize a feasible way forward. PART I: ARTIFICIAL INTELLIGENCE AND INTERDISCIPLINARY RESEARCH The relationship between AI and interdisciplinary research must be considered as a two-way street. While one direction may be more well known (applying AI to other fields), here we consider both directions: 1) from AI to other fields and 2) from other fields to AI. Then we argue that applying knowledge from other fields to AI development is equally important in order to move forward and to achieve the full potential of the AI revolution. From Artificial Intelligence to Other Fields Using AI to make predictions or decisions in e.g. quantitative science, healthcare, biology, economy and finance has been extensively, and possibly excessively done over the past several years. While the application of AI to these domains remains an active area of research, we believe that the biggest challenge for the future of AI lies ahead. Rather than just predicting or making decisions, AI solutions should be developed to conduct exploratory analyses, i.e., to find new, interesting patterns in complex systems or facilitate scientific discovery (Raghu and Schmidt, 2020). Specific cases where this direction has already been explored include e.g., drug discovery (Vamathevan et al., 2019), the discovery of new material (Butler et al., 2018), symbolic math (Lample and Charton, 2019;Stanley et al., 2019) or the discovery of new physical laws (Both et al., 2019;Iten et al., 2020;Udrescu and Tegmark, 2020). Will AI succeed in assisting humans in the discovery of new scientific knowledge? If so, in which domain will it happen first? How do we speed up the development of new AI methods that could reach such goals? These are some questions that should inspire and drive the applications of AI in other fields. Another possible approach consists of using AI models as experimental "guinea pigs" for hypothesis testing. In the domain of neuroscience, one standard methodology consists of analyzing which AI model is best at predicting behavioral data (from animals or humans) in order to support or inform hypotheses on the structure and on the function of biological cognitive systems (Gauthier and Levy, 2019). In that case, the process of training the AI-agent is an experiment in itself since the intrinsic interest does not lie in the performance of the underlying algorithm per se but instead in its ability to explain cognitive functions. Can we create an AI algorithm that will replace all stages of scientific process, from coming up with questions, generating the data, to analysis and interpretation of results? Such automated discovery is considered as the ultimate goal by some experts, but so far remains out of reach (Bohannon, 2017). 
From Other Fields to Artificial Intelligence Whereas AI approaches are readily impacting many scientific fields, those approaches also continue to benefit from insights from fields such as neuroscience (Hassabis et al., 2017;Samek et al., 2019;Ullman, 2019;Parde et al., 2020), for example the similarities between machine and human-like facial recognition (Grossman et al., 2019) and the use of the face space concept in deep convolutional neural networks (O'Toole et al., 2018;Parde et al., 2020). Other fields impacting AI research include evolutionary biology (Gobeyn et al., 2019) and even quantum mechanics (Biamonte et al., 2017). One of the biggest successes of integrating insights from other fields in modern day AI, the perceptron, became the prelude to the modern neural networks of today (Rosenblatt, 1958). Perceptrons and neural networks can be considered analogous to a highly reduced model of cortical neural circuitry. Other examples are algorithms such as reinforcement learning which drew inspiration from principles of developmental psychology from the 50s (Skinner, 2019) and have been influencing the field of developmental robotics (Cangelosi and Schlesinger, 2015) since the 2010s. Further illustration of this cross-fertilization can be seen in bioinspired approaches, where principles from natural systems are used to design better AI, e.g., neuroevolution algorithms that evolve neural networks through evolutionary algorithms (Floreano et al., 2008). Finally, the rise of quantum computers and quantum-like algorithms could further expand the hardware and algorithmic toolbox for AI (Biamonte et al., 2017). Despite these important advances in the last decade, AI systems are still far from being comparable to human intelligence (and to some extent to animal intelligence), and several questions remain open. For instance, how can an AI system learn and generalize while being exposed to only a small amount of data? How to bridge the gap between low-level neural mechanisms and higher-level symbolic reasoning? While AI algorithms are still mostly focused on the modeling of purely cognitive processes (e.g., learning, abstraction, planning. . .), a complementary approach could consider intelligence as an emergent property of cognitive systems through their coupling with environmental, morphological, sensorimotor, developmental, social, cultural and evolutionary processes. In this case, the highly complex dynamic of the ecological environment is driving the cognitive agents to continuously improve in an ever-changing world, in order to survive and to reproduce (Pfeifer and Bongard, 2006;Kaplan and Oudeyer, 2009). This approach draws inspiration from multiple scientific fields such as evolutionary biology, developmental science, anthropology or behavioral ecology. Recent advances in reinforcement learning have made a few steps in this direction. Agents capable of autonomously splitting a complex task into simpler ones (auto-curriculum) can evolve more complex behaviors through coadaptation in mixed cooperativecompetitive environments (Lowe et al., 2017). In parallel, progress has also been made in curiosity-driven multi-goal reinforcement learning algorithms, enabling agents to autonomously discover and learn multiple tasks of increasing complexity (Doncieux et al., 2018). Finally, recent work has proposed to jointly generate increasingly complex and diverse learning environments and their solutions as a way to achieve open-ended learning (Doncieux et al., 2018). 
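As a concrete illustration of the bio-inspired lineage mentioned above, the following is a minimal, self-contained sketch of Rosenblatt's perceptron learning rule on synthetic, linearly separable data. It illustrates the classic algorithm only and is not code from any of the cited works.

```python
# Minimal sketch of Rosenblatt's perceptron learning rule on a toy,
# linearly separable problem (synthetic data; illustration only).
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)      # true separating rule

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(20):                              # a few passes over the data
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:               # misclassified -> update
            w += lr * yi * xi
            b += lr * yi

acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.2f}")
```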
One related research direction are studies of systems that sequentially and continually learn (Lesort et al., 2020) in a lifelong setting, i.e., continual learning without experiencing the well known phenomenon of catastrophic forgetting (Traoré et al., 2019). When combined, this research puts forward the following questions: How can we leverage recent advances that situate AI agents within realistic ecological systems? How does the dynamic of such systems drive the acquisition of increasingly complex skills? PART II: ARTIFICIAL INTELLIGENCE AND SOCIETY The rise of AI in interdisciplinary science brings along significant challenges. From biased hiring algorithms, to deep fakes, the field has struggled to accommodate a rapid growth and an increasing complexity of algorithms (Chesney and Citron, 2019). Moreover, the lack of explainability (Arrieta et al., 2020) has slowed down its impact in areas such as quantitative research and prevents the community to further develop reproducible and deterministic protocols. Here we propose methodologies and rules to mitigate the inherent risks that arise from applying complex and nondeterministic AI methods. In particular we discuss how general scientific methodologies can be adapted for AI research and how auditability, interpretability and environmental neutrality of results can be ensured. Adapting the Scientific Method to Artificial Intelligence-Driven Research To ensure that AI solutions perform as we intended, it is important to clearly formulate the problem and to state the underlying hypothesis of the model. By matching formal problem expression/definitions to laws (intentions), functional and technical specifications, we ensure that the project has a well established scope and a path towards achieving this goal. These specifications have been set forward by the GDPR (General Data Protection Regulation) that published a self assessment template guiding scientists and practitioners to prepare their AI projects for society (Bieker et al., 2016). In short, products and services resulting from AI decision making must clearly define their applicability and limitations. Note that this differs from problem definition since it involves explicitly stating how the algorithm will address part or all of the original problem. The developers have to explicitly detail how they handle extreme cases and show that security of the user is ensured. It should be mandatory for the owner and user of the data to clearly and transparently state the known biases expressed by the dataset (similar to the way the secondary effects of medicines are clearly stated on the medication guide). While some of these are already addressed by the GDPR in the EU, similar regulation and standards are needed globally. An alternative, complementary approach would be to rely on the classical scientific method practices developed over the centuries. Relying on observation, hypothesis formulation, experimentation (Rawal et al., 2020) and evaluation allows us to understand causal relationships and promotes rigorous practices. AI would certainly benefit from explicitly integrating these practices into its research ecosystem (Forde and Paganini, 2019). Biases and Ethical Standards in Artificial Intelligence To control the functioning of AI algorithms and their potential inherent biases, clear, transparent and interpretable methodologies and best practices are required. 
Trustworthiness of AI-driven projects can be ensured by, for example, using open protocols of the algorithms functionality, introducing traceability (logs, model versioning, data used and transformations done on data) or the pre-definition of insurance datasets. In transversal domains such as software development, tools have been devised to prevent mistakes and model deterioration over time (such as automated unit tests). Establishing similar standards for AI would force data scientists to design ways to detect and eliminate biases, ultimately making sure that the algorithm is behaving as intended. If ethical standards can be encoded in the algorithm, then regulation can be imposed on the optimized objectives of AI models (Jobin et al., 2019). Auditability and Interpretability The goal of AI should be to improve human condition and not further aggravate either existing inequalities (Gebru, 2019) or environmental issues in our societies. The AI service and product developers are likely to be at the center of this challenge -they are the ones that can directly prevent errors and biases in input data or future applications. They present a priori knowledge that can lead to or prevent misuse (conscious or unconscious). It is tempting to extensively employ libraries and "ready-to-use" code samples, as these make the production process faster and easier. However, especially when used by non-experts, the key features of AI models, e.g., data recasting, could easily be implemented incorrectly. The secondary users of AI tools must be able to measure the biases of their input data and obtained results, which can be done only if they are both aware of potential problems and if they have the necessary tools readily available. As with any software, failures and mistakes will inevitably arise and a system has to be in place to assess how AI tools and services behave not only during development but also "in production." The combination of decision logs and model versioning can allow us to verify and ensure the product outcomes are the ones intended. Here the question of independent authorities comes in order to regularly audit the AI products around us. Companies and AI product developers must be capable of "opening the black box" and clearly exposing the monitoring they perform over an algorithm. Opening the black box has already been set as an important goal in AI research (Castelvecchi, 2016), even if not all experts agree that this is necessary (Holm, 2019). It includes not only making the currently used model transparent, but more importantly being able to explain how it was designed, and examining its past states and decisions. For example, developers must track data drifting and deploy policies preventing an algorithm to produce unintended outcomes. So far, this has been left to good practices of individual developers, but we can envision construction of an authority in charge of auditing AI products regularly. One proposed approach has been to impose Adversarial Fairness during training or on the output (Adel et al., 2019). Independently of a particular way to ensure auditability and interpretability, the process should be co-designed not only by AI practitioners but all stakeholders, including the general public, following open science principles (Greshake Tzovaras et al., 2019). Auditability and interoperability considerations complement and extend the more obvious and direct requirements of robustness, security and data privacy in AI. 
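As one concrete example of the kind of bias measurement secondary users could run on a model's outputs, the sketch below computes the demographic parity gap, i.e., the difference in positive-prediction rates across groups, on synthetic data. This is only one of many possible fairness checks and is not a metric prescribed by the text.

```python
# One simple bias check a practitioner could run on model outputs:
# the demographic parity gap (difference in positive-prediction rates
# between groups). Synthetic data; illustration only.
import numpy as np

def demographic_parity_gap(y_pred, group):
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=1000)                      # two groups
y_pred = (rng.random(1000) < np.where(group == 0, 0.55, 0.40)).astype(int)
print(f"demographic parity gap: {demographic_parity_gap(y_pred, group):.3f}")
```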
Finally, as for any technology, the usefulness of AI will have to be assessed against its environmental impact. In particular, life cycle assessment of AI solutions should be systematic. Here also, auditing by independent authorities could be a way to enforce environmental neutrality (Schwartz et al., 2019). Education Through and About Artificial Intelligence Technologies Besides impacting research and industry directly, AI is transforming the job market at a rapid pace. It is expected that approximately 80% of the population will be affected by these technological advancements in the near future (HolonIQ). Highly complex jobs (e.g., the medical, juridical or educational domains) will be redefined, some simpler, repetitive tasks will be replaced or significantly assisted by AI and new jobs will appear in the coming decades. For instance, budget readjustment and reeducation of people who lose their jobs, towards a clean energy shift, with only about 30% coming from governments (which amounts to less than 10% of the funds committed to coronavirus economic relief), could positively shift climate change (Florini, 2011). However, workers of these different fields received little to no formal education on AI, and more initiatives on sustainable AI (such as EarthDNA ambassadors or TeachSDG Ambassadors) are needed. Therefore, the AI transformation should come along with a transformation in education where educational and training programs will have to be adapted to these different existing professions. The transformation in education can be implemented on four different levels: academic institutions, companies and governments. Academic institutions should not only prepare AI experts by providing in-depth training to move forward AI research but also focus on interdisciplinarity and attract diversity in AI. Three main axes for AI education should be: 1) high level AI experts who can train future generations 2) AI practitioners who can raise public awareness in their research and (3), broader public that can be informed directly, leading to decrease in a priori distrust. The end users and beneficiaries of AI services and products, as the most numerous part of the population, must play a central role in their development. It is they who should have the final say on what global use of AI technologies should be pursued. However, to do so, they must have a chance to learn the fundamental principles of AI. This is not fundamentally different from educating the general public about any scientific topic with a global societal impact, may it be medical (e.g., antibiotic resistance, vaccination) or environmental (e.g., climate change). Providing the information and training at scale is not a trivial task, due to at least two major issues: 1) the motivation of the general public and 2) the existence of appropriate educational tools. Various online resources are available targeting the general public, such as Elements of AI in Finland or Objectif'IA in France. Interestingly, in the case of AI, the problem itself could also be a part of a possible solution -we can envisage AI playing a central role in creating adaptive learning paths, individualbased learning programs addressing the needs and interests of each person affected by AI technology. Educational tools designed with AI can motivate each individual by providing relevant, personalized examples and do it at the necessary scale. 
Interactions between AI and education are yet another example of interdisciplinarity in AI (Oudeyer et al., 2016), which can directly benefit not only the two fields, education and AI, but society and productivity as a whole. CONCLUSION AI is currently ever-present in science and society, and if the trend continues, it will play a central role in the education and jobs of tomorrow. It inevitably interacts with other fields of science, and in this paper we examined ways in which those interactions can lead to synergistic outcomes. We focused our recommendations on the mutual benefits that can be harnessed from these interactions and emphasized the important role of interdisciplinarity in this process. AI systems have complex life cycles, including data acquisition, training, testing and deployment, ultimately demanding an interdisciplinary approach to audit and evaluate the quality and safety of these AI products or services. Furthermore, in Part II we focused on how AI practitioners can prevent biases through transparency, explainability and inclusiveness, and on how robustness, security and data privacy can and should be ensured. Finally, we emphasize the importance of education for and through AI to allow the whole of society to benefit from this AI transition. We offer recommendations from the broad community gathered around the workshop resulting in this paper, with the goal of contributing to, motivating and informing the conversation between AI practitioners, other scientists, and the general public. In this way, we hope this paper is another step towards harnessing the full potential of AI for good, in all its scientific and societal aspects. DATA AVAILABILITY STATEMENT The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding author/s.
2020-11-23T14:03:07.427Z
2020-11-23T00:00:00.000
{ "year": 2020, "sha1": "e13c320fc31a21c5ede04fffd1d7ba10ee284bc8", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fdata.2020.577974/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "e13c320fc31a21c5ede04fffd1d7ba10ee284bc8", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Computer Science", "Medicine" ] }
209500995
pes2o/s2orc
v3-fos-license
Deformed Infinite series metric in Cartan Spaces Igarashi introduce the concept of $(\alpha, \beta)$-metric in Cartan space $\ell^{n}$ analogously to one in Finsler space and obtained the basic important geometric properties and also investigate the special class of the space with $(\alpha, \beta)$-metric in $\ell^{n}$ in terms of $ 'invariants' $. In the present paper we determine the $ 'invariants' $ in two different cases of deformed infinite series metric which characterize the special classes of Cartan spaces $\ell^{n}$. Introduction In 2004 Lee and Park [14] introduced the concept of r-th series (α, β)metric where r is varies from 0, 1, 2, ..., ∞ and give very interesting example of special (α, β)-metric for the different values of r such as one-form metric, Randers metric, combination of Kropina and Randers metric, infinite series metric etc. In 1994, Igarashi [4,5] introduce the concept of (α, β)-metric in Cartan space ℓ n analogously to one in Finsler space and obtained the basic important geometric properties and also investigate the special class of the space with (α, β)-metric in ℓ n in terms of ′ invariants ′ . The classes which he obtained includes the spaces corresponding to Randers and Kropina space. Further he characterizes these spacial classes by means of ′ invariants ′ in case of Finsler theory. In the present paper we determine the ′ invariants ′ in two different cases of deformed infinite series metric in which firest metric is defined as the product of infinite series and Riemannian metric another one is the product of infinite series and one-form metric. Further we characterize the special classes of Cartan spaces ℓ n in case of these two metrics and also investigate the relation under which "invariants" are characterized as the special classes of ℓ n 2 Preliminaries E. Cartan [3] introduced the concept of a Cartan Space, where the measure of its hypersurface element (x, y) is given a priori by homogeneous function F (x, y) of degree one in y,i.e.,the "area" of a domain on hypersurface S n−1 : where y = (y i ) is the determinant of (n − 1, n − 1) minor matrix omitted ith row of (n, n − 1) matrix ( ∂x i ∂v α ), α = 1, 2, 3.....n − 1. In this space, we obtain the fundamental tensor by As the special case for the fundamental tensor a ij of Riemannian space, we can find the (n − 1) dimensional area of a domain on hypersurface such that hence it is clear that Riemannian space is a special case of Cartan space. On the other hand, Cartan space is considered the dual notion of Finsler Space. Further the relation between both spaces is studied by L. Berwald [1] in early days, afterwards, by H. Rund [12] and F.Brickel [2]. Recently R. Miron [10,11] established new Carton geometry which shows totally different feature in the form of particularization the Hamilton space which defined as: The fundamental tensor field of ℓ n and its reciprocal g ij (x, y) is given by The homogeneity of H(x, y) is expressed by where g ij (x, y) and its reciprocal g ij (x, y) are both symmetric and homogeneous of degree 0 in y i . On the other hand, the Finsler spaces with (α, β)-metric were considered by G. Randers [12], V. K. Kropina [6] and M. Matsumoto [7,8,9], especially the last paper shows the great success for investigation of these spaces. Cartan spaces with (α, β)-metric [4,5,10] can be defiend as It is clear thatH satisfy the conditions imposed to the function H(x, y) as a fundamental function for ℓ n . 
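Because several of the displayed formulas above were damaged during text extraction, the following LaTeX block restates the definitions the section relies on. It is an assumption based on the standard Cartan-space conventions used in the surrounding discussion, not a reconstruction of the authors' exact equations: the fundamental tensor obtained from the 2-homogeneous fundamental function H(x, y), the Euler identity expressing that homogeneity, and the (α, β)-metric ansatz.

\[
g^{ij}(x,y)=\tfrac{1}{2}\,\dot\partial^{i}\dot\partial^{j}H(x,y),\qquad \dot\partial^{i}=\frac{\partial}{\partial y_{i}},
\]
\[
H(x,\lambda y)=\lambda^{2}H(x,y)\ (\lambda>0)\quad\Longrightarrow\quad y_{i}\,\dot\partial^{i}H(x,y)=2H(x,y),
\]
\[
\alpha^{2}(x,y)=a^{ij}(x)\,y_{i}y_{j},\qquad \beta(x,y)=b^{i}(x)\,y_{i},\qquad H(x,y)=\tilde H\bigl(\alpha(x,y),\beta(x,y)\bigr),
\]
with \(\tilde H\) positively homogeneous of degree 2 in \((\alpha,\beta)\).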
Then It follows: The functionH(α(x, y), β(x, y)) is positively homogeneous of degree 2 in both α and β. By this reason, there maybe no confusion if we adopt the notation H(α, β) itself instead ofH(α, β). Also we write such that Proposition 3.2 The following identities hold : Differentiating α and β with respect to y i we havė and the vector field Y i satisfies the relation Further let Also, we have the relation similar to (3.7): Differentiating (3.6) and (3.9) by y i succeedingly, we havė And using the same manner for (3.5), we geṫ On account of (3.5) and y i = 1 2 (H α∂ i α + H β∂ i β), we have Lemma 3.1 The Liouville vector field y i is expressed in the form We need to derive the fundamental tensor from the fundamental function H(x, y) of the Cartan space ℓ n . Proof. Making use of (3.12) and (3.14),we have Taking into account Lemma (3.2), we have (3.1). Q.E.D. In order to check the fitness of this tensor g ij for the fundamental tensor of (α, β)-metric,we verify the homogeneity of g ij .Contracting g ij by y i and y j ,we have 2H = H. which shows that our conclusion is right. Let us rewrite this expression in the form and The reciprocal tensor g ij of g ij are given by where and A ij ,C i are given by Consequently, we have g ij (x, y)g jk (x, y) = δ i k ,and (3.2') rank g ij (x, y) = n because of (3.2") det g ij (x, y) = (1 + C 2 )det A ij = (1 + C 2 )det a ij (x) = 0 The following relations are useful afterwards : where we use the notations Therefore we can prove without difficulty: 12) where we use the notations We can get another result from the Lemma (3.4) such that Theorem 4.2 The Cartan tensor C ijk of a Cartan space ℓ n with (α, β)metric is given by where the notation Π ijk means the cyclic symmetrization of the quantity in the brackets with respect to indices i, j, k. We can deduce the other important geometric object fields for ℓ n with (α, β)-metric, for instance, N ij ,H i jk ,C jk i etc. without difficulty. Cartan spaces with infinite series of (α, β)metric In 2004 Lee and Park [14] introduced a r-th series (α, β)-metric where they assume α < β. If r = 1 then L = α + β is a Randers metric. If r = 2 then L = α + β + α 2 β is a combination of Randers metric and Kropina metric. If r = ∞ then above metric is expressed as and the metric (5.2) named as infinite series (α, β)-metric. This metric is very remarkable because it is the difference of Randers and Matsumoto metric. In this section we consider two cases of Cartan Finsler spaces with special (α, β)-metrics of deformed infinite series metric which are defined as In the first case, partial derivatives of the fundamental function H(α, β) lead us the followings: Using equation (3.15) and (3.17) we have following invariants Proposition 5.1 The invariants ρ never vanishes in a Cartan space ℓ n equipped with deformed infinite series metric function H(α, β) = αβ 2 β−α -metric on T * M . Converesely, we have H α = 0 on T * M. Again using equation (3.20) and (3.23) we have following invariants Proposition 5.2 The invariants of Cartan tensor C ijk in Cartan space ℓ n which equipped with deformed infinite series metric function H(α, β) = αβ 2 β−α is given by (5.2). The invariants of equations (5.1) and (5.2) satisfies the following relations From equation (5.1) and (5.2), The fundamental tensor g ij (x, y) is of the form The fundamental tensor g ij (x, y) of the space ℓ n endoewd with the metric function H(α, β) = αβ 2 β−α is given by the equation (5.4). 
Conversely, we obtain the corresponding expressions. From equations (5.5) and (5.6), the fundamental tensor g ij (x, y) is of the form (5.8); that is, the fundamental tensor g ij (x, y) of the space ℓ n endowed with the metric function H(α, β) = β³/(β − α) is given by equation (5.8). Conclusions In this work we considered the infinite series (α, β)-metric combined with the Riemannian metric and the 1-form metric, and determined the relations among the "invariants" which characterize the special classes of Cartan spaces ℓ n. In Finsler geometry there are many other (α, β)-metrics; in future work the corresponding invariants can be determined for them as well.
2019-12-20T08:31:01.000Z
2019-12-20T00:00:00.000
{ "year": 2019, "sha1": "4a1bcb58d5d43d740e651fcf8fec09020143fcb1", "oa_license": null, "oa_url": null, "oa_status": null, "pdf_src": "Arxiv", "pdf_hash": "4a1bcb58d5d43d740e651fcf8fec09020143fcb1", "s2fieldsofstudy": [ "Mathematics" ], "extfieldsofstudy": [ "Mathematics" ] }
214351970
pes2o/s2orc
v3-fos-license
FACTORS CONTRIBUTING TO TRANSFORMATION PROCESS IN KENYA’S MANUFACTURING SECTOR Kenya’s manufacturing sector provides a clear footing in industrialization advancement. However, the sector is faced with challenges in its efforts to build a competitive manufacturing base as well as cultivating business and industrial environs. Therefore, the paper intends to ascertain factors contributing to manufacturing sector transformation process and its position in Kenya’s economic growth. We analyze annual data for the period 1975-2017. The findings of this study using time series regression analysis confirms that new investments by manufacturing sector to credit issuance by financial institutions and commercial banks ratio, labor involvement to output to manufacturing output ratio, value addition output to manufacturing output ratio positively contributes to transformation of Kenya’s manufacturing sector and Economic Growth. The study also reveals that lack of political good will during election period does affect manufacturing sector operations. The study recommends manufacturing sector to embrace innovation concept and technological advancements for betterment of operational efficiency and effectiveness. Introduction Manufacturing Sector is very essential in industrial revolution and growth for any given economy. Transformation has not been easy for manufacturing firms in developing countries (Marival Segarra-Oña et al., 2016; Navas Antonio, 2014; Beckmann B. et al., 2016) due to lack of capability to innovate and adopt technological advancements. The world economy at large has been influenced by competition at both national and international level whereby each firm tends to fight for a better position (Jacob Chege et al., 2016;Reischauer G., 2018). The growth and structure of manufacturing sector has not provided even level playing field for its investors due to unrealistic policies (Odhiambo Walter, 1991; Rioba Martin E. 2013; Yash Mehta, A. John Rajan, 2017). However, facts provided by Kenya National Bureau of Statistics (KNBS) have showed how the manufacturing sectorial activities contribute to Kenya's Gross Domestic Product. In 2017, there was a tremendous deceleration on industrial contribution to GDP; hence service-oriented sectors seemed to contribute more to Kenya's overall economic growth. For the manufacturing sector to be revived and become main contributor of economy, strategies towards long-term sustainability are vital. According to Kenya National Bureau of Statistics report Kenya has witnessed varied performance in the manufacturing sector which is largely associated to lack of total commitment and proper resources allocation towards industrial development (Rioba Martin E., 2014; Aaron Atteridge, Nina Weitz, 2017). Navas Antonio argued that market competition for manufactured products do dictate transformations in manufacturing firms that depend on innovation (Schumpeter, 1934;Helena Forsman, 2011;Heredia Pérez et al., 2018). Mendi P., Mudida R. highlighted how past informalities affect innovation in Kenya's firms which still proves a major challenge for firms transforming from informal to formal classification. Mendi P., Mudida R. research failed to put more weight on fund availability as an enabler of innovation implementation in firms while (Rioba Martin E., 2013; Jacob Chege, 2016) found out how unfriendly Kenya's reform policies towards manufacturing sector transformation were. 
It is therefore evident that past studies have given emphasis on innovation concept and policy implementation in manufacturing firms excluding attention towards manufacturing sector new investments, manufacturing productivity, value addition, mode of financing and labour productivity regardless of political situation. With inclusion of innovation and technological advancements, the study aims at ascertaining the factors leading to major transformations in Kenya's manufacturing sector since independence before and after multiparty democratic system of governance. Manufacturing Sector Developments Agricultural activities are considered to be main contributors of GDP for Least Developing Countries (LDCs). In Kenya, agricultural sector is the number one contributor of country's GDP (KNBS, 2018) but is currently faced with challenges, such as, global warming leading to adverse climate change, natural calamities and biodiversity loss. To exit low-income status, manufacturing sector development is considered to be a likely alternative Through innovated systems, competition at firm, sector, national, and international levels is boosted. It is through innovation that manufacturing firms are able to do away with traditional methods or processes of production by embracing science, technology and creativity (Helena Forsman, 2011; Heredia Pérez et al., 2018). Proper resource allocation is also of significance in manufacturing development through optimal input allocation (Zhang Xun et al., 2017; W-C. Lee, S-S. Wang, 2017) hence, increased output, reduced waste reduction and increased efficiency in production (Konstantinos Salonitis, Christos Tsinopoulos, 2016). Infrastructural development especially capital investments, technological advancements, state-of-the-art equipment, skilled labour and R&D need to be given priority through allocation of necessary funding towards their successful implementation (Ueasangkomsate P., Jankkot A., 2017; Yash Mehta, A. John Rajan, 2017). Political goodwill is another aspect that cannot be ignored. Corruption in LDCs is one major challenge towards realization of industrialization through manufacturing development (Mijiyawa A. G., 2017). LDCs' governments and democratic processes should provide favourable manufacturing environment by supporting right policies and discouraging slow and tedious bureaucracies (Navas Antonio, 2014). Last but not least, financial structures are crucial in manufacturing development. Financial institutions and commercial banks play key role in ensuring credit is allocated to most industrious manufacturing firms. Also, it is ideal to financially support Small and Medium Enterprises (SMEs) in the manufacturing sectors (Hoxha Indrit, 2013) that are characterized by weak R&D and incapacity to innovate (Helena Forsman, 2011). Findings have proved how degree of competition in banking sector does have an impact on external financing towards manufacturing sector whereby, industrialized countries are largely dominated by monopolistic banking competition. (Munacinga Simatele, 2015). Therefore, agenda by Kenya government to revitalize of the manufacturing sector is ambitious priority towards industrial development, job creation for youth as well as boosting of local and overseas market accessibility for its products (KNBS, 2018). 
It is evident from the literature review above that with proper resource allocation, labour productivity, implementation of innovation towards value addition, availability of external financing and presence of political good do play part in various transformations in manufacturing sector. Econometric Modeling The research model is based on Ordinary Least Square Principle (OLSP) in efforts to determine effects of manufacturing transformation on economic growth using Eviews10. Secondary data for dependent, independent and control variables was from KNBS for the years between 1975 and 2017. The model specification is based on the function below; EG=F(IF, LQ, VQ, P) (1) The estimation time series linear equation; econometric model is then written as follows; EG=β0+β1IF+β2LQ+β3VQ+β4P+ε (2) Where, Measure for economic growth (EG) is the dependent variable represented by real GDP growth rate while independent variables are; new investments by manufacturing sector to credit issuance by financial institutions and commercial banks ratio (IF), labour involvement to output to manufacturing output ratio (LQ), value addition output to manufacturing output ratio (VQ) and political good will (P) is a dummy variable used to capture election and campaign period during the study period. ₃ is the error term. β₀, β₁, β₂, β₃ and β₄ are OLS estimators. In dealing with data deviations due to changes in respect to time, logarithms are introduced to equation 2. LogEG=β0+β1logIF+β2logLQ+β3logVQ+β4P+ε The OLS estimators are expected to give desired properties; Best Linear Unbiased Estimators (BLUE), consistent, normal distribution of residuals among other time series properties for the variables. Statistical Results The empirical analysis commenced by conducting unit root tests through Augmented Dickey-Fuller (ADF) test that confirmed that all variables except EG and P were stationary after first differencing. By comparing Test Statistic Value (TSV) and Test Critical Value (TCV) for each variable as shown in Table 1 at 5% significance level, inferences for Unit Root Test are also indicated. Further, the study also rejects the null hypothesis of no co-integration at 5% significance level. Table 2 shows Johansen Co-integration tests for both Trace and Maximum Eigenvalue. From Table 2 Trace test indicates two co-integrating equations while maximum Eigenvalue indicates one co-integrating equation at the 5% level of significance. However, the study fails to reject the null hypothesis for at Most 2, 3, 4 for Trace value and at Most 1, 2,3 and 4 for Maximum Eigenvalue since respective statistics values are less than critical values at 5% significance level. The results therefore confirms existence of long run relationship among EG, IF, LQ, VQ and P with co-integrating relationship as shown in Table 3. Through regression analysis, Table 4 provides values for estimation equation in the short run with EG as the Dependent Variable. The results indicate that all independent variables except Political goodwill have positive impact on countries economic growth. Unit increase in value addition output to manufacturing output ratio increases economic growth by 0.61. Unit increase in labour productivity increases EG by 0.29 while unit allocation of financial credit to new manufacturing investments contributing 0.1 increase in EG. Meanwhile, lack of political goodwill especially during election and campaign period have negative impact on EG. 
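As an illustration of the estimation pipeline described above (unit-root testing, a co-integration check, and OLS on the log-transformed model of equation (3)), the short Python sketch below uses pandas and statsmodels. It is not the authors' code: the file name and column names (EG, IF, LQ, VQ, P) are hypothetical placeholders for the KNBS series, and the log transform of course requires strictly positive values.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.tsa.stattools import adfuller
from statsmodels.tsa.vector_ar.vecm import coint_johansen

df = pd.read_csv("kenya_manufacturing_1975_2017.csv")  # hypothetical file

# Augmented Dickey-Fuller unit-root test on each series (cf. Table 1)
for col in ["EG", "IF", "LQ", "VQ"]:
    stat, pval, *_ = adfuller(df[col].dropna())
    print(f"ADF {col}: stat={stat:.3f}, p={pval:.3f}")

# Johansen co-integration test across the series (cf. Table 2)
jo = coint_johansen(df[["EG", "IF", "LQ", "VQ"]].dropna(), det_order=0, k_ar_diff=1)
print(jo.lr1, jo.cvt)   # trace statistics and their critical values

# OLS on the log-linear specification; P is a 0/1 election-period dummy
y = np.log(df["EG"])                      # requires positive growth rates
X = sm.add_constant(pd.DataFrame({
    "logIF": np.log(df["IF"]),
    "logLQ": np.log(df["LQ"]),
    "logVQ": np.log(df["VQ"]),
    "P": df["P"],
}))
ols = sm.OLS(y, X, missing="drop").fit()
print(ols.summary())    # coefficients and R-squared, comparable to Table 4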
The R-squared results in Table 4 also indicate that the variables in question explain 67.55% of the variation in economic growth, while the remaining 32.45% is attributable to other variables not factored into this study. Conclusion In conclusion, based on the research findings above, Kenya's manufacturing sector plays a big role in the growth of the country's economy, though it has not yet reached its peak. It is clear that, with the ongoing industrial uprising in Kenya and other developing countries whose economies largely depend on agriculture, more transformations are yet to be witnessed. With innovation and technological advancements, manufacturing output will be achieved through efficient and effective operations without compromising quality. The political environment appears to affect manufacturing firms' operations; therefore, political goodwill needs to be embraced, especially during electioneering periods. From the study, it is evident that during general election periods manufacturing output, new annual investments and credit issuance by financial institutions are sluggish, with a negative impact on manufacturing activities' contribution to GDP and Kenya's overall economic growth. Kenya's manufacturing sector is the leading one in East and Central Africa; as a result, other countries in Sub-Saharan Africa have developed a greater interest in Kenya. With the recent signing of the African Free Trade Area Agreement and the manufacturing revitalization pillar under the "Big Four" agenda, the government and the private sector, under the Kenya Association of Manufacturers alliance, need to work collaboratively as a team by formulating policies, channeling more resources in support of R&D, and investing in state-of-the-art equipment and technological advancements to ensure the manufacturing sector continues to transform at an accelerating pace. Further, the research recommends that the management of local manufacturing firms embrace an innovation culture in their internal structures in order to promote local sector competition as well as to meet global competition standards. In terms of labour productivity, the manufacturing sector needs to be at the forefront in ensuring its maximum output is achieved at minimum cost without interfering with its socio-economic role.
2020-01-02T21:11:27.185Z
2019-12-19T00:00:00.000
{ "year": 2019, "sha1": "9091131b96e58adcb8fea18333b737e3d263444a", "oa_license": "CCBY", "oa_url": "https://revistas.pucsp.br/index.php/risus/article/download/46532/30853", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "fd1d886a3ca57ee4651bb0c4cd4893a51b0db71c", "s2fieldsofstudy": [ "Business", "Economics" ], "extfieldsofstudy": [ "Business" ] }
259199628
pes2o/s2orc
v3-fos-license
A Phase 3, Randomized, Double-Blind, Comparator-Controlled Study to Evaluate Safety, Tolerability, and Immunogenicity of V114, a 15-Valent Pneumococcal Conjugate Vaccine, in Allogeneic Hematopoietic Cell Transplant Recipients (PNEU-STEM) Abstract Background Individuals who receive allogeneic hematopoietic cell transplant (allo-HCT) are immunocompromised and at high risk of pneumococcal infections, especially in the months following transplant. This study evaluated the safety and immunogenicity of V114 (VAXNEUVANCE; Merck, Sharp & Dohme LLC, a subsidiary of Merck & Co., Inc., Rahway, NJ, USA), a 15-valent pneumococcal conjugate vaccine (PCV), when given to allo-HCT recipients. Methods Participants received 3 doses of V114 or PCV13 (Prevnar 13; Wyeth LLC) in 1-month intervals starting 3–6 months after allo-HCT. Twelve months after HCT, participants received either PNEUMOVAX 23 or a fourth dose of PCV (if they experienced chronic graft vs host disease). Safety was evaluated as the proportion of participants with adverse events (AEs). Immunogenicity was evaluated by measuring serotype-specific immunoglobulin G (IgG) geometric mean concentrations (GMCs) and opsonophagocytic activity (OPA) geometric mean titers (GMTs) for all V114 serotypes in each vaccination group. Results A total of 274 participants were enrolled and vaccinated in the study. The proportions of participants with AEs and serious AEs were generally comparable between intervention groups, and the majority of AEs in both groups were of short duration and mild-to-moderate intensity. For both IgG GMCs and OPA GMTs, V114 was generally comparable to PCV13 for the 13 shared serotypes, and higher for serotypes 22F and 33F at day 90. Conclusions V114 was well tolerated in allo-HCT recipients, with a generally comparable safety profile to PCV13. V114 induced comparable immune responses to PCV13 for the 13 shared serotypes, and was higher for V114 serotypes 22F and 33F. Study results support the use of V114 in allo-HCT recipients. Clinical Trials Registration. clinicaltrials.gov (NCT03565900) and European Union at EudraCT 2018-000066-11. Allogeneic hematopoietic cell transplantation (allo-HCT) is an effective therapy for a variety of hematologic and immunologic conditions. Transplant recipients are at high risk of infection and associated complications, including high rates of disease due to Streptococcus pneumoniae [1][2][3]. Delayed immune reconstitution and complications like graft versus host disease (GVHD) further increase this risk while decreasing vaccine responses. Clinical Infectious Diseases M A J O R A R T I C L E the prevention of pneumococcal disease in allo-HCT recipients include administration of a 3-dose PCV series starting 3 months post-transplant followed by vaccination with the 23-valent pneumococcal vaccine (PPSV23) at 1 year posttransplant. A fourth dose of PCV is recommended in individuals with GVHD as these persons are less likely to respond to unconjugated vaccines [9][10][11][12][13]. While limited data exist on the effectiveness of post-allo-HCT pneumococcal vaccination, data from a cohort of auto-and allo-HCT recipients with high (>90%) pneumococcal vaccine uptake after transplant suggest that the use of PCVs reduces the incidence of invasive pneumococcal disease (IPD) [14]. The few studies that have evaluated the safety and immunogenicity of PCVs in HCT recipients are smaller, mostly open-label, nonrandomized studies [4,5,8,[15][16][17]. 
V114 (VAXNEUVANCE; Merck, Sharp & Dohme LLC, a subsidiary of Merck & Co, Inc., Rahway, NJ, USA [MSD]) is a 15-valent PCV approved in adults and children, containing all 13 serotypes in PCV13 (Prevnar 13; Wyeth LLC, marketed by Pfizer, New York, NY, USA) and epidemiologically important serotypes 22F and 33F [18][19][20][21] for expanded serotype coverage. Several studies have demonstrated the acceptable safety and immunogenicity profiles of V114 in immunocompetent adults and children [22][23][24], as well as in individuals with human immunodeficiency virus (HIV) or sickle cell disease [25,26]. This study was designed to evaluate the safety, tolerability, and immunogenicity of V114 when given to individuals following allo-HCT. Study Design This study was a phase 3, randomized, double-blind, comparator-controlled, multicenter clinical study that aimed to describe the safety, tolerability, and immunogenicity of V114 and PCV13 when administered as a 3-dose regimen in recipients of allo-HCT aged 3 years and older (protocol V114-022). It was conducted at 44 centers in 10 countries (Supplementary Table 1) from September 2018 to November 2021 (clinicaltrials.gov NCT03565900 and European Union at EudraCT 2018-000066-11). The study was designed to randomize approximately 300 participants (250 adults [age ≥18 y] and 50 children [ages ≥3 to <18 y]) in a 1:1 ratio to receive either a 3-dose series of V114 or PCV13 in 1-month intervals (study day 1, days 30-44, and 30-44 days following receipt of dose 2) starting 3-6 months after allo-HCT. At 12 months post-allo-HCT, the 3-dose PCV series was followed by either a single dose of PPSV23 (PNEUMOVAX 23; MSD) or, if the participant was diagnosed with chronic GVHD, a fourth dose of PCV (V114 or PCV13). Treatment allocation/randomization occurred centrally using an interactive response technology system and was stratified according to the following factors: (1) use of systemic steroids within 14 days of randomization, (2) age category (3 to <18 y, 18-49 y, or ≥50 y), and (3) haploidentical donor status. The study vaccines were managed, prepared, and administered by an unblinded pharmacist and/or other qualified study site personnel who were not involved in any participant assessments or other study procedures. All safety and immunogenicity assessments were conducted by blinded personnel, and the participant and/or participant's parent/legal representative were also blinded to the study vaccine received by the participant. The study was conducted in accordance with the principles of Good Clinical Practice and approved by the appropriate institutional review boards and regulatory agencies. Participants Eligible participants included those who (1) received allo-HCT 90 to 180 days prior to randomization for acute lymphoblastic or myeloid leukemia, chronic myeloid leukemia, Hodgkin's lymphoma, non-Hodgkin's lymphoma, myelodysplastic syndrome, myelofibrosis and myeloproliferative diseases, aplastic anemia, or sickle cell disease; (2) had a life expectancy of more than 12 months after allo-HCT; and (3) had stable engraftment. Written informed consent was obtained from each participant or parent/legal representative prior to any study procedure. 
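Purely as an illustration of the 1:1 stratified allocation described above (stratified by systemic steroid use within 14 days, age category, and haploidentical donor status), the Python sketch below implements permuted-block randomization within each stratum. It is not the interactive response technology system actually used in the trial; the block size and function names are hypothetical.

import random
from collections import defaultdict

BLOCK = ["V114", "V114", "PCV13", "PCV13"]   # permuted block of size 4, 1:1 ratio
_blocks = defaultdict(list)                  # one running block per stratum

def assign(steroid_use: bool, age_cat: str, haplo: bool) -> str:
    """Return the next allocation for the participant's stratum."""
    stratum = (steroid_use, age_cat, haplo)
    if not _blocks[stratum]:                 # start a fresh shuffled block
        _blocks[stratum] = random.sample(BLOCK, k=len(BLOCK))
    return _blocks[stratum].pop()

print(assign(False, "18-49", False))         # e.g. 'V114' or 'PCV13'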
Key exclusion criteria were as follows: (1) 1 or more allo-HCT; (2) an allo-HCT with ex vivo graft manipulation, in vivo T-cell depletion with alemtuzumab, or haploidentical allo-HCT with high-dose anti-thymocyte globulin; (3) an allo-HCT for multiple myeloma or, in participants aged 18 years or older, any nonmalignant disease other than sickle cell disease or aplastic anemia; (4) persistent or relapsed primary disease after allo-HCT; (5) history of grade 3 or 4 GVHD; (6) history of culture-positive PD after allo-HCT; and (7) known hypersensitivity to any PCV components. Key prior/concomitant therapy that would result in exclusion included (1) chimeric antigen receptor T-cell therapy, checkpoint inhibitor therapy, or anti-CD20 therapy after allo-HCT; (2) pneumococcal vaccination after allo-HCT; (3) receipt of systemic steroids of more than 0.5 mg/kg/day for 14 or more days within 30 days prior to study vaccine administration; and (4) receipt of any vaccine within 14 days prior to administration of study vaccines. Safety Assessments Participants were followed after each study vaccination for unsolicited and solicited adverse events (AEs). Postvaccination, participants or their parent/legal representative recorded daily body temperatures for either 5 days in adults or 7 days in pediatric participants and any complaints for 14 days postvaccination using an electronic Vaccination Report Card. The complaints were subsequently reviewed by the study investigators to determine if they met protocol-defined AE criteria and to assess seriousness, intensity, and causality to the study vaccine. Solicited AEs included injection-site AEs (pain, erythema, and swelling for adults and additionally induration for pediatric participants) collected for 5 days postvaccination for adults and 14 days postvaccination for pediatric participants and systemic AEs (myalgia, arthralgia, headache, and fatigue for adults and additionally urticaria for pediatric participants) collected for 14 days postvaccination for all participants. Serious AEs and deaths were collected for the length of the study. Immunogenicity Assessments Blood was collected prior to the first study vaccination and 30 days after the third PCV vaccination (day 90) as well as prior to and 30 days after vaccination with PPSV23 or a fourth PCV dose for the measurement of serotype-specific, antipneumococcal polysaccharide (PnPs) antibodies. Anti-PnP serotype-specific immunoglobulin G (IgG) for the 15 serotypes contained in V114 and the 13 serotypes contained in PCV13 was measured in sera using the pneumococcal electrochemiluminescence (PnECL) v2.0 assay, which was developed by MSD [27,28]. A validated multiplex opsonophagocytic assay was used to assess serotype-specific, antibody-mediated killing activity. Determination of Study Sample Size The overall objective of the study was to generate safety and immunogenicity data with V114 in persons who have undergone allo-HCT and, as such, the safety and immunogenicity endpoints are descriptive. The sample size of approximately 125 adult participants per group was selected to achieve a reasonably sized safety database in this population. After study initiation, a pediatric cohort was added to enroll approximately 25 participants in each group following initiation of the V114 pediatric phase 3 program. Analysis Populations Safety analyses were conducted in the "all participants as treated" population, which consisted of all randomized participants who received at least 1 study vaccination. 
The "per protocol" population was the primary analysis population for immunogenicity data and consisted of all randomized participants without protocol deviations that may substantially affect results of the immunogenicity endpoints. Safety Endpoints and Statistical Methods The primary safety endpoint was to evaluate the safety and tolerability of 3 doses of V114 and PCV13 with respect to the proportion of participants with AEs. Secondary safety endpoints were to evaluate safety and tolerability of PPSV23 or a fourth dose of V114 or PCV13 with respect to the proportion of participants with AEs. The within-group 95% CIs were calculated based on the exact method by Clopper and Pearson [29]. Immunogenicity Endpoints and Statistical Methods The primary immunogenicity endpoint was to evaluate the anti-PnP serotype-specific IgG geometric mean concentrations (GMCs) at day 90 for each vaccination group. The secondary objectives were to evaluate the anti-PnP serotype-specific opsonophagocytic activity (OPA) geometric mean titers (GMTs) at day 90 for each vaccination group, as well as OPA and IgG geometric mean fold rises (GMFRs) and the proportion of participants with a 4-fold or greater increase in antibody levels from baseline (day 1) to day 90. Exploratory immunogenicity endpoints were to evaluate anti-PnP serotype-specific IgG GMCs and OPA GMTs at 30 days following PPSV23 or 30 days following a fourth dose of V114 or PCV13. Point estimates and within-group 95% CIs were calculated by exponentiating the CIs of the mean of the natural log values based on the t-distribution. For continuous endpoints, the within-group 95% CIs were obtained by exponentiating the CIs of the mean of the natural log values based on the t-distribution. For dichotomous endpoints, the within-group 95% CIs were based on the exact method by Clopper and Pearson. Analysis Software All analyses were performed using SAS software, version 9.4, of the SAS System for Unix (copyright © 2012 SAS Institute, Inc). RESULTS A total of 274 participants (14 children [3 to <18 y of age] and 260 adults [≥18 y old]) were enrolled and vaccinated with either V114 (n = 139) or PCV13 (n = 135). Enrollment was contingent on the adult cohort, with a plan to cease enrollment once 250 adults were enrolled. As a result, a lower-than-expected number of pediatric participants were enrolled. Of the vaccinated participants, 115 in the V114 group (82.7%) and 111 in the PCV13 group (80.4%) completed the study (Figure 1). Participant demographic and baseline characteristics were generally comparable between vaccination groups, including prior steroid use and haploidentical donor status (Table 1). Graft versus host disease at randomization was also reported in both groups (43.2% in the V114 group, 36.3% in the PCV13 group). Safety During the study, most participants experienced 1 or more AEs. Due to different AE collection requirements in adults and children, safety outcomes in these cohorts were reported separately. The proportions of participants with systemic and serious AEs after any of the first 3 doses of study vaccine were comparable between intervention groups in both adults and children (Figure 2 and Supplementary Table 2). Adults in the V114 group had a numerically higher (>10-point difference between groups) proportion of solicited AEs, injectionsite AEs, and vaccine-related systemic AEs after any of the first 3 doses compared with the PCV13 group. The majority of AEs in each group were of mild-to-moderate intensity (Figure 2A). 
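Referring back to the interval methods named in the statistical-methods description above, the following Python sketch (an illustration, not the study's SAS code) computes a within-group 95% CI for a geometric mean by exponentiating the t-based CI of the natural-log values, and an exact Clopper-Pearson 95% CI for a dichotomous endpoint such as the proportion of participants with a 4-fold or greater rise. The numerical values shown are placeholders, not study data.

import numpy as np
from scipy import stats

def clopper_pearson(successes: int, n: int, alpha: float = 0.05):
    """Exact (Clopper-Pearson) two-sided CI for a binomial proportion."""
    lo = stats.beta.ppf(alpha / 2, successes, n - successes + 1) if successes else 0.0
    hi = stats.beta.ppf(1 - alpha / 2, successes + 1, n - successes) if successes < n else 1.0
    return lo, hi

def geometric_mean_ci(values, alpha: float = 0.05):
    """Geometric mean with CI from the t-distribution applied to log values.
    A GMFR is simply the geometric mean of post/pre fold rises, so the same
    function applies to fold-rise data."""
    logs = np.log(np.asarray(values, dtype=float))
    m = logs.mean()
    se = logs.std(ddof=1) / np.sqrt(len(logs))
    t = stats.t.ppf(1 - alpha / 2, df=len(logs) - 1)
    return np.exp(m), (np.exp(m - t * se), np.exp(m + t * se))

# Placeholder IgG concentrations (ug/mL) for one serotype in one group
gmc, ci = geometric_mean_ci([1.2, 0.8, 2.5, 3.1, 0.9, 1.7])
print(gmc, ci)
print(clopper_pearson(42, 120))   # e.g. responders with a >=4-fold rise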
The most common AEs reported were those solicited in the trial, including injection-site (pain, erythema, swelling) and systemic (myalgia, arthralgia, headache, fatigue) AEs. The 3 most common AEs after any of the first 3 PCV doses were injection-site pain, myalgia, and fatigue in adults and injectionsite pain, injection-site swelling, and myalgia in children. The proportions of participants with individual solicited AEs by intensity are displayed in Figure 2. The majority of participants experienced AEs of mild-to-moderate intensity with a duration of 3 days or fewer (Figure 2 and Supplementary Table 3). The distribution of maximum body temperature measurements after vaccination were also comparable between vaccination groups, with most participants reporting a maximum temperature of 38.0°C or less (Supplementary Table 4). At approximately 12 months post-allo-HCT, participants received either a single dose of PPSV23 (n = 164) or a fourth dose of PCV (n = 66) if there was a diagnosis of chronic GVHD (Figure 1). After PPSV23, the most common AEs in both groups were injection-site pain and myalgia. The majority of AEs in both groups were of mild-to-moderate intensity with a duration of 3 days or fewer (Supplementary Tables 5-7). Results were similar in participants who received a fourth PCV dose (Supplementary Tables 8-10). Of all vaccinated participants, 32.4% experienced 1 or more serious AEs (28.8% in the V114 group, 36.3% in the PCV13 group). The proportions of participants with serious AEs after any of the first 3 PCV doses, PPSV23, or the fourth PCV dose were generally comparable between groups (Supplementary Tables 11-14). Two serious AEs were considered by the investigator to be related to a vaccine given in the study, one after V114 and one after PPSV23. One adult participant with a history of acute lymphoblastic leukemia and cutaneous GVHD discontinued the study 36 days after V114 dose 2 due to a serious AE of immune thrombocytopenic purpura. Another adult participant in the V114 group was hospitalized after experiencing pyrexia (39.5°C) 1 day after PPSV23 that lasted 2 days. In both cases, the conditions resolved with no further complications. Seventeen participants (8 in the V114 group, 9 in the PCV13 group) died during the study. None of the deaths were determined by the investigator to be related to the study vaccines. New-onset or worsening GVHD during the study was reported for 28.1% of participants in the V114 group and 40.0% of participants in the PCV13 group. Relapse or progression of underlying disease occurred in 10.8% and 11.9% of participants in the V114 and PCV13 groups, respectively. Immunogenicity Immunogenicity analyses were pooled for adults and children. At day 90, V114 induced immune responses comparable to PCV13 for the 13 shared serotypes and higher than PCV13 for serotypes 22F and 33F for both IgG GMCs ( Figure 3) and functional antibodies (OPA) (Figure 4). Results were further supported by serotype-specific IgG and OPA GMFRs and the proportions of participants with a 4-fold or greater increase in serotype-specific antibody levels from baseline to day 90 ( Supplementary Figures 1 and 2). V114 also induced immune responses as assessed by serotype-specific IgG GMCs at day 90 in participants stratified by steroid use within 14 days of the first study vaccination, haploidentical donor status, and age. 
Results observed in the subgroup analyses, as well as in participants with a history of GVHD, were generally consistent with the overall population (Supplementary Tables 15-22). Immunogenicity was further assessed before and after either PPSV23 or the fourth PCV dose. PPSV23 induced immune responses to the 15 serotypes shared with V114 when given following either V114 or PCV13 (Supplementary Figures 3 and 4). Increases in serotype-specific IgG GMCs and GMFRs to all study vaccine serotypes were also observed in both groups following a fourth PCV dose (Supplementary Figures 3 and 5). DISCUSSION Following allo-HCT, individuals are highly vulnerable to infectious disease and subsequent infection-related mortality [30]. Strategies to prevent infection through vaccination are critical to shorten the window of susceptibility and increase survival for this vulnerable population. This study demonstrates the safety, tolerability, and immunogenicity of V114 following allo-HCT, which has the potential to reduce the remaining burden of IPD [31,32]. V114 was generally well tolerated in all participants throughout the vaccination series and comparable to recipients of PCV13, with the majority of solicited AEs experienced being of mild-to-moderate intensity and short duration. The proportions of adult participants who experienced solicited AEs after any V114 dose were generally comparable to PCV13 for all except injection-site pain, injection-site swelling, and myalgia, which were numerically higher in the V114 group, but mild-to-moderate in intensity and of short duration. The results observed in the planned subgroup analyses were generally consistent with the overall population. New-onset or worsening GVHD occurred in 28.1% of V114 recipients and 40.0% of PCV13 recipients; however, the clinical significance of this is unknown. Overall, the safety profile of V114 was consistent with the safety of other PCVs in this population [4][5][6]8]. Following the recommended immunization schedule, a 3-dose series of V114 starting 3-6 months after allo-HCT elicited robust quantitative and qualitative immune responses as evidenced by serotype-specific IgG GMCs and OPA GMTs to all 15 serotypes. Due to delayed B-cell reconstitution after transplant [9], PPSV23 is recommended at least 1 year after transplant to broaden serotype coverage. Vaccination with PPSV23 following V114 was well tolerated and comparable to vaccination with PPSV23 following PCV13. In addition, the study demonstrates that PPSV23 or a fourth dose of PCV (in recipients with chronic GVHD) increased serotype-specific antibody levels at 1 year following transplant. These data suggest that serotype-specific immunity can be maintained with these vaccines at 1 year following transplant in the presence or absence of chronic GVHD. In addition, an advantage of V114 in this regimen is the higher immune response induced against serotypes 22F and 33F during the post-transplant window that is sustained following PPSV23. This study has limitations. While the study enrolled a substantial number of participants with the risk condition of interest, the cohort size was insufficient for hypothesis testing of immunogenicity endpoints between groups or for the assessment of infrequently occurring AEs. Several factors contributed to low enrollment of pediatric participants, such as a later start Stacked bar graphs on the right display solicited AEs by intensity. 
For injection-site swelling, erythema, and induration, intensity was assigned according to size as follows: mild events were those measuring 0 to ≤1 inch, moderate events were >1 to ≤3 inches, and severe events were >3 inches. Injection-site induration and urticaria were only solicited in pediatric participants. Abbreviations: AE, adverse event; P, PCV13; PCV, pneumococcal conjugate vaccine; V, V114. *Determined by the investigator to be related to the study vaccine. of the pediatric V114 phase 3 program as compared with the adult one, the generally lower number of pediatric HCTs performed annually and the coronavirus disease 2019 (COVID-19) pandemic [33]. Certain populations of transplant recipients were excluded from the study. Vaccine effectiveness and persistence of antibody levels following booster were not evaluated. A 20-valent PCV is approved for use in the general adult population, [34] but has not been assessed in this population and was not available as a comparator for this study. The results of this study demonstrate that V114 is well tolerated in allo-HCT recipients, with a safety profile generally comparable to PCV13. V114 was immunogenic in the 3-dose series, as well as at 1 year post-transplant. These results support the use of V114 as well as V114 followed by PPSV23 in recipients of allo-HCT. Supplementary Data Supplementary materials are available at Clinical Infectious Diseases online. Consisting of data provided by the authors to benefit the reader, the posted materials are not copyedited and are the sole responsibility of the authors, so questions or comments should be addressed to the corresponding author.
2023-06-21T06:17:01.220Z
2023-06-20T00:00:00.000
{ "year": 2023, "sha1": "93a696c86844ec8fc8b96435de9b04ec1b691cb6", "oa_license": "CCBYNCND", "oa_url": "https://academic.oup.com/cid/advance-article-pdf/doi/10.1093/cid/ciad349/50654016/ciad349.pdf", "oa_status": "HYBRID", "pdf_src": "PubMedCentral", "pdf_hash": "984c860531c186e7993900faa6c70a92c131aa19", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
216645713
pes2o/s2orc
v3-fos-license
Methanolic Extract of the Herb Ononis spinosa L. Is an Antifungal Agent with no Cytotoxicity to Primary Human Cells Ononis spinosa L. is a plant traditionally used as folk remedy. There are numerous studies regarding chemical constituents and health beneficial properties of Ononidis Radix. The following study was designed to investigate chemical composition and antifungal potential of the methanolic extract obtained from the O. spinosa L. herb. Chemical analyses regarding phenolic compounds of O. spinosa were performed by liquid chromatography with mass spectrometry (LC-DAD-ESI/MSn). Antifungal activity, antibiofilm properties and antifungal mode of action of the extract were evaluated, as well as cytotoxicity. Chemical analyses revealed the presence of flavonoids, isoflavonoids and phenolic acids in O. spinosa, with kaempherol-O-hexoside-pentoside being the most abundant compound (5.1 mg/g extract). Methanolic extract was active against all of the tested microfungi with Penicillium aurantiogriseum being the most sensitive to the extract inhibitory effect at 0.02 mg/mL; and effectively inhibited biofilms formed by Candida strains. Minimum fungicidal concentrations of extract rose in the presence of ergosterol and leakage of cellular components was detected. The extract showed no cytotoxicity to human gingival fibroblast (HGF-1) cells. This study significantly contributes to overall knowledge about medicinal potential of O. spinosa herbal extract and enlightens previously unrevealed properties. O. spinosa aerial parts seem to be an interesting candidate for the development of antifungal preparations, non-toxic to human cells. Introduction Colossal structural diversity and biological activity of natural molecules are unrivaled by any available synthetic drugs in reference libraries. As such, these privileged platforms derived from nature serve as important scaffolds for the design of novel therapeutic candidates, including antifungals. More than a billion people are suffering from various fungal infections, with more than 1.5 million having fatal consequences [1]. These infections are difficult to treat making the mortality rates high Chemical Composition of Phenolic Compounds The chromatographic data obtained from the High-Performance Liquid Chromatography coupled with a Diode Array Detector and Electrospray Mass Spectrometry (HPLC-DAD-ESI/MSn) analyses of the phenolic compounds in the extracts of O. spinosa are presented in the Table 1 and Figure 1. We have identified 16 compounds in the extract, which are counting seven flavonoids, five phenolic acids and four isoflavonoids. Chromatographic characteristics corresponding to standard compounds caffeic acid, quercetin-3-O-glucoside and kaempherol-3-O-glucoside, were used for positive identification of Peaks 3, 9 and 13, respectively. The most abundant class of the compounds were flavonoids with the highest number of tentatively identified compounds and with the highest quantity (12.2 ± 0.1 mg/g extract) as well. , had given a particular MS 2 fragment at m/z 285 (kaempherol aglycone), analogous to the loss of 324 u (two hexosyl units), 294 u (one hexosyl and one pentosyl unit) and 162 u (one hexosyl unit), respectively. These Peaks (7, 10 and 12) were tentatively identified as kaempherol-O-dihexoside, kaempherol-O-hexoside-pentoside and kaempherol-O-hexoside (with a different retention time when compared to the peak 13), respectively. 
According to their chromatographic characteristics, Peaks 6 and 11 were found to be glycosylated derivatives of quercetin, and these were further identified as quercetin-O-hexoside-pentoside and acetylquercetin-O-hexoside, respectively. As far as the authors knowledge there are no previous reports on O. spinosa regarding the identification of this type of flavonoids. Nevertheless, these types of compounds have been previously identified in others Ononis varieties, such as O. arvensis [15] and O. angustissima L. [16] aerial parts. Antifungal Activity of O. spinosa Methanolic Extract Antifungal activity of the methanolic extract obtained from the aerial parts of O. spinosa is presented in Table 2. The activity of extract was tested against wide range of pathogenic and contaminant fungi, including human, animal and plant pathogens, as well as food contaminant species. Antifungal activity of O. spinosa was the most prominent against food isolated species Penicillium aurantiogriseum with minimum inhibitory concentration (MIC) of 0.02 mg/mL and minimum fungicidal concentration (MFC) of 0.04 mg/mL. On the other hand, the most resistant species to the effect of O. spinosa methanolic extract was Penicillium ochrochloron, a species frequently isolated from the soil and apples, with MIC of 5.00 mg/mL and MFC of 10 mg/mL. Antifungal activity of tested extract was the most prominent against Penicillium aurantiogriseum followed by Aspergillus fumigatus, Candida tropicalis, A. versicolor, A. niger, Trichoderma viride, P. funiculosum, C. albicans, C. krusei, A. ochraceus and P. ochrochloron. As far as we know, this is the first study reporting antifungal activity of the methanolic extract obtained from the herb of O. spinosa. The activity of O. spinosa was comparable to the activity of commercial fungicides. The most promising effect was achieved on A. fumigatus and P. aurantiogriseum, to which commercial antifungal drugs ketoconazole and bifonazole showed weaker activity when compared to the antifungal action of O. spinosa. Most of the tested microfungi strains gave the similar results regarding MICs and MFCs, which were in the activity range of tested commercial positive controls (ketoconazole and bifonazole). Previous literature data indicated antifungal potential of extract obtained from the roots of O. spinosa, which is traditionally used in ethnomedicine [25]. Results obtained in this study indicate that O. spinosa methanolic extract obtained from the aerial plant parts possessed antifungal properties as well. A study by Deliorman Orhan et al., [25] indicated that the infusion made from Ononidis Radix is active against the following fungal species: Candida albicans (MIC 0.016 mg/mL; MFC 0.064 mg/mL), C. tropicalis (MIC 0.016 mg/mL; MFC 0.064 mg/mL), C. parapsilopsis (MIC 0.008 mg/mL; MFC 0.016 mg/mL), Trichophyton rubrum (MIC 0.016 mg/mL; MFC not active), Epidermophyton were tentatively identified as caffeic acid hexoside and ferulic acid hexoside, respectively, based on its characteristic Ultraviolet-Visible (UV-Vis) spectra, fragmentation pattern and the information previously reported by Barros et al., [17]. Peaks 4 and 5 ([M − H] − at m/z 473) were tentatively identified as the cis and trans isomers of chicoric acid, respectively, based on the chromatographic information previously reported by Chen et al., [18]. As previously mentioned, to the best of our knowledge there are no studies, on the composition in phenolic acids in O. spinosa. 
Nevertheless, these types of compounds have already been reported in the O. angustissima L. aerial parts [16]. Finally, the groups of isoflavonoids found in O. spinosa were less abundant in comparison to the other two groups of phenolic compounds. Though, this group has been extensively studied in O. spinosa [19][20][21][22]. malonyl-hexoside, respectively, based on its chromatographic characteristic, as also their fragmentation pattern, which has been previously reported by Gampe et al. [5]. Although two of these compounds were found in trace amounts in the studied sample (peaks 8 and 16), it is important to highlight the relevance of isoflavonoids to human health, having already been intensively studied, mainly in legumes, for their effects to inhibit the proliferation of certain types of cancers or even against some neurodegenerative diseases [23,24]. Antifungal Activity of O. spinosa Methanolic Extract Antifungal activity of the methanolic extract obtained from the aerial parts of O. spinosa is presented in Table 2. The activity of extract was tested against wide range of pathogenic and contaminant fungi, including human, animal and plant pathogens, as well as food contaminant species. Antifungal activity of O. spinosa was the most prominent against food isolated species Penicillium aurantiogriseum with minimum inhibitory concentration (MIC) of 0.02 mg/mL and minimum fungicidal concentration (MFC) of 0.04 mg/mL. On the other hand, the most resistant species to the effect of O. spinosa methanolic extract was Penicillium ochrochloron, a species frequently isolated from the soil and apples, with MIC of 5.00 mg/mL and MFC of 10 mg/mL. Antifungal activity of tested extract was the most prominent against Penicillium aurantiogriseum followed by Aspergillus fumigatus, Candida tropicalis, A. versicolor, A. niger, Trichoderma viride, P. funiculosum, C. albicans, C. krusei, A. ochraceus and P. ochrochloron. As far as we know, this is the first study reporting antifungal activity of the methanolic extract obtained from the herb of O. spinosa. The activity of O. spinosa was comparable to the activity of commercial fungicides. The most promising effect was achieved on A. fumigatus and P. aurantiogriseum, to which commercial antifungal drugs ketoconazole and bifonazole showed weaker activity when compared to the antifungal action of O. spinosa. Most of the tested microfungi strains gave the similar results regarding MICs and MFCs, which were in the activity range of tested commercial positive controls (ketoconazole and bifonazole). Previous literature data indicated antifungal potential of extract obtained from the roots of O. spinosa, which is traditionally used in ethnomedicine [25]. Results obtained in this study indicate that O. spinosa methanolic extract obtained from the aerial plant parts possessed antifungal properties as well. A study by Deliorman Orhan et al., [25] indicated that the infusion made from Ononidis Radix is active against the following fungal species: Candida albicans (MIC 0.016 mg/mL; MFC 0.064 mg/mL), C. tropicalis (MIC 0.016 mg/mL; MFC 0.064 mg/mL), C. parapsilopsis (MIC 0.008 mg/mL; MFC 0.016 mg/mL), Trichophyton rubrum (MIC 0.016 mg/mL; MFC not active), Epidermophyton floccosum (MIC 0.066 mg/mL; MFC not active) and Microsporum gypseum (MIC 0.032 mg/mL; MFC not active) [25]. Although in the paper by Deliorman Orhan et al., [25] the antifungal action of the O. spinosa root infusion was analyzed, obtained results are comparable to ours. 
Furthermore, ethanolic and water solutions of the ashes obtained from O. spinosa plant [3] showed anticandidal activity. Fungicidal concentrations were in range of 1.25 µg/mL (towards C. albicans) to 40 µg/mL (towards C. glabrata) for ethanolic ash solution; and from 1.25 µg/mL for C. albicans and to not active against C. glabrata for aqueous ash solution [3]. Currently, some synthetic antifungals are associated with some adverse effects and there is still no effective cure for some fungal infections in humans [26]. Namely, it has been revealed that Pharmaceuticals 2020, 13, 78 6 of 13 infections caused by anthropophilic and zoophilic fungi, which represent the most common fungal infections limited to human and animal skin, nails and mucous membranes, are frequently difficult to treat with topical therapeutics and in some cases they may require long term treatment with systemic antifungals [27]. Furthermore, fungicides used in agricultural industry to prevent growth of phytopathogenic fungi may have harmful effects on the environment, as well as on humans and animals through further food chain [28]. Results obtained in this study showed that aerial parts of O. spinosa might provide a good basis for development of natural antifungals. Therefore, this study presents one of the many attempts to resolve issues arisen from the use of synthetic antifungal preparations, both in the treatment of humans and animals, as well as in application in agricultural industry. Antibiofilm Activity of O. spinosa In this study antibiofilm activity of the O. spinosa methanolic extract was tested on C. albicans, C. krusei and C. tropicalis (Table 3). These species are able to form structured communities that are attached to surfaces by specific signaling molecules [26]. The results of this study pointed to antibiofilm activity of O. spinosa ( Table 3) Insights into the Modes of Antifungal Action Ergosterol is one of the crucial molecules found in fungal cell membranes. Since it is a vital molecule for fungal survival, the enzymes involved in its biosynthesis often present targets for the activity of effective antifungals [29]. Herein, we studied the survival of C. albicans in the presence of externally added increasing concentrations of ergosterol (25-100 µg/mL) and serial dilutions of the O. spinosa extract in order to determine whether the extract achieves its antifungal effect via disruption of ergosterol biosynthetic pathway. The results presented in the Figure 2 revealed that MFC values were increased with increasing concentrations of external ergosterol. This is leading to the conclusion that ergosterol biosynthetic pathway is disrupted by compounds presented in O. spinosa extract. cytoplasmic membrane occurs in the presence of 2 × MIC concentration of the extract. Obtained results revealed time-dependent effect of O. spinosa extract on the cell membrane permeability (Figure 3). Namely, optical densities at 260 nm and 280 nm were increased rapidly after 30 min of incubation and achieved maximum values after 90 min, indicating release of intracellular components from the cells of C. albicans to the extracellular compartment. The results pointed out that the fungal cell membrane represents one of the targets of O. spinosa antifungal action. In general, results obtained in this study presented preliminary insight into the mode of antifungal action of O. spinosa extract. 
Based on the results, it could be concluded that the extract exerted its antifungal activity by disrupting ergosterol biosynthesis and by modulating cell membrane permeability. This study represents one of the first reports exploring possible modes of action of the O. spinosa methanolic extract obtained from the aerial parts of the plant. Fungal pathogens possess conserved eukaryotic signaling pathways, which enable them to adapt and survive in the environment, including host cells [29]. The slight differences of fungal eukaryotic structure in relation to human cells are attractive for antifungal drug development [30]. The most important targets of currently used antifungal drugs are enzymes involved in the ergosterol biosynthetic pathway [29]. Based on the literature data, it might be concluded that targeting the ergosterol biosynthetic pathway is the most common mode of action of major antifungals. Having in mind that the ergosterol pathway has already been successfully targeted by antifungal substances currently in use [29], natural products found to act against targets within the ergosterol biosynthetic pathway are more likely to be effective. Besides targeting the ergosterol biosynthetic pathway, some antifungal products have proven able to penetrate and disrupt fungal cell membranes, which are rich in unsaturated fatty acids; this leads to rearrangement of membrane constituents, loss of cell viability and, eventually, cell death [31]. Here we showed that the extract of O. spinosa provoked leakage of intracellular contents from C. albicans cells, which is one of the indicators pointing to the cell membrane as an additional target of the tested extract. Our results further showed that the extract of O. spinosa is a complex mixture of compounds that act by different mechanisms. It is interesting to point out that the development of fungal resistance to the extract is very unlikely, since the extract acts by different mechanisms affecting different targets. This makes the O. spinosa extract a strong candidate for future application.

Evaluation of Cytotoxicity of the O. spinosa Methanolic Extract on Primary Human Gingival Fibroblast Cells
Currently, a wide range of different immortalized and primary cell and tissue models are available for in vitro toxicity evaluation. The evaluation of drug cytotoxicity is an important step in biomedical research and represents a primary consideration in drug selection. Additionally, the first step in the development of novel antifungal drugs includes toxicity studies on human cells in culture. We used human gingival fibroblast (HGF-1) cells to test the possible cytotoxic effect of the extract on primary human cells. No cytotoxicity of the O. spinosa methanolic extract on the HGF-1 cells was observed at concentrations up to 400 µg/mL, a concentration which is considered the limit of toxicity (Figure 4A). Namely, as shown in Figure 4A, there was no statistically significant difference (p > 0.05) in relative growth rate between HGF-1 cells treated with different concentrations of the O. spinosa extract and non-treated control cells. Additionally, we analyzed the morphology of the control HGF-1 cells and of HGF-1 cells treated with 400 µg/mL of the O. spinosa extract (Figure 4B,C). The obtained results revealed that treatment of the cells with the extract did not induce changes in cell morphology (Figure 4C) when compared with the morphology of non-treated fibroblast cells (Figure 4B). Therefore, we showed that O. spinosa had no influence on primary human gingival fibroblast cells regarding growth rate and cellular morphology.

Collection and Extraction of O. spinosa

The aerial parts of wild-growing O. spinosa were collected in Vranje (Serbia) in July 2018 and authenticated. The samples were dried, prepared and successively extracted with methanol as previously described by Ćirić et al. [32].

Analysis of Phenolic Compounds

The phenolic profile was determined by LC-DAD-ESI/MSn (Dionex Ultimate 3000 UPLC, Thermo Scientific, San Jose, CA, USA), and the compounds were separated and identified as previously described by Bessada et al. [33].
The obtained extracts were redissolved at a concentration of 20 mg/mL in an ethanol:water (80:20, v/v) mixture. Double online detection was performed using a DAD (280, 330 and 370 nm as preferred wavelengths) and a mass spectrometer (MS). The MS detection was performed in negative mode, using a Linear Ion Trap LTQ XL mass spectrometer (Thermo Finnigan, San Jose, CA, USA) equipped with an ESI source. The identification was performed based on the chromatographic behavior, UV-vis and mass spectra of the compounds, by comparison with standard compounds, when available, and with data reported in the literature, giving a tentative identification. Data acquisition was carried out with the Xcalibur® data system (Thermo Finnigan, San Jose, CA, USA). In order to perform a quantitative analysis, a calibration curve for each available phenolic standard was constructed based on the UV-vis signal. The quantification of the identified phenolic compounds for which a commercial standard was not available was performed through the calibration curve of the most similar available standard: caffeic acid (y = 388345x + 406369, R² = 0.9939), ferulic acid (y = 633126x − 185462, R² = 0.999), hesperetin (y = 34156x + 268027, R² = 0.9999), naringenin (y = 18433x + 78903, R² = 0.9998), quercetin-3-O-glucoside (y = 34843x − 160173, R² = 0.9998), and quercetin-3-O-rutinoside (y = 13343x + 76751, R² = 0.9998). The results were expressed as mg/g of extract. A modified microdilution technique was utilized to investigate the antifungal activity, as previously described by Soković et al. and the Clinical and Laboratory Standards Institute [34,35]. Briefly, MICs and MFCs were determined by a serial dilution technique using 96-well microtiter plates. The extract of O. spinosa was dissolved in 5% dimethyl sulfoxide (DMSO). The commercial fungicides bifonazole and ketoconazole (Srbolek, Belgrade, Serbia) were used as positive controls (1-3500 µg/mL in DMSO), while 5% dimethyl sulfoxide in water was used as a negative control.

Biofilm Inhibition Assay on Candida Strains

The effect of the O. spinosa methanolic extract on biofilm formation by C. albicans, C. krusei and C. tropicalis was determined as previously described by Popovic et al. [36]. The extract of O. spinosa was dissolved in 5% dimethyl sulfoxide (DMSO). Crystal violet staining was used to determine biofilm reduction, followed by measurement of the absorbance of the stain at 570 nm using a plate reader. The MIC was defined as the minimum concentration of antifungal agent that inhibited further growth of the initial biofilm, and the minimum fungicidal concentration (MFC) was defined as the concentration presenting no fungal growth (empty well). Fluconazole (dissolved in 5% DMSO) was used as a positive control, while 5% DMSO in water was used as a negative control.

Insights into the Mode of Antifungal Action: Ergosterol Binding and Membrane Permeability Assays

Assays were performed on the Candida albicans strain. Serial dilutions of the extracts were made in microtiter plates, as for the microdilution method, with the addition of ergosterol (25-100 µg/mL) [3]. After 24 h of incubation at 37 °C, MFCs were determined as explained for the antifungal activity assay. The effect of the O. spinosa extract on membrane permeability was evaluated as previously described by Stojković et al. [37]. The strain was incubated with the O. spinosa extract at 2 × MIC for different time periods: 15, 30, 45, 60 and 90 min. C. albicans incubated with 10 mM PBS (pH 7.4) was used as a control. The optical density was measured at 260 nm and 280 nm (Agilent 8453 spectrophotometer) at room temperature (25 °C).
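As a side note on the quantification step described above, each identified compound's concentration follows from inverting the linear calibration y = ax + b of its assigned standard. The sketch below simply encodes the curves reported above; the peak-area value in the final line is a made-up number used only to show the call, and the returned concentration is in the units in which each curve was constructed.

```python
# Calibration curves reported above: UV-vis signal y = a * concentration + b.
CURVES = {
    "caffeic acid":             (388345.0, 406369.0),
    "ferulic acid":             (633126.0, -185462.0),
    "hesperetin":               (34156.0, 268027.0),
    "naringenin":               (18433.0, 78903.0),
    "quercetin-3-O-glucoside":  (34843.0, -160173.0),
    "quercetin-3-O-rutinoside": (13343.0, 76751.0),
}

def concentration(standard: str, peak_area: float) -> float:
    """Invert the linear calibration y = a*x + b of the most similar
    available standard to obtain the concentration x from a peak area."""
    a, b = CURVES[standard]
    return (peak_area - b) / a

# Hypothetical peak area, purely to illustrate the call.
print(concentration("caffeic acid", 2.0e6))
```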
Investigation of O. spinosa Methanolic Extract Cytotoxic Activity

The cytotoxic effect of the O. spinosa methanolic extract was determined on human gingival fibroblast cells HGF-1 (ATCC® CRL-2014™) using the crystal violet assay as described by Feoktistova et al. [38], with some modifications. The extract of O. spinosa was dissolved in PBS to a final concentration of 8 mg/mL. HGF-1 cells were grown in fibroblast basal medium (ATCC® PCS-201-030™) at 37 °C in a CO2 incubator. Forty-eight hours before treatment, HGF-1 cells were seeded in a 96-well adhesive microtiter plate at a seeding density of 4 × 10³ cells per well. After 48 h, the medium was removed and the cells were treated for the next 48 h with various concentrations of the extract in triplicate wells. Subsequently, the medium was removed, the cells were washed twice with PBS and stained with 0.4% crystal violet staining solution for 20 min at room temperature. Afterwards, the crystal violet staining solution was removed, the cells were washed in a stream of tap water and left to air-dry at room temperature. The absorbance of the dye dissolved in methanol was measured in a plate reader at 570 nm. The results were expressed as relative growth rate (%).

Statistical Analysis

All analyses were performed in triplicate, and each replicate was also quantified three times. Data were expressed as mean ± standard deviation, where applicable. In cases where statistically significant differences were identified, the dependent variables were compared using Tukey's honestly significant difference (HSD) test.

Conclusions

The present study revealed the underestimated biological potential of the aerial parts of the O. spinosa plant. The methanolic extract was a good source of phenolic compounds, as indicated by the presence of phenolic acids, flavonoids and isoflavonoids. Flavonoids were the most dominant class of the identified compounds, followed by phenolic acids and isoflavonoids. For the first time, we reported the presence of phenolic acids in the methanolic extract of O. spinosa, together with types of flavonoids not previously reported in the investigated species. This is the first study reporting the antifungal and antibiofilm activities of the methanolic extract obtained from the herb of O. spinosa. Based on the results, it could be concluded that the extract exerted its antifungal activity by disrupting ergosterol biosynthesis and by modulating cell membrane permeability. Finally, the extract was not toxic to HGF-1 cells, which makes it a good candidate for further antifungal drug development.
On possible origins of trends in financial market price changes

We investigate possible origins of trends using a deterministic threshold model, where we refer to the long-term variabilities of price changes (price movements) in financial markets as trends. From the investigation we find two phenomena. One is that monotonic increase and decrease trends can be generated by dealers' minuscule changes in mood, which correspond to possible fundamentals. The other is that the emergence of trends is all but inevitable in realistic situations, because dealers cannot always obtain accurate information about deals, even if there is no influence from fundamentals and technical analyses.

Introduction

Many phenomena in human society are composed of human choices. Buying and selling is one of them, and we regard financial markets as the aggregation of dealers' choices and their interactions. A major indicator in financial markets is the price change (price movement). Price changes in financial markets fluctuate irregularly, as shown in Fig. 1. However, the mechanism of the fluctuations has not yet been elucidated, and the question remains long-standing and challenging. There are two major approaches to tackling the question. One is to investigate the features or nature of the data from the viewpoint of statistics and dynamical systems [1,2,3]; the other is to construct an artificial market using a model and investigate events in that market [4,5,6,7,8,9,10,11]. The merit of the former approach is that it provides us with various knowledge and insights for understanding the phenomena of price changes. However, this approach is restrictive, because it does not always lead us to an understanding of the mechanism of price changes produced by the interaction between dealers' choices and market price properties [10]. In contrast, the latter approach contributes significantly to clarifying the mechanism. As the purpose of this paper is to find possible origins of trends (see below) in financial market prices, we follow the latter approach. Several such models, called dealer models, have been proposed [4]. The dealer model, which is an agent-based model, constructs an artificial market. The first dealer model was introduced by Takayasu et al. in 1992 [4]. They considered that a market is composed of many dealers and that buying and selling are interactions among them involving discontinuous (nonlinear) and irreversible processes. To implement this mechanism they introduced a numerical model of financial market prices using threshold dynamics [4]. In the model, deterministic dynamics is assumed for an assembly of agents describing mutual trades by threshold dynamics, including discontinuous irreversible interactions. After this pioneering work, numerous studies have been done (for example, see Refs. [5,6,7,8,9,10,11]). These models have been improved and refined to reproduce basic empirical laws such as the power-law distributions of price changes, the slow decay of the autocorrelation of volatility, and so on. For details see [11]. In this paper, we observe afresh and carefully the behaviours of financial market data. As mentioned above, to put it briefly, price changes in financial markets show irregular fluctuations (see Fig. 1). The irregular fluctuations can be divided into two main features: short-term variabilities and long-term variabilities. The long-term variabilities are usually called trends.
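Before turning to the model, the distinction between the two kinds of variability can be made concrete with a toy decomposition: smooth the series and call the smooth part the trend, and the remainder the short-term variability. The sketch below is purely illustrative and is not a method used in this paper; the window length and the synthetic random-walk data are arbitrary choices.

```python
import numpy as np

def decompose(prices: np.ndarray, window: int = 200):
    """Split a price series into a slowly varying part (the 'trend')
    and the remaining short-term variability via a moving average."""
    kernel = np.ones(window) / window
    trend = np.convolve(prices, kernel, mode="same")  # edge effects ignored in this sketch
    return trend, prices - trend

# Synthetic random-walk 'price' data, purely for illustration.
rng = np.random.default_rng(1)
prices = np.cumsum(rng.normal(size=5000))
trend, short_term = decompose(prices)
```

Any reasonable smoother would serve here; the point is only to fix the vocabulary used in the rest of the paper.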
(The data in Fig. 1 can be obtained from http://ratedata.gaincapital.com/.) The behaviours exhibiting these two fluctuations are extremely natural for price changes in financial markets. Hence, models must be built or improved so as to reproduce these two fluctuations. A possible origin of the short-term variabilities has already been indicated by Takayasu et al. [4]. Although numerous works have followed that one, discussions on the origins of trends have, curiously, not been lively, even though many market participants observe trends rather than the details of the short-term variabilities to grasp the overall movement of markets. One of the major underlying reasons seems to be that the existence of trends is taken for granted, since they can be seen in any financial data to a varying degree. Also, two factors, fundamentals (for example, the gross domestic product (GDP), which changes according to policy interest rates, remarks by senior administration officials or central bankers, and so on) and technical analyses, are vaguely considered to be the main origins of these fluctuations. The major trends in the medium and long term are believed to be generated by fundamentals. On the other hand, it is well known that trends can also be observed without remarkable news or important economic indices being released. Under these circumstances, dealers have too few clues to predict the movement of price changes. In this case, dealers make their deals based on technical analyses. As a result, minor trends of the moment are generated or accentuated by these actions. Hence, it seems to be commonly accepted that fundamentals generate relatively long-period trends and technical analyses accelerate the movement of the moment rather than generating trends. From these two factors we obtain the following conventional idea: prices rise when the momentum of buyers exceeds that of sellers, and prices fall when the momentum of sellers exceeds that of buyers. This idea seems to make sense. Nonetheless, it brings up the following simple question on the origin of trends: if it were not for these factors, would trends not appear at all? We focus our attention mainly on this point. It should be noted here that, although we understand that it is important to reproduce the basic empirical characteristics of the data, we do not aspire to such reproduction in this paper. We simply focus on the origins of trends. In this paper, two investigations are shown. One is to make an attempt to generate monotonic increase and decrease at our disposal in the framework of the dealer model, to verify the common belief that medium- and long-term trends are under the influence of fundamentals. The other is to show that trends are spontaneously generated even if fundamentals and technical analyses have no influence on financial markets, which suggests a possible unknown mechanism of generating trends. A note on data is in order here: owing to progress in computer technology, we can obtain high-frequency financial data with high time resolution, generally called tick data. A tick is the minimum movement in the price of a financial instrument, and the term refers to the minute change in price from deal to deal. Various analyses have been done using tick data. However, we note that not all deals are recorded in tick data: the tick data we can obtain are measured at certain time intervals. There are few discussions about the discrepancy in statistical nature between such intermittent data and data containing all deals. In a subsequent paper we shall make this investigation.
This paper is organized as follows. We use a deterministic dealer model with threshold elements which has been previously proposed. In Section 2, we briefly review the model, show the behaviours of the data it generates, and make some observations. In Section 3 we make attempts to generate given types of trends under some well-defined situations. Section 4 is for the discussion and summary.

Current technologies: deterministic threshold dealer model

To generate trends reflecting fundamentals and to find possible (or unknown) origins of trends, we use a previously proposed deterministic dealer model with threshold elements. As the model was proposed in 1992 by Takayasu et al. [4], we refer to it as "dealer model 92." As the dealer model 92 is deterministic, including no probabilistic factor, we expect to have a better sense of the behaviours it generates. As the model treats the prices of all deals, the price data generated by the model should be compared with tick data. For simplicity, the deal in the model is assumed to concern one brand in one market. Also, we assume that all dealers participate in deals under the same rule. After the dealer model 92 was proposed, some modifications were made [5,10,11]. However, we consider these modifications to be artificial, since in them all deals are done by only two dealers, that is, one buyer and one seller. In actual deals there are usually one or more sellers, which we consider one of the vital aspects of deals. As the dealer model 92 treats such a situation, we use the dealer model 92. In this section we briefly review the dealer model 92 and clarify the mechanism of the price change.

Dealer model 92

A deal, composed of consecutive selling and buying, is a typical discontinuous and irreversible process. A deal is done if a buying condition and a selling condition meet, and it is undone if not. This is the origin of the discontinuity. The buyer and the seller of the last deal would never deal again under the identical condition, at least for a while. This is the origin of the irreversibility. The dealer model 92 was introduced to reflect these aspects of dealing [4]. The market is composed of N dealers. Let the ith dealer's buying price and selling price be B_i and S_i, respectively. The selling price S_i is larger than the buying price B_i, and the difference L_i = S_i − B_i is always positive. For simplicity we use a constant value L for L_i. This simplification gives

S_i(t) = B_i(t) + L.

In this model a deal is done when the following condition is satisfied:

max{B_i(t)} − min{B_i(t)} ≥ L,   (1)

where max{B_i} and min{B_i} indicate the maximum and minimum values. In the dealer model 92 there is one buyer and one or more sellers. A dealer can be the buyer when that dealer gives the highest buying price. The buyer i can buy from any other dealer j who satisfies the condition

B_i(t) − B_j(t) ≥ L,

that is, B_i(t) ≥ S_j(t). The price P(t) of the deal is defined by max{B_i}. If Eq. (1) is not satisfied, the deal is not done and the market price does not change:

P(t) = P(t_prev).   (2)

When a deal is done, there are the buyer and sellers who could make the deal, and other dealers who could not participate in the deal. However, the other dealers do not look wistfully and enviously at the deal while doing nothing. As all dealers are potentially willing to participate in deals, all dealers re-establish their prices for the next deal.
To reflect this eagerness, the ith dealer's expectation a_i is introduced. The term is a character of the ith dealer. If a_i > 0, the dealer raises the price setting, and if a_i < 0, the dealer lowers the price setting. In this model we consider that each dealer has its own expectation. Hence, the values of a_i are given by uniform random numbers in a range, and the mean of {a_i} is normalized to be zero. By this operation the dealers' expectations for buying and selling are kept in equilibrium. As dealers become buyers and sellers depending on their conditions or circumstances in actual financial markets, the inside details of deals are very complicated. For a simple situation, this model assumes that all dealers have an infinitely large amount of property and that the dealers do not change their attitudes even if deals are done (that is, a buyer is always a buyer and a seller is always a seller). Also, the model is assumed to be in an invariant (steady) state. Hence, the values of a_i do not change [4]. A unique aspect of this model is that it takes into account an acquisitive nature of the buyer and sellers for the next deal. It is assumed that the buyer and sellers who could participate in a deal expect that they can participate again in the next deal, with a cheaper price for the buyer and a higher price for the sellers. The other dealers, who could not participate in the previous deal, do not have such a psychological tendency. To reflect such a tendency, the term Δ_i reflecting the acquisitive nature is introduced:

Δ_i(t) = −δ (for the buyer), δ/n (for each of the sellers), 0 (for nonparticipants of the deal),   (3)

where 0 < δ < L and n is the number of sellers in the deal. That is, the next buying price of the buyer falls by the amount δ from the current buying price, and the next selling price of each seller rises by the amount δ/n from the current selling price. The buyer and sellers have no anticipation for deals in which they cannot participate. Also, when Eq. (1) is not satisfied (that is, a deal is not done), Δ_i = 0 for all i. Note that the summation of all dealers' acquisitive-nature terms becomes zero. Based on these ideas the dealer model 92 is given by

B_i(t + 1) = B_i(t) + a_i + Δ_i(t) + c_i [P(t) − P(t_prev)],   (4)

where the term c_i characterizes the ith dealer's response to the change of the market price and t_prev indicates the time when the last deal was done [4]. However, we note that the dealer model 92 is extremely unstable and uncontrollable when c_i ≠ 0 [4,12]. Hence, the behaviour of the model has been scrutinized with changing values of c_i and a_i and the number of dealers; for details see [12]. We therefore use the model with c_i = 0 as the dealer model 92 for our work, which becomes

B_i(t + 1) = B_i(t) + a_i + Δ_i(t).   (5)

Typical behaviours in the dealer model 92

We show typical behaviours of the dealer model 92. To generate the data we take the number of dealers N = 100 and δ = 0.4. The initial values {B_i(0)} in Eq. (5) are given by uniform random numbers, and their mean is normalized to be zero. Figure 3 shows the typical behaviours of the price data P(t), where we use the price data only when a deal is done (only when Eq. (1) is satisfied). The behaviours are stable over long periods, and P(t) fluctuates around the mean value showing short-term variabilities, but it does not have trends.

Some observations and possible reasons for the lack of trends

As the mean of the dealers' expectations {a_i} is zero, the summation of the dealers' expectations is obviously zero. However, the number of dealers with a_i > 0 is not always half of the total (that is, the number of buyers is not always half). The number of buyers for Fig. 3 is 38, out of the total of 100 dealers mentioned above.
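For concreteness, the dynamics of Eqs. (1)-(5) with c_i = 0 can be simulated in a few lines. The sketch below uses the stated values N = 100 and δ = 0.4; the spread L = 1.0, the half-width of the uniform expectations a_i, and the initial prices B_i(0) are illustrative assumptions, since the text does not fix their exact values.

```python
import numpy as np

def dealer_model_92(n_dealers=100, delta=0.4, spread=1.0,
                    a_width=0.01, n_steps=50000, seed=0):
    """Minimal sketch of the deterministic threshold dealer model
    (Eqs. (1)-(5) with c_i = 0). Randomness enters only through the
    fixed parameters a_i and the initial prices B_i(0)."""
    rng = np.random.default_rng(seed)
    a = rng.uniform(-a_width, a_width, n_dealers)
    a -= a.mean()                               # mean expectation normalized to zero
    b = rng.uniform(-spread, spread, n_dealers)
    b -= b.mean()                               # illustrative initial buying prices
    deal_prices = []
    for _ in range(n_steps):
        delta_i = np.zeros(n_dealers)
        buyer = int(np.argmax(b))
        # Eq. (1): a deal is done when max{B_i} - min{B_i} >= L.
        if b[buyer] - b.min() >= spread:
            # Sellers j satisfy B_buyer - B_j >= L, i.e. B_buyer >= S_j.
            sellers = np.flatnonzero(b[buyer] - b >= spread)
            deal_prices.append(b[buyer])        # P(t) = max{B_i}
            delta_i[buyer] = -delta             # Eq. (3), buyer
            delta_i[sellers] = delta / sellers.size   # Eq. (3), each of n sellers
        b = b + a + delta_i                     # Eq. (5)
    return np.asarray(deal_prices)

prices = dealer_model_92()                      # price series at deal times, cf. Fig. 3
```

Recording P(t) only at deal times mirrors how Fig. 3 is plotted. Because each deal moves the buyer by −δ and the n sellers by +δ/n each, the net shift is zero; this is the exact compensation discussed in Section 3.2.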
From these observations we consider that trends do not appear when dealers' expectations for buying and selling equal out. In other words, breaking the balance of dealers' expectations of buying and selling might be useful for generating trends. Based on this idea, we make attempts to generate trends under some well-defined situations in the next section.

Exploration of origins of trends

In this section we make attempts to generate two types of trends. One is trends with surmise, which we consider to correspond to trends reflecting fundamentals. The other is trends with no surmise, which we consider to correspond to trends generated almost spontaneously. After generating both types of trends, we unite both ideas into one model and generate data using it.

Trends with surmise: trends reflecting fundamentals

We show here that we are able to generate monotonic increase or decrease trends at our disposal. It is widely believed that the major trends in the medium and long term are generated by fundamentals. However, the correspondence between the types of fundamentals and the types of trends is basically unknown. We therefore avoid overinterpreting this relationship and do not touch on the problem. We simply focus on the problem of how we can generate monotonic increase and decrease trends at our disposal. As Eqs. (3) and (5) show, when a deal is done, the buyer brings down the asking price for the next deal, expecting that it might be possible to buy at a cheaper price. Similarly, when a deal is done, the sellers bring up their selling prices for the next deal, expecting that it might be possible to sell at a higher price. How does this acquisitiveness change in the presence of trends in a market? When a monotonic increase (or decrease) trend is expected in a market, which corresponds to a self-fulfilling expectation, it seems plausible that a dealer, whether a buyer or a seller, considers that he or she is not likely to be able to take the same attitude in the next deal when offering a higher (or cheaper) price. Based on this idea, Eq. (3) is redefined using the small values ε_b and ε_s as follows:

Δ_i(t) = −δ(1 + ε_b) (for the buyer), (δ/n)(1 + ε_s) (for each of the sellers), 0 (for nonparticipants of the deal),   (6)

where 0 < δ < L and n is the number of sellers in the deal, as in Eq. (3). For the newly introduced parameters, −1 ≤ ε_b ≤ 0 and ε_s ≥ 0 when a monotonic increase trend is expected, and 0 ≤ ε_b ≤ 1 and ε_s ≤ 0 when a monotonic decrease trend is expected.

Figure 4: (Colour online) Monotonic increase and decrease trends generated by the dealer model 92. ε_b = −0.002 for a monotonic increase trend, ε_b = 0.002 for a monotonic decrease trend, and ε_s = 0 for both cases. The data with no trend are the same as those shown in Fig. 3(a), shown here for comparison. All other conditions are the same as in Fig. 3.

Figure 4 shows the data generated by the dealer model 92 with ε_b = ±0.002 and ε_s = 0, which exhibit a monotonic increase trend for ε_b = −0.002 and a monotonic decrease trend for ε_b = 0.002. We have also confirmed that the dealer model 92 with (ε_b, ε_s) = (0, 0.002) shows similar behaviours. Although the values of the parameters are small, this tiny ulterior motive generates major trends. The results indicate that, even if the change in the psychological tendency of a dealer caused by fundamentals is very small, it can have a major influence on the market forces. When we use larger values of ε_b or ε_s, or when we use the pair of ε_b and ε_s together, we have confirmed steeper trends.
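In code, the surmise of Eq. (6) amounts to rescaling the two participant adjustments in the simulation sketch of Section 2; a minimal helper following the form of Eq. (6) is:

```python
import numpy as np

def acquisitive_terms_eq6(buyer, sellers, n_dealers, delta,
                          eps_b=0.0, eps_s=0.0):
    """Eq. (6): participants' adjustments biased by the small surmise
    parameters. With eps_b in [-1, 0] and eps_s >= 0 (a rising market
    expected), the net drift per deal is delta * (eps_s - eps_b) >= 0."""
    delta_i = np.zeros(n_dealers)
    delta_i[buyer] = -delta * (1.0 + eps_b)              # buyer falls slightly less (or more)
    delta_i[sellers] = delta * (1.0 + eps_s) / sellers.size  # each seller rises slightly more (or less)
    return delta_i
```

Substituting this helper for the two Eq. (3) assignments in the earlier loop and setting eps_b = -0.002 injects a small net drift of δ(ε_s − ε_b) per deal, which is the mechanism behind the monotonic trends of Fig. 4.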
Trends with no surmise: unpremeditated trends

In this section we consider possible (or unknown) underlying origins of trends without the influence of fundamentals and technical analyses on financial markets. Let us carefully examine an important feature of the dealer model 92. Generally speaking, dealers in actual financial markets cannot know the number of sellers in a deal in advance. However, the number of sellers n is included in Eq. (3) to reflect the acquisitive nature of the dealers. That is, the dealer model 92 contains a parameter which is impossible to know in advance in actual deals. The number of sellers n plays a crucial role in the dealer model 92. As Eq. (3) shows, when a deal is done the buyer's price falls by δ and each of the n sellers' prices rises by δ/n. In other words, the total amount of the rises of the sellers' prices, n × δ/n, exactly compensates for the decline of the buyer's price, δ. Because of this exact compensation effect, we consider that the data generated by the dealer model 92 have only short-term variabilities and do not have trends. It is unlikely and unnatural for sellers to know the precise number of sellers in a deal in advance (no dealer can actually know it). However, it is possible to know it for past deals. Hence, we investigate the behaviour of the number of sellers for the data shown in Fig. 3. Figure 5 shows that the number of sellers varies greatly at each deal, where the minimum is 1 and the maximum is 14. It does not seem easy to predict the next number of sellers using the past data. In such cases each seller would have his or her own idea of the price rise. As a result, the expectations for buying and selling would not always equal out. As a simple situation, we use the average sellers number μ_n(t) instead of the precise sellers number n in Eq. (3), where μ_n(t) is calculated using all of the sellers numbers prior to time t (that is, from 1 to t − 1). We note that this idea is not so irrelevant: when prediction does not seem easy, one common approach is to use the mean value of the data, namely the average number of sellers who participated in deals in the past. Based on these considerations, Eq. (3) is renewed as follows:

Δ_i(t) = −δ (for the buyer), δ/μ_n(t) (for each of the sellers), 0 (for nonparticipants of the deal),   (7)

where 0 < δ < L. Figure 6 shows the behaviours of the dealer model 92 using the average sellers number. Figure 6(a) shows that the simulated price data P(t) have short-term variabilities and exhibit medium- and long-term rising and falling behaviours, that is, trends. The difference in behaviour from the original dealer model 92 shown in Fig. 3 is striking. The overall behaviour seems to be similar to the real data shown in Fig. 1. Figure 6(b) shows that the average number of sellers converges very quickly. In contrast, Fig. 6(c), which shows the number of sellers at each deal, reveals that the number fluctuates wildly between 1 and 13. The behaviour is similar to Fig. 5(a). We emphasize that the emergence of monotonic increase and decrease trends by such a model is rarely reported. In Fig. 6 we calculate μ_n(t) using all of the sellers number data, although taking the average over all sellers numbers is not essential. Averaging over the data in a certain interval, say the last 100 sellers numbers from t − 1 to t − 100, is equally permissible for generating trends.
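The unpremeditated rule of Eq. (7) is an equally small change to the simulation sketch: the sellers divide δ by the running mean μ_n(t) instead of the true n. The helper below also keeps the (1 + ε_b) factor, so that setting eps_b ≠ 0 gives the mingled rule of Eq. (8) used in the next subsection; μ_n is averaged over all past deals, as in Fig. 6.

```python
import numpy as np

def acquisitive_terms_eq7_eq8(buyer, sellers, n_dealers, delta,
                              past_n, eps_b=0.0):
    """Eqs. (7)-(8): sellers use delta / mu_n(t), with mu_n(t) the mean
    number of sellers over all past deals (eps_b = 0 gives Eq. (7)).
    The sellers' total rise, n * delta / mu_n(t), no longer exactly
    compensates the buyer's fall, so the old balance is broken."""
    mu_n = np.mean(past_n) if past_n else float(sellers.size)  # fall back to n before the first deal
    delta_i = np.zeros(n_dealers)
    delta_i[buyer] = -delta * (1.0 + eps_b)
    delta_i[sellers] = delta / mu_n
    past_n.append(sellers.size)              # record n for future averages
    return delta_i
```

Each deal now shifts the sum of the B_i by δ(n/μ_n(t) − 1 − ε_b), a quantity that fluctuates in sign from deal to deal; this is precisely the broken balance that lets trends emerge.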
Mingled trends with surmise and no surmise: trends reflecting fundamentals and unpremeditated trends

In Subsection 3.1 we discussed an idea to generate monotonic increase and decrease trends reflecting fundamentals. In Subsection 3.2 we discussed that the emergence of trends is generated by the imbalance between the momentum of buying and selling, which is inevitable in financial markets even without the influence of fundamentals and technical analyses. Although we discussed these two cases separately to make the course of the argument clear, it is true that they are strongly interconnected and mingled with each other in real markets. In this subsection we treat the two cases on the same footing. Incorporating both ideas described in Subsections 3.1 and 3.2, we use the following improved expression for Δ_i:

Δ_i(t) = −δ(1 + ε_b) (for the buyer), δ/μ_n(t) (for each of the sellers), 0 (for nonparticipants of the deal),   (8)

where the notations of δ and n are the same as described in Eq. (3) and the notations of ε_b and μ_n(t) are the same as described in Eqs. (6) and (7). Figure 7 shows the data generated by the dealer model 92 defined by Eq. (8) using different values of ε_b. We also show the plots for the data corresponding to unpremeditated trends, defined by ε_b = 0, for comparison. By replacing the number of sellers n with the average value μ_n(t), the profiles of the trends become significantly susceptible to the actual values of ε_b. Without the replacement of n with μ_n(t), a negative ε_b generates a monotonically increasing trend and a positive ε_b generates a monotonically decreasing trend, as shown in Fig. 4. Figure 7(a) shows similar results for ε_b = ±0.031. However, the sign of ε_b does not have a simple relationship with the increasing or decreasing nature of trends in this case. For Fig. 7(b), we use a much smaller absolute value, |ε_b| = 0.0021, than that used in Fig. 7(a), |ε_b| = 0.031. In this plot, the trend produced by the positive value, ε_b = 0.0021, shows a much more prominent increase than that produced by the negative value, ε_b = −0.0021. Only by taking a slightly smaller absolute value, |ε_b| = 0.002, do we see that the negative value, ε_b = −0.002, generates a decreasing trend and the positive value, ε_b = 0.002, generates an increasing trend, as shown in Figure 7(c). Note that these values, |ε_b| = 0.002, are the same as those used in Fig. 4.

Discussion and Summary

We have investigated possible origins of trends in financial market prices using deterministic threshold models from the viewpoint of dealers' expectations for the price, where we consider financial markets as a game field of commercial activity composed of human choices and their interactions. We observed that monotonic increase and decrease trends can be generated by dealers' minuscule changes in mood, a phenomenon which corresponds to trends with surmise. We also indicated that irregular fluctuations with short-term variabilities and long-term variabilities (trends) emerge spontaneously through a natural approach under the very normal situation that we cannot obtain precise information about deals, a phenomenon which corresponds to trends with no surmise. The interesting point is that this result indicates the possibility that the emergence of trends is all but inevitable in realistic situations, even if there is no influence from fundamentals and technical analyses in financial markets. When we use both ideas to generate trends with surmise and no surmise at the same time, we observe two distinctive aspects.
One is that dealers' minuscule changes in mood make a large difference in the behaviours. The other is that there are cases where the behaviours of price changes are different from, even opposite to, dealers' expectations of the price changes. Both aspects indicate that the data cannot always tell us the underlying dealers' expectations. In other words, the behaviours do not always accord with dealers' expectations over the long term. We consider this a very intriguing result.

[Figure 7 caption (fragment): ..., and (c) show the data generated with ε_b = 0. Note that the plot of ω originally corresponds to a monotonic increase trend, that of λ to a monotonic decrease trend, and that of γ to unpremeditated trends. All other conditions are the same as in Fig. 3.]

From the behaviours or movements of a price change, each dealer infers his or her specious justification of the cause of the change and offers future prospects to others in the next deal. Once the price in the market moves against their expectations, however, dealers doubt their justifications and may take a counter-reaction to their original expectation. Such counter-reactions of the dealers may cause excess volatility. We understand that there should be origins of trends other than the ones described in this paper. The point is, however, that dealing tactics between buyers and sellers in actual financial markets should be more ingenious than those in the dealer model 92. We consider, therefore, that the knowledge gained from the ideas pursued in this paper contains an essential ingredient and is universal. As no dealer has access to complete information about deals in actual financial markets, dealers have their own expectations. As these expectations are based on incomplete information, they will be rough and flimsy. The results obtained in this paper indicate that such expectations cause the emergence of trends. In the real world we cannot always obtain complete information, and it is difficult to predict behaviours. This fact, together with our results, implies that the emergence of trends is all but inevitable in financial markets. Hence, we consider that the results are insightful and that the knowledge is applicable to many situations. In actual markets we often find price changes with very similar behaviours, even though the brands are different. Our results indicate that some of these trends might be unpremeditated. Although the trends emerge spontaneously, why are the behaviours similar? What kinds of interactions are there between the brands? Those will be very interesting questions.
Oxysterols Profile in Zebrafish Embryos Exposed to Triclocarban and Propylparaben: A Preliminary Study

Oxysterols have long been considered simple by-products of cholesterol metabolism, but they are now fully designated as bioactive lipids that exert their multiple effects through binding to several receptors, representing endogenous mediators potentially involved in several metabolic diseases. There is also growing concern that metabolic disorders may be linked with exposure to endocrine-disrupting chemicals (EDCs). To date, no studies have aimed to link EDC exposure to oxysterol perturbation, either in vivo or in vitro. The present research aimed to evaluate the differences in oxysterol levels following exposure to two metabolism-disrupting chemicals (propylparaben (PP) and triclocarban (TCC)) in the zebrafish model using liquid chromatography coupled with tandem mass spectrometry (LC-MS/MS). Following exposure to PP and TCC, there were no significant changes in total and individual oxysterols compared with the control group; however, some interesting differences were noticed: 24-OH was detected only in treated zebrafish embryos, and the concentrations of 27-OH followed a different distribution, with an increase in TCC-treated embryos and a reduction in zebrafish embryos exposed to PP at 24 h post-fertilization (hpf). The results of the present study prompt the hypothesis that EDCs can modulate the oxysterol profile in the zebrafish model and that these variations could be potentially involved in the toxicity mechanism of these emerging contaminants.

Introduction

Over recent years, many environmental chemicals, including additives in the manufacture of plastic materials (e.g., bisphenol A (BPA)) and personal care products (e.g., parabens and triclocarban (TCC)), have been shown to possess the ability to interfere with hormone action [1]. Such endocrine-disrupting chemicals (EDCs) may act by interacting with all key regulatory steps of hormone systems, from hormone synthesis by the endocrine gland to the responses of hormone-responsive cells [2]. In addition to developmental and reproductive effects, there is also growing concern that metabolic disorders may be linked to EDCs [3]. Such EDCs have been defined as environmental obesogens [4]. However, because adverse effects of EDCs may also lead to other metabolic diseases, such as nonalcoholic fatty liver disease (NAFLD), hyperlipidemia, and type 2 diabetes, this subclass would be better referred to as metabolic disruptors [3][4][5]. Metabolic disruption may result from different types of activity, including metabolic perturbations [3], interactions with specialized metabolic receptors [6], and xenosensor activation. These proteins are specialized in sensing the chemical environment and are typically involved in the activation of detoxification processes [7]. In addition, EDCs could also influence metabolic homeostasis through epigenetic modifications and mitochondrial dysfunction [8]. Due to the complexity of ED-associated mechanisms and pathways, the identification of the molecular initiating events behind the metabolic phenotype effects of EDCs is a challenging task. Over the years, numerous endogenous mediators have been investigated to better understand metabolic syndromes. Against this background, bioactive lipids such as bile acids, endocannabinoids, ceramides, and oxysterols seem to play a key role [9,10].
In particular, oxysterols have gained importance, being considered more than by-products of cholesterol metabolism. They can derive from non-enzymatic processes, including cholesterol autooxidation mediated by reactive oxygen species, or from enzymatic mechanisms, as intermediates in the formation of bile acids and steroid hormones [9][10][11][12]. They are considered bioactive lipids, exerting their pleiotropic effects through multiple receptors, such as nuclear receptors (LXRs, ERs, RORs, glucocorticoid receptors, GRs) and G-protein-coupled receptors (GPCRs), and also targeting regulatory proteins [13]. Variations in oxysterol levels have been reported in patients affected by metabolic syndromes [12], raising interest in the possibility of using oxysterols as markers of pathological conditions. For example, Samadi et al. (2018) reported that high levels of some types of oxysterols could positively correlate with coronary risk factors in patients affected by type 1 and 2 diabetes, giving information about oxidative stress [14]. In addition, Alkazemi et al. (2008) examined the relationship between the serum levels of oxysterols (7α-hydroxycholesterol, 7β-hydroxycholesterol, and 7-ketocholesterol) and the presence of metabolic alterations, reflected in obesity, insulin resistance, and an increase in oxidative stress [15]. Since oxysterol levels reflect the status of metabolism, they are expected to be altered after exposure to environmental toxicants; here, we tested two well-known metabolic disrupters, propylparaben (PP) and triclocarban (TCC). These have recently emerged as chemicals able to disrupt yolk sac consumption and lipid metabolism in zebrafish early-life stages [16,17]. PP is the propyl ester of 4-hydroxybenzoic acid, and it is used as a preservative in personal care products and food. Its wide use is related to its antifungal and antimicrobial properties [18]. The concern about PP derives from its estrogenic properties and the evidence reported by several epidemiology studies. PP has been detected in human urine and in matrices such as breast milk, cord blood and placenta, seminal plasma, adipose tissue, and also breast cancer tissue [19]. PP is an emerging contaminant, and its presence has been reported in wastewaters at concentrations of 20,000 ng/L [20], in freshwater at 3142 ng/L, and also in bottled drinking water at 23 ng/L [21]. Negative effects on aquatic species have also been reported [22,23]. The detrimental effects mediated by PP are due to its ability to influence lipid metabolism [16], and in particular fatty acid metabolism [24], in line with its reported estrogen-like properties [25]. Triclocarban (TCC) is a diphenyl urea; it is an antimicrobial agent added to many personal care products [26]. It is one of the contaminants of emerging concern, characterized by low solubility, which explains its tendency to persist in the environment (in surface waters and sediments) and to bioaccumulate in aquatic organisms [27]. It has been detected in human plasma, urine, umbilical cord blood, and milk, raising concerns about prenatal exposure of the developing fetus [28]. An influence on thyroid hormone function [29] and weak estrogenic effects [30] have also been reported.
New and improved tools are needed to increase the quality, efficiency, and effectiveness of existing methods to evaluate the effects of metabolic disrupting chemicals, and zebrafish (Danio rerio) is increasingly used as an animal model to study the effects of EDCs, including those acting on metabolic homeostasis [31]. Due to its advantages (e.g., small size, short generation time, high fecundity, and rapid ex utero development of optically transparent embryos), different metabolic syndromes, such as hyperglycemia, obesity, diabetes, and hypertriglyceridemia, have been successfully studied using zebrafish [32][33][34]. Moreover, different oxysterols have been investigated in zebrafish as LXR activators, and these mediators have also been related to neurodevelopment and immune function [35][36][37].

Zebrafish Maintenance and Egg Collection

Adult zebrafish (wild-type AB strain) were bred in the University of Teramo facility (code 041TE294). Adults were kept in 3.5 L ZebTec tanks (Tecniplast S.p.a., Buguggiate, Italy) in a recirculating aquatic system. The temperature was maintained at 28 °C, the pH at 7 ± 0.2, the conductivity at 500 ± 100 µS cm⁻¹, and dissolved O2 at 6.1 mg L⁻¹. The photoperiod was 14 h light-10 h dark. Chemical parameters were kept as follows: ammonia 0.02 mg L⁻¹, nitrite 0.02 mg L⁻¹, nitrate 21.3 mg L⁻¹. Animals were fed twice a day with live food (Artemia salina) supplemented with Zebrafeed 400-600 (Sparos, Olhão, Portugal). The afternoon before spawning, ten groups of females and males (1:1) were introduced into 1.7 L breeding tanks (beach style design, Tecniplast S.p.a.). Immediately after spawning, which was initiated by morning light, fertilized eggs were collected with a sieve and rinsed thoroughly with deionized water and DW. Newly fertilized eggs were collected immediately after spawning and placed in groups of approximately 100 per Petri dish within a light- and temperature-controlled incubator until 2-3 hpf. Non-fertilized eggs and embryos with injuries were eliminated.

Zebrafish Embryo Exposure

Stock solutions of each compound covering the tested concentration range were prepared in DMSO and stored at −20 °C. PP and TCC were tested at two concentrations: a toxicological concentration and an environmentally relevant concentration of human exposure. The toxicological concentrations were chosen to have effects on yolk sac resorption of zebrafish early-life stages, namely TCC 50 µg/L and PP 1000 µg/L [16,39], while the environmentally relevant concentrations included levels of exposure detected in the urine of pregnant women, namely TCC 5 µg/L and PP µg/L [40]. Final concentrations of DMSO were 0.01% and 0.1% for TCC and PP, respectively. At 2-3 h post-fertilization (hpf), embryos were examined under a dissecting microscope, and only the embryos that had developed normally and reached the blastula stage were selected for subsequent experiments. Afterward, embryos (4-16-cell stage) were transferred to glass beakers (diameter 115 mm, capacity 1000 mL), with 250 embryos in 400 mL of test solution. To prevent evaporation, the glass dishes were covered with self-adhesive transparent foil (SealPlate by EXCEL Scientific, Dunn, Asbach, Germany). Embryos were exposed for 24 hpf in the incubator at 27 ± 1 °C under photoperiod (14 h light-10 h dark) conditions. Oxysterol quantification was performed on viable embryos at 8-9 hpf (gastrula period) and 24 hpf (pharyngula period). Three replicates for each concentration were used.
Prior to being used for the analytical procedure, zebrafish embryos were removed from the working solution with chemicals, washed two times with DW, and frozen at −80 °C.

Quantification of Oxysterol Levels

For the quantitative analysis of oxysterols, zebrafish samples were analyzed with a Nexera LC20AD chromatographic system (Shimadzu, Kyoto, Japan) coupled with a Qtrap 4500 mass spectrometer (Sciex, Toronto, ON, Canada), and the analyses were performed according to Fanti et al. (2020) [41]. Briefly, the samples were sonicated in water and then vortexed; internal standard solution was added to 300 µL of sample to a final concentration of 10 ng mL⁻¹. The extraction was then performed in three steps by adding the extraction solvents, CHCl3, MeOH, and H2O, vortexing and mixing by means of an orbital shaker after each addition; finally, the mixture was centrifuged, and the CHCl3 portion was dried under a flow of N2 and resuspended in an ACN/MeOH/H2O solution. The extract was processed by micro solid phase extraction (µSPE), using C18 OMIX tips, providing at the same time a suitable clean-up and a four-fold sample enrichment. The final extract was then analyzed by HPLC-MS/MS in multiple reaction monitoring (MRM) acquisition mode, with electrospray ionization (ESI), providing a sensitive and accurate quantitation of the analytes, in addition to an unambiguous identification of the different isomeric forms.

Statistical Analysis

Data were assessed for normality by means of the Shapiro-Wilk test. As they were not normally distributed even after log and square-root transformation, the Kruskal-Wallis test was applied. Differences were considered statistically significant at p < 0.05. SPSS® 14.0.2 (SPSS Inc., Chicago, IL, USA) was used as the statistical package.

Results

Results of total oxysterol concentrations at 8 hpf and 24 hpf in the PP and TCC treated groups and in the control group are reported in Figure 1 and Table S1. Zebrafish embryos exposed to PP and TCC did not show any significant change in total oxysterol amounts in comparison with the CTRL groups. At 8 hpf, the total oxysterol concentration (mean ± SD of 22.73 ± 11.80 ng/mL) in zebrafish embryos exposed to PP 1000 µg/L was lower compared with the CTRL group (41.32 ± 7.08 ng/mL). The PP and TCC patterns of production of the individual oxysterols are shown in Figures 2 and 3, and the mean values and SD are reported in Tables S2 and S3. 20-OH was below the quantification limit at both time points and at both concentrations. In zebrafish embryos exposed to DMSO 0.1%, six different oxysterols (22-OH, 25-OH, 24-OH, 27-OH, 7a-OH, and 7b-OH) were detected at 8 hpf and 24 hpf. In PP treated embryos, the concentrations of the individual oxysterols did not show significant changes compared with the control group (Figure 2). At 8 hpf, 22-OH and 25-OH were not detectable in zebrafish embryos exposed to either concentration of PP. The levels of 7a-OH and 7b-OH were reduced and the concentration of 27-OH increased in PP treated embryos compared with CTRL. At 24 hpf in zebrafish embryos exposed to PP, 22-OH and 25-OH were not detectable, whereas the concentration of 24-OH was higher compared with the CTRL.
The levels of 7a-OH and 7b-OH in 24 hpf PP treated embryos were similar to those observed in the CTRL group, except for the concentration of 27-OH. Zebrafish embryos exposed to TCC did not show significant changes in individual oxysterol levels compared with CTRL (Figure 3). At 24 hpf, 27-OH showed an increase in TCC treated embryos compared with CTRL, and 24-OH was also detectable. The concentrations of 7a-OH and 7b-OH increased in zebrafish embryos exposed to TCC 5 µg/L compared with CTRL. At 8 hpf in zebrafish embryos exposed to TCC 50 µg/L, the level of 24-OH was higher compared with the CTRL group, and 22-OH was also detectable. In embryos exposed to TCC, the concentrations of 25-OH were lower compared with the CTRL group, whereas 27-OH, 7a-OH, and 7b-OH increased at TCC 5 µg/L.

Discussion

Studies conducted on mouse models demonstrated that maternal exposure to realistic concentrations of TCC causes an increase in offspring body weight, alterations in lipid metabolism, and bioaccumulation of TCC-related compounds in organs such as the brain, fat, muscles, and heart [42]. This hypothesis was also studied in the zebrafish model, in which TCC exposure induced an increase in the neutral lipid content of the zebrafish larval yolk [39]. The same alteration, in addition to a decrease in PLA2 activity, was also observed in zebrafish larvae treated with sublethal PP concentrations [16]. In agreement with these results, Bereketoglu and Pradhan (2019) showed that the apolipoprotein genes involved in fatty acid transport (apoab, apoeb, apoa4) and fatty acid synthesis (fasn) are downregulated in zebrafish larvae treated with PP [24]. The role of oxysterols as markers of the toxicity of endocrine-disrupting chemicals is still unexplored. Several EDCs can interact with CYP enzymes as well as induce oxidative stress [43,44], leading to an increase and/or decrease in oxysterol concentrations in exposed organisms. PP and TCC are emerging endocrine disruptors with potential estrogenic activity, and, as reported in the present study, the modulation of 27-OH concentrations following exposure to them in the zebrafish model could represent an additional mechanism of estrogenic activity of these emerging contaminants. Estrogen receptors, on the other hand, are involved in insulin signaling and several metabolic processes [46]. Interestingly, the modulation of 27-OH depends on the tested chemical, on the concentration, and on the evaluation time. At 24 hpf, zebrafish embryos exposed to both concentrations of TCC showed increased levels of 27-OH compared with the control group; on the contrary, no modifications occurred in PP treated embryos. Plasma levels of 27-OH were increased in hypercholesterolemic patients and in people with nonalcoholic fatty liver disease compared with healthy controls [47].
In the zebrafish model, 27-OH is synthesized by CYP27A1, and this enzyme is expressed from 24 hpf to 96 hpf [48]. The increased levels of 27-OH at 24 hpf compared with 8 hpf, both in treated and untreated zebrafish embryos, could be related to the activation of CYP27A1 enzymatic activity. Although the liver is the most studied organ responsible for oxysterol generation, oxysterols are also produced in other organs owing to the expression of CYP enzymes in numerous tissues [9]. In the brain, the main cholesterol oxidation reaction, which is catalyzed by the brain-specific cholesterol 24-hydroxylase (CYP46A1), leads to the production of 24-OH, which is exported from the brain to the systemic circulation [49]. 24-OH is a positive allosteric modulator of N-methyl-d-aspartate receptors (NMDARs) [50], and it is potentially involved in neurological disorders including Alzheimer's disease, Smith-Lemli-Opitz syndrome, Parkinson's disease, multiple sclerosis, Huntington's disease, amyotrophic lateral sclerosis, Niemann-Pick C disease, and autism spectrum disorders [47–51]. Interestingly, in treated embryos at 24 hpf, the levels of 24-OH were higher compared with the control group. Both parabens and triclocarban have shown neurotoxicity in in vitro and in vivo studies [52,53], and the potential role of 24-OH in EDC neurotoxicity needs further investigation.

25-OH, 7a-OH, and 7b-OH are produced both enzymatically, through CH25H, CYP7A1, and 11bHSD1, respectively, and non-enzymatically, through ROS production in tissues [9]. 25-OH is a known ligand of the nuclear receptors LXRs and, in addition to playing an important role in multiple metabolic pathways, it is also involved in the inflammation process [36]. In fact, the interferon-independent antiviral role of 25-OH has been extended to non-mammalian species, including teleost fish, where viral replication was also negatively affected by 25-OH administration to the zebrafish cell line ZF4 [37]. However, the toxicity of 25-OH was recently evaluated in the zebrafish model, where exposure to this oxysterol impaired neuromuscular development, survival, and behavior, probably due to uncontrolled inflammatory responses in the treated organisms [36]. On the basis of the results obtained here, PP exposure reduced the level of 25-OH at both 8 and 24 hpf, and this reduction could lead to an increased susceptibility to infections in treated embryos. 7a-OH and 7b-OH were increased after exposure to 5 µg/L of TCC at both 8 and 24 hpf. The increased level of 7b-OH could be related to the production of ROS induced by TCC exposure. In fact, the enzyme (11bHSD1) responsible for the synthesis of this oxysterol is expressed starting from 120–144 hpf [48]. Moreover, the exposure of zebrafish embryos to TCC can activate oxidative stress, induce total antioxidant capacity expression and lipid peroxidation, and increase the activities of superoxide dismutase and other antioxidant enzymes to resist oxidative damage [54].

Conclusions

In conclusion, the present study demonstrated for the first time that exposure to emerging endocrine disruptors such as PP and TCC modulated the oxysterol concentrations in zebrafish embryos at 8 and 24 hpf. These results prompt the hypothesis that EDCs can modulate the oxysterol profile in the zebrafish model and that these variations could be potentially involved in the toxicity mechanism of these emerging contaminants.
The analytical method used for the determination of oxysterols could also be applied to study the effects of other endocrine disruptors on the oxysterol profile of in vivo models. Further studies are needed to evaluate the consequences of oxysterol variations at the biological level and the potential role of these molecules as toxicological biomarkers.

Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/ijerph19031264/s1. Table S1: Total oxysterol mean values ± SD at 8 and 24 hpf after the PP and TCC treatments. Table S2: Oxysterol mean values ± SD in zebrafish embryos treated with PP and TCC at 8 hpf; the oxysterol concentrations are reported as ng/mL. Table S3: Oxysterol mean values ± SD in zebrafish embryos treated with PP and TCC at 24 hpf; the oxysterol concentrations are reported as ng/mL.
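As a minimal illustration of the statistical workflow described in the Statistical Analysis subsection (Shapiro-Wilk normality check on raw and transformed data, followed by the non-parametric Kruskal-Wallis comparison at p < 0.05), the following Python/SciPy sketch reproduces the same sequence of tests. The group names and concentration values are hypothetical placeholders introduced only for illustration, not data from this study.

```python
# Sketch of the normality check and non-parametric group comparison described
# in the Statistical Analysis subsection. All values below are placeholders.
import numpy as np
from scipy import stats

# Hypothetical total-oxysterol concentrations (ng/mL) per exposure group at 8 hpf.
groups = {
    "CTRL": np.array([41.3, 38.9, 44.1, 40.7]),
    "PP_100": np.array([35.2, 31.8, 37.4, 33.9]),
    "PP_1000": np.array([22.7, 19.5, 25.8, 21.1]),
}

# Shapiro-Wilk normality test on each group: raw, log- and square-root-transformed data.
for name, values in groups.items():
    for label, transformed in (("raw", values),
                               ("log", np.log(values)),
                               ("sqrt", np.sqrt(values))):
        w, p = stats.shapiro(transformed)
        print(f"{name:8s} {label:4s}  Shapiro-Wilk W={w:.3f}, p={p:.3f}")

# If normality is rejected even after transformation, compare the groups with
# the Kruskal-Wallis test; p < 0.05 is taken as statistically significant.
h, p_kw = stats.kruskal(*groups.values())
print(f"Kruskal-Wallis H={h:.3f}, p={p_kw:.3f}, significant={p_kw < 0.05}")
```

The Kruskal-Wallis test is used here, as in the study, because it compares groups through ranks and therefore does not require the normality assumption that the data failed to meet.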